
Library for Jacobian descent with PyTorch. It enables the optimization of neural networks with multiple losses (e.g. multi-task learning).
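
For multi-loss training, the core workflow replaces `loss.backward()` with an aggregated Jacobian-based backward pass. A minimal sketch, assuming the `torchjd.backward` entry point and the `UPGrad` aggregator described in the project README; argument names have varied between versions, so they are passed positionally here:

```python
import torch
import torchjd
from torchjd.aggregation import UPGrad

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 10)
out1, out2 = model(x).split(1, dim=1)
loss1 = out1.pow(2).mean()        # first task's loss
loss2 = (out2 - 1).pow(2).mean()  # second task's loss

optimizer.zero_grad()
# Aggregate the per-loss gradients (rows of the Jacobian) instead of
# summing them, so that no single loss dominates the update direction.
torchjd.backward([loss1, loss2], UPGrad())
optimizer.step()
```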

Results: 11 torchjd issues (sorted by recently updated)

TODO:
- [ ] Add a special type of error for that, like `UnsupportedOnSparseError` (sketched below). We could catch that and `NotImplementedError`, but not `ValueError`, for instance.
- [ ] Prioritize...

feat
package: sparse
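
A minimal sketch of the error type proposed in this issue; the class body and the fallback helper are illustrations only, not existing torchjd code:

```python
class UnsupportedOnSparseError(RuntimeError):
    """Raised when an operation has no implementation for sparse tensors."""


def run_with_sparse_fallback(op):
    # Catch only the errors that signal a missing sparse implementation;
    # a ValueError would indicate a genuine bug and should propagate.
    try:
        return op()
    except (UnsupportedOnSparseError, NotImplementedError):
        return None  # hypothetical fallback: caller handles "no sparse support"
```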

Already making the PR to keep track of progress, and to be able to update the `dev-new-engine` branch easily when `main` is updated.
- [x] Remove explicit batched optimizations #470...

feat
package: autogram
package: sparse

Note that this does not remove tags that are no longer needed. I think it shouldn't, in case we want to add a tag manually, for instance if we change documentation...

ci
package: aggregation

Not sure what I'm doing here, but as far as I understand, the generated `vmap` rule for `JacobianAccumulator` is never used. I think (correct me if I'm wrong) that it...

refactor
package: autogram

* Add and use `JacobianBasedGramianComputerWithoutCrossTerms`
* Make `ModelAlsoUsingSubmoduleParamsDirectly` and `InterModuleParamReuse` xpass
* Fix missing type hint
* Test against no cross-terms

feat
package: autogram

I think we need some script in `.github/workflows` and a branch protection rule (a sketch follows below). It would make the `TODO` keyword very useful and safe.

ci
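
As an illustration (nothing like this exists in the repo yet), the workflow could simply run a small Python script that fails the job whenever a tracked source file still contains the `TODO` keyword; the `src/` layout is an assumption:

```python
import pathlib
import sys

# Hypothetical CI check, to be run from a workflow in .github/workflows:
# with a branch protection rule requiring this job, a TODO left in the
# code can never be merged into main by accident.
failures = []
for path in pathlib.Path("src").rglob("*.py"):
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if "TODO" in line:
            failures.append(f"{path}:{lineno}: {line.strip()}")

if failures:
    print("\n".join(failures))
    sys.exit(1)  # a nonzero exit code fails the CI job
```

In practice the script would also need an allow-list for intentional occurrences.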

Here are some thoughts on how we could support randomness in `autogram`. Consider this context:

```python
class local_rng_context:
    """
    A context manager that saves the global CPU and CUDA RNG...
```
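
A completed sketch of what such a context manager might look like, assuming it should save the global CPU and CUDA RNG state on entry and restore it on exit, optionally re-seeding locally (only the class name comes from the snippet above; the rest is an assumption):

```python
import torch


class local_rng_context:
    """
    A context manager that saves the global CPU and CUDA RNG state on entry,
    optionally re-seeds locally, and restores the saved state on exit.
    """

    def __init__(self, seed: int | None = None):
        self.seed = seed  # hypothetical parameter: local seed for the block

    def __enter__(self):
        # Save the global CPU RNG state and the state of every CUDA device.
        self.cpu_state = torch.random.get_rng_state()
        self.cuda_states = (
            torch.cuda.get_rng_state_all() if torch.cuda.is_available() else []
        )
        if self.seed is not None:
            torch.manual_seed(self.seed)  # also seeds all CUDA devices
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Restore the saved state so code outside the block is unaffected
        # by any random draws made inside it.
        torch.random.set_rng_state(self.cpu_state)
        if self.cuda_states:
            torch.cuda.set_rng_state_all(self.cuda_states)
        return False
```

This is close to what `torch.random.fork_rng` already provides, so reusing that utility might be preferable.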

Hi folks, it's me again. I noticed a few months back that TorchJD broke when trying to input sparse matrices into the custom JD backward pass, due to an incompatibility...

Thanks for your solid work! Since `jax` is widely used, will this repo offer a `jax` version?