
# Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning. This changelog does not include internal changes that do not affect the user.

## [Unreleased]

## [0.3.1] - 2024-12-21

### Changed

- Improved the performance of the graph traversal function called by `backward` and `mtl_backward` to find the tensors with respect to which differentiation should be done. It now visits every node at most once (a minimal sketch of the idea follows).
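
The following is a minimal, hypothetical sketch of a traversal with that property, not torchjd's actual implementation; the `next_nodes` attribute is an assumption standing in for the real autograd graph structure.

```python
from collections import deque

def find_differentiation_targets(roots):
    """Traverse a graph from its roots, visiting every node at most once.

    Sketch only: `next_nodes` is a hypothetical attribute standing in for
    the edges of the real graph.
    """
    visited = set()
    leaves = []
    queue = deque(roots)
    while queue:
        node = queue.popleft()
        if id(node) in visited:
            continue  # skip nodes already seen: each node is processed once
        visited.add(id(node))
        children = getattr(node, "next_nodes", ())
        if children:
            queue.extend(children)
        else:
            leaves.append(node)  # a node without children is a leaf
    return leaves
```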

## [0.3.0] - 2024-12-10

### Added

- Added a default value to the `inputs` parameter of `backward`. If not provided, the inputs will default to all leaf tensors that were used to compute the `tensors` parameter. This is in line with the behavior of `torch.autograd.backward` (see the sketch after this list).
- Added a default value to the `shared_params` and to the `tasks_params` arguments of `mtl_backward`. If not provided, the `shared_params` will default to all leaf tensors that were used to compute the `features`, and the `tasks_params` will default to all leaf tensors that were used to compute each of the `losses`, excluding those used to compute the `features`.
- Note in the documentation about the incompatibility of `backward` and `mtl_backward` with tensors that retain grad.
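
A minimal sketch of the `inputs` default, assuming the `UPGrad` aggregator from `torchjd.aggregation`; the call shape is illustrative.

```python
import torch
from torchjd import backward
from torchjd.aggregation import UPGrad

param = torch.randn(3, requires_grad=True)  # leaf tensor
losses = [param.sum(), (param ** 2).sum()]

# `inputs` is omitted: it defaults to all leaf tensors used to compute
# `losses` (here, just `param`), mirroring torch.autograd.backward.
backward(losses, aggregator=UPGrad())
print(param.grad)
```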

### Changed

- BREAKING: Changed the name of the parameter `A` to `aggregator` in `backward` and `mtl_backward`.
- BREAKING: Changed the order of the parameters of `backward` and `mtl_backward` to make it possible to have a default value for `inputs` and for `shared_params` and `tasks_params`, respectively. Usages of `backward` and `mtl_backward` that rely on the order of positional arguments must be updated (see the sketches after this list).
- Switched to the PEP 735 dependency groups format in `pyproject.toml` (from a `[tool.pdm.dev-dependencies]` section to a `[dependency-groups]` section). This should only affect development dependencies.
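
An illustrative before/after for the rename and reordering; the pre-0.3.0 call in the comment is a reconstruction based on the entries above.

```python
# Before 0.3.0, the aggregator was passed as `A`:
# backward(tensors=losses, inputs=[param], A=UPGrad())

# Since 0.3.0, pass it as `aggregator`; positional calls must follow the new order:
backward(losses, aggregator=UPGrad(), inputs=[param])
```

The `pyproject.toml` migration looks roughly like this (the group contents are illustrative, not torchjd's actual development dependencies):

```toml
# Before (PDM-specific):
# [tool.pdm.dev-dependencies]
# dev = ["pytest"]

# After (PEP 735 dependency groups):
[dependency-groups]
dev = ["pytest"]
```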

### Fixed

- BREAKING: Added a check in `mtl_backward` to ensure that `tasks_params` and `shared_params` have no overlap. Previously, the behavior in this scenario was quite arbitrary (see the sketch below).
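
A hypothetical call showing what is now rejected (all names are illustrative):

```python
# `shared_param` appears both in `shared_params` and in one task's
# `tasks_params`; since 0.3.0 this raises an error instead of producing
# arbitrary behavior:
# mtl_backward(
#     losses=[loss1, loss2],
#     features=features,
#     aggregator=UPGrad(),
#     shared_params=[shared_param],
#     tasks_params=[[shared_param], [other_param]],  # overlap: rejected
# )
```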

## [0.2.2] - 2024-11-11

### Added

- PyTorch Lightning integration example.
- Explanation of Jacobian descent in the README.

### Fixed

- Made the dependency on `ecos` explicit in `pyproject.toml` (before `cvxpy` 1.6.0, it was installed automatically when installing `cvxpy`).

## [0.2.1] - 2024-09-17

### Changed

- Removed the upper cap on the `numpy` version in the dependencies. This makes `torchjd` compatible with the most recent `numpy` versions too.

### Fixed

- Prevented `IMTLG` from dividing by zero during its weight rescaling step. If the input matrix consists only of zeros, it will now return a vector of zeros instead of a vector of `nan` (see the sketch below).
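
A sketch of the fixed behavior, assuming `IMTLG` is called on a matrix like other torchjd aggregators:

```python
import torch
from torchjd.aggregation import IMTLG

aggregator = IMTLG()
zero_matrix = torch.zeros(2, 5)  # e.g. a Jacobian whose rows are all zero

# Since 0.2.1, this yields a vector of zeros rather than a vector of nan.
print(aggregator(zero_matrix))
```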

## [0.2.0] - 2024-09-05

### Added

- `autojac` package containing the backward pass functions and their dependencies.
- `mtl_backward` function to make a backward pass for multi-task learning (see the sketch after this list).
- Multi-task learning example.
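
A minimal multi-task sketch; for readability it uses the post-0.3.0 keyword names from the entries above, and the architecture is illustrative.

```python
import torch
from torchjd import mtl_backward
from torchjd.aggregation import UPGrad

shared = torch.nn.Linear(4, 2)  # shared feature extractor
head1 = torch.nn.Linear(2, 1)   # task 1 head
head2 = torch.nn.Linear(2, 1)   # task 2 head

x = torch.randn(8, 4)
features = shared(x)
loss1 = head1(features).mean()
loss2 = head2(features).mean()

# Aggregates the per-task gradients w.r.t. the shared parameters and fills
# the .grad fields of both the shared and the task-specific parameters.
mtl_backward(
    losses=[loss1, loss2],
    features=features,
    aggregator=UPGrad(),
    shared_params=list(shared.parameters()),
    tasks_params=[list(head1.parameters()), list(head2.parameters())],
)
```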

### Changed

- BREAKING: Moved the `backward` module to the `autojac` package. Some imports may have to be adapted.
- Improved documentation of `backward`.

### Fixed

- Fixed wrong tensor device with `IMTLG` in some rare cases.
- BREAKING: Removed the possibility of populating the `.grad` field of a tensor that does not expect it when calling `backward`. If an input `t` provided to `backward` does not satisfy `t.requires_grad and (t.is_leaf or t.retains_grad)`, an error is now raised.
- BREAKING: When using `backward`, aggregations are now accumulated into the `.grad` fields of the inputs rather than replacing those fields if they already existed. This is in line with the behavior of `torch.autograd.backward` (see the sketch below).
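
A sketch of the accumulation semantics, using the post-0.3.0 keyword names; the call shape is illustrative.

```python
import torch
from torchjd import backward
from torchjd.aggregation import UPGrad

param = torch.randn(3, requires_grad=True)

backward([param.sum(), (param ** 2).sum()], aggregator=UPGrad())
first = param.grad.clone()

# A second call accumulates into param.grad instead of overwriting it,
# just like torch.autograd.backward.
backward([param.sum(), (param ** 2).sum()], aggregator=UPGrad())
print(torch.allclose(param.grad, 2 * first))  # True: gradients were summed
```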

## [0.1.0] - 2024-06-22

### Added