
v0.2.0

@ValerianRey released this 04 Sep 23:07 · 64 commits to main since this release · 929be29

The multi-task learning update

This version mainly introduces `mtl_backward`, enabling multi-task learning with Jacobian descent. See this new example to get started!
It also brings many improvements to the documentation, the unit tests, and the internal code structure. Lastly, it fixes a few bugs and invalid behaviors.

Changelog:

Added

  • `autojac` package containing the backward pass functions and their dependencies.
  • `mtl_backward` function to make a backward pass for multi-task learning.
  • Multi-task learning example.
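Conceptually, a multi-task backward pass with Jacobian descent computes one gradient row per task loss with respect to the shared parameters, then aggregates those rows into a single update direction. The snippet below is a plain-PyTorch sketch of that idea, not the `torchjd.mtl_backward` API; the tensors, losses, and the mean aggregator are illustrative assumptions (TorchJD provides more sophisticated aggregators).

```python
import torch

# Two task losses sharing the same parameters.
shared = torch.tensor([1.0, 2.0], requires_grad=True)
loss1 = (shared ** 2).sum()     # task 1 loss
loss2 = (3 * shared).sum()      # task 2 loss

# One gradient row per task: the Jacobian of the losses w.r.t. shared params.
rows = [
    torch.autograd.grad(loss, shared, retain_graph=True)[0]
    for loss in (loss1, loss2)
]
jacobian = torch.stack(rows)    # shape (num_tasks, num_params)

# Aggregate the rows into a single update direction (here: a plain average).
shared.grad = jacobian.mean(dim=0)
```

`mtl_backward` automates this pattern, including backpropagating each task-specific gradient through its own task head while aggregating only at the shared representation.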

Changed

  • BREAKING: Moved the `backward` module to the `autojac` package. Some imports may have to be
    adapted.
  • Improved documentation of `backward`.

Fixed

  • Fixed a wrong tensor device with IMTLG in some rare cases.
  • BREAKING: Removed the possibility of populating the `.grad` field of a tensor that does not
    expect it when calling `backward`. If an input `t` provided to `backward` does not satisfy
    `t.requires_grad and (t.is_leaf or t.retains_grad)`, an error is now raised.
  • BREAKING: When using `backward`, aggregations are now accumulated into the `.grad` fields
    of the inputs rather than replacing those fields if they already existed. This is in line with the
    behavior of `torch.autograd.backward`.
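Both breaking fixes align with standard PyTorch autograd semantics, which the plain-torch snippet below demonstrates: `.grad` is only populated for tensors satisfying `t.requires_grad and (t.is_leaf or t.retains_grad)`, and repeated backward calls accumulate into `.grad` rather than replacing it. (This uses only `torch.autograd`, not the torchjd `backward` itself.)

```python
import torch

# (1) .grad is populated only for tensors t with
#     t.requires_grad and (t.is_leaf or t.retains_grad).
x = torch.tensor(2.0, requires_grad=True)  # leaf: x.grad will be populated
y = x * 2                                  # non-leaf: .grad not kept by default
y.retain_grad()                            # opt in so y.grad is populated too
assert x.requires_grad and (x.is_leaf or x.retains_grad)
assert y.requires_grad and (y.is_leaf or y.retains_grad)

(y * 3).backward()   # y.grad = 3 (d(3y)/dy), x.grad = 6 (d(6x)/dx)

# (2) backward accumulates into existing .grad fields instead of replacing them.
(x * 4).backward()   # x.grad = 6 + 4 = 10; y.grad is unchanged
```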

Contributors