v0.2.0
The multi-task learning update
This version mainly introduces `mtl_backward`, enabling multi-task learning with Jacobian descent. See this new example to get started!
It also brings many improvements to the documentation, the unit tests, and the internal code structure. Lastly, it fixes a few bugs and incorrect behaviors.
Changelog:
Added
- `autojac` package containing the backward pass functions and their dependencies.
- `mtl_backward` function to make a backward pass for multi-task learning (see the usage sketch after this list).
- Multi-task learning example.
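A minimal sketch of how `mtl_backward` can be used, assuming a shared encoder feeding two task-specific heads. The module names, loss functions, and keyword arguments (`losses`, `features`, `aggregator`) follow the current documentation and are assumptions here; check the API reference of your installed version.

```python
import torch
from torchjd import mtl_backward
from torchjd.aggregation import UPGrad

# Shared encoder and two task-specific heads (illustrative architecture).
shared = torch.nn.Linear(10, 5)
head1 = torch.nn.Linear(5, 1)
head2 = torch.nn.Linear(5, 1)

x = torch.randn(16, 10)       # batch of 16 inputs
target1 = torch.randn(16, 1)  # targets for task 1
target2 = torch.randn(16, 1)  # targets for task 2

features = shared(x)  # shared representation used by both tasks
loss1 = torch.nn.functional.mse_loss(head1(features), target1)
loss2 = torch.nn.functional.mse_loss(head2(features), target2)

# Backward pass for multi-task learning: the per-task gradients with respect
# to the shared parameters are aggregated (here with UPGrad), and the .grad
# fields of both the shared and the task-specific parameters are filled.
mtl_backward(losses=[loss1, loss2], features=features, aggregator=UPGrad())
```

A regular optimizer step can then consume the populated `.grad` fields as usual.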
Changed
- BREAKING: Moved the `backward` module to the `autojac` package. Some imports may have to be adapted (see the import sketch after this list).
- Improved documentation of `backward`.
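A hedged illustration of the kind of import adaptation this move may require; the exact module paths below are assumptions, not the confirmed layout of the package.

```python
# Before v0.2.0 (hypothetical old path):
# from torchjd.backward import backward

# From v0.2.0 on, the module lives in the autojac package (path assumed):
from torchjd.autojac.backward import backward
```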
Fixed
- Fixed wrong tensor device with `IMTLG` in some rare cases.
- BREAKING: Removed the possibility of populating the `.grad` field of a tensor that does not expect it when calling `backward`. If an input `t` provided to `backward` does not satisfy `t.requires_grad and (t.is_leaf or t.retains_grad)`, an error is now raised.
- BREAKING: When using `backward`, aggregations are now accumulated into the `.grad` fields of the inputs rather than replacing those fields if they already existed. This is in line with the behavior of `torch.autograd.backward` (see the sketch after this list).
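A minimal sketch of the two breaking changes to `backward`, assuming the current keyword names (`aggregator` in particular, which may differ in this release): valid inputs must satisfy the new check, and aggregations now accumulate into `.grad` as `torch.autograd.backward` does.

```python
import torch
from torchjd import backward
from torchjd.aggregation import UPGrad

param = torch.randn(3, requires_grad=True)  # leaf tensor: passes the new input check

backward([(param ** 2).sum(), param.sin().sum()], aggregator=UPGrad())
first = param.grad.clone()

# Calling backward again on freshly computed losses adds the aggregation to
# .grad instead of overwriting it, so zero the gradients between steps.
backward([(param ** 2).sum(), param.sin().sum()], aggregator=UPGrad())
assert torch.allclose(param.grad, 2 * first)  # accumulated, not replaced
param.grad = None  # or optimizer.zero_grad()

# By contrast, a non-leaf tensor that does not retain grad fails the check
# t.requires_grad and (t.is_leaf or t.retains_grad) and now raises an error
# when passed as an input to backward.
```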