Releases: hallvardnmbu/neurons

v2.4.0

26 Sep 07:34

Feedback blocks.

Thorough expansion of the feedback module.
Feedback blocks automatically handle weight coupling and skip connections.

When defining a feedback block in the network's layers, the following syntax is used:

network.feedback(
    // The comments below are assumed meanings, not verified documentation:
    vec![feedback::Layer::Convolution(
        1,                            // Filters.
        activation::Activation::ReLU, // Activation function.
        (3, 3),                       // Kernel size.
        (1, 1),                       // Stride.
        (1, 1),                       // Padding.
        None,                         // Dropout.
    )],
    2,    // Number of feedback loops.
    true, // Couple the block's weights across loops.
);

v2.3.0

19 Sep 16:33

Add support for skip connections.

Limitations:

  • Only works between layers whose outputs have equal shapes.
  • The backward pass assumes an identity mapping (gradients are simply added); see the sketch below.
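
As a minimal sketch of that second limitation (hypothetical functions, not the library's API), an identity skip connection sums values in the forward pass and sums gradients in the backward pass:

// Hypothetical sketch, not the library API: an identity skip connection
// adds the block input to the block output element-wise.
fn skip_forward(input: &[f32], branch: &[f32]) -> Vec<f32> {
    assert_eq!(input.len(), branch.len()); // Only equal shapes are supported.
    input.iter().zip(branch).map(|(x, f)| x + f).collect()
}

// Since y = f(x) + x, dL/dx = (dL/dy through the branch) + dL/dy: the
// upstream gradient is simply added to the branch gradient.
fn skip_backward(branch_gradient: &[f32], upstream: &[f32]) -> Vec<f32> {
    branch_gradient.iter().zip(upstream).map(|(b, g)| b + g).collect()
}

fn main() {
    let y = skip_forward(&[1.0, 2.0], &[0.5, 0.5]);
    let g = skip_backward(&[0.1, 0.1], &[1.0, 1.0]);
    println!("{:?} {:?}", y, g); // [1.5, 2.5] [1.1, 1.1]
}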

v2.2.0

14 Sep 13:45

Selectable scaling wrt. loopbacks.

Add the ability to select the scaling function:

  • tensor::Scale
  • feedback::Accumulation

See the implementations of the above for more information, or the conceptual sketch below.
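
As a conceptual sketch only (ScaleFn and accumulate are hypothetical stand-ins, not the tensor::Scale or feedback::Accumulation API), selectable scaling amounts to applying a chosen function to each loopback value before accumulating it:

// Hypothetical stand-in for a selectable scaling function.
type ScaleFn = fn(f32) -> f32;

// Scale each loopback value before accumulating it into the state.
fn accumulate(state: &mut [f32], loopback: &[f32], scale: ScaleFn) {
    for (s, l) in state.iter_mut().zip(loopback) {
        *s += scale(*l);
    }
}

fn main() {
    let mut state = vec![0.0; 3];
    accumulate(&mut state, &[1.0, 2.0, 3.0], |x| 0.5 * x); // Halve each value.
    println!("{:?}", state); // [0.5, 1.0, 1.5]
}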

v2.1.0

14 Sep 11:38

Maxpool tensor consistency.

  • Update the maxpool logic to ensure consistency wrt. other layers.
  • Maxpool layers now return a tensor::Tensor (of shape tensor::Shape::Quintuple) instead of nested Vecs.
  • This ensures consistency when implementing maxpool for feedback blocks; see the sketch below.
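
As a rough sketch of the idea (hypothetical types, not the library's tensor module), a single contiguous buffer with an explicit five-dimensional shape replaces the nested Vecs:

// Hypothetical sketch: one contiguous buffer plus an explicit
// five-dimensional shape, instead of Vec<Vec<...>> nesting.
struct Tensor {
    shape: (usize, usize, usize, usize, usize),
    data: Vec<f32>,
}

impl Tensor {
    // Row-major flat index for a five-dimensional coordinate.
    fn index(&self, i: (usize, usize, usize, usize, usize)) -> usize {
        let (_, b, c, d, e) = self.shape;
        (((i.0 * b + i.1) * c + i.2) * d + i.3) * e + i.4
    }
}

fn main() {
    let t = Tensor {
        shape: (1, 1, 2, 2, 2),
        data: (0..8).map(|v| v as f32).collect(),
    };
    println!("{}", t.data[t.index((0, 0, 1, 0, 1))]); // 5
}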

v2.0.5

14 Sep 10:28

Bug fixes and renaming.

  • Minor bug fixes to feedback connections.
  • Rename simple feedback connections to loopback connections for consistency.

v2.0.4

05 Sep 17:08

Add a skeleton for the feedback block structure (#9).
Correct handling of the backward pass is still missing.

Open question: how should the optimizer be handled (wrt. its buffers, etc.)?

v2.0.3

02 Sep 14:56

Improved optimizer creation.

Before:

network.set_optimizer(
    optimizer::Optimizer::AdamW(
        optimizer::AdamW {
            learning_rate: 0.001,
            beta1: 0.9,
            beta2: 0.999,
            epsilon: 1e-8,
            decay: 0.01,

            // To be filled by the network:
            momentum: vec![],
            velocity: vec![],
        }
    )
);

Now:

network.set_optimizer(optimizer::RMSprop::create(
    0.001,      // Learning rate
    0.0,        // Alpha
    1e-8,       // Epsilon
    Some(0.01), // Decay
    Some(0.01), // Momentum
    true,       // Centered
));
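
The create constructor presumably initializes the optimizer's internal state itself; note how the earlier style had to pass empty momentum and velocity buffers for the network to fill in.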

v2.0.2

29 Aug 19:19

Improved layer->layer compatibility (#26).

v2.0.1

29 Aug 10:21

Improved the optimizer step, minimizing the number of (repeated) loops.

v2.0.0

28 Aug 11:02

Fix bug in batched weight updates.