Releases · hallvardnmbu/neurons
v2.4.0
Feedback blocks.
Thorough expansion of the feedback module.
Feedback blocks automatically handle weight coupling and skip connections.
When defining a feedback block in the network's layers, the following syntax is used:
```rust
network.feedback(
    vec![feedback::Layer::Convolution(
        1,
        activation::Activation::ReLU,
        (3, 3),
        (1, 1),
        (1, 1),
        None,
    )],
    2,
    true,
);
```
v2.3.0
Add the possibility of skip connections.
Limitations:
- Only works between tensors of equal shape.
- The backward pass assumes an identity mapping: gradients are simply added (see the sketch below).
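For illustration only, a minimal sketch of such a skip connection over flat `f32` buffers; this is not the library's code, just the semantics described above: the forward pass adds the input to the block's output (hence the equal-shape requirement), and the backward pass adds the upstream gradient straight through.

```rust
/// Identity skip connection over flat buffers (illustrative only).
/// Forward: out = f(x) + x, which requires f(x) and x to have equal shapes.
fn skip_forward(fx: &[f32], x: &[f32]) -> Vec<f32> {
    assert_eq!(fx.len(), x.len(), "skip connections require equal shapes");
    fx.iter().zip(x).map(|(a, b)| a + b).collect()
}

/// Backward: with an identity mapping d(out)/dx = I, so the upstream
/// gradient is simply added to the gradient flowing through the block.
fn skip_backward(grad_out: &[f32], grad_f: &mut [f32]) {
    for (g, go) in grad_f.iter_mut().zip(grad_out) {
        *g += go;
    }
}
```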
v2.2.0
Selectable scaling wrt. loopbacks.
Add the possibility of selecting the scaling function:
- `tensor::Scale`
- `feedback::Accumulation`

See the implementations of the above for more information.
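The definitions of `tensor::Scale` and `feedback::Accumulation` are not shown in these notes; as a purely hypothetical sketch, an enum-based, selectable scaling function can look like:

```rust
/// Hypothetical stand-in for a selectable scaling function; the real
/// `tensor::Scale` and `feedback::Accumulation` types may differ.
enum Scale {
    /// Leave the accumulated loopback values untouched.
    Identity,
    /// Average over the number of accumulated passes.
    Mean(usize),
}

impl Scale {
    fn apply(&self, values: &mut [f32]) {
        match self {
            Scale::Identity => {}
            Scale::Mean(passes) => {
                let n = *passes as f32;
                for v in values.iter_mut() {
                    *v /= n;
                }
            }
        }
    }
}
```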
v2.1.0
Maxpool tensor consistency.
- Update maxpool logic to ensure consistency wrt. other layers.
- Maxpool layers now return a `tensor::Tensor` (of shape `tensor::Shape::Quintuple`) instead of nested `Vec`s.
- This will lead to consistency when implementing maxpool for `feedback` blocks.
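As a hypothetical illustration only (these notes do not show the actual fields of `tensor::Shape::Quintuple` or `tensor::Tensor`), a five-dimensional shape tag over flat storage replaces nested `Vec`s:

```rust
/// Hypothetical shape tag; the real `tensor::Shape` may differ.
#[derive(Clone, Copy)]
enum Shape {
    /// Five-dimensional shape; which five dimensions is an assumption here.
    Quintuple(usize, usize, usize, usize, usize),
}

/// Hypothetical tensor: flat storage plus a shape tag instead of nested `Vec`s.
struct Tensor {
    data: Vec<f32>,
    shape: Shape,
}

fn main() {
    let pooled = Tensor {
        data: vec![0.0; 2 * 3 * 1 * 4 * 4],
        shape: Shape::Quintuple(2, 3, 1, 4, 4),
    };
    let Shape::Quintuple(a, b, c, d, e) = pooled.shape;
    assert_eq!(pooled.data.len(), a * b * c * d * e);
}
```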
v2.0.5
Bug fixes and renaming.
- Minor bug fixes to feedback connections.
- Rename simple feedback connections to `loopback` connections for consistency.
v2.0.4
v2.0.3
Improved optimizer creation.
Before:
```rust
network.set_optimizer(
    optimizer::Optimizer::AdamW(
        optimizer::AdamW {
            learning_rate: 0.001,
            beta1: 0.9,
            beta2: 0.999,
            epsilon: 1e-8,
            decay: 0.01,

            // To be filled by the network:
            momentum: vec![],
            velocity: vec![],
        }
    )
);
```
Now:
```rust
network.set_optimizer(optimizer::RMSprop::create(
    0.001,      // Learning rate
    0.0,        // Alpha
    1e-8,       // Epsilon
    Some(0.01), // Decay
    Some(0.01), // Momentum
    true,       // Centered
));
```
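Note that the bookkeeping buffers (`momentum`, `velocity`), which previously had to be passed as empty vectors for the network to fill, no longer appear at the call site.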
v2.0.2
v2.0.1
Improved the optimizer step by minimizing the number of repeated loops.
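As a generic illustration of the idea (not the library's code), fusing per-parameter work into a single traversal avoids looping over the weights repeatedly:

```rust
/// Illustrative only: apply weight decay and the gradient step in one
/// fused pass instead of two separate loops over the parameters.
fn sgd_step_fused(weights: &mut [f32], grads: &[f32], lr: f32, decay: f32) {
    for (w, g) in weights.iter_mut().zip(grads) {
        *w -= lr * (*g + decay * *w);
    }
}
```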
v2.0.0
Fix bug in batched weight updates.