Release 0.5.2
Added
- TensorNode outputs can now define a `post_build` function that will be
  executed after the simulation is initialized (see the TensorNode
  documentation for details).
- Added functionality for outputting summary data during the training process
  that can be viewed in TensorBoard (see the `sim.train` documentation).
- Added some examples demonstrating how to use Nengo DL in a more complicated
  task, using semantic pointers to encode/retrieve information.
- Added a `sim.training_step` variable which will track the current training
  iteration (can be used, e.g., for TensorFlow's variable learning rate
  operations).
- Users can manually create `tf.summary` ops and pass them to `sim.train`
  summaries.
- The Simulator context will now also set the default TensorFlow graph to the
  one associated with the Simulator (so any TensorFlow ops created within the
  Simulator context will automatically be added to the correct graph).
- Users can now specify a different objective for each output probe during
  training/loss calculation (see the `sim.train` documentation).
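As an illustration of what a training-step counter enables, here is a minimal pure-Python sketch of an exponentially decaying learning-rate schedule driven by the current iteration. The function name and parameters are illustrative only, not part of the Nengo DL or TensorFlow API; in practice `sim.training_step` would supply the `step` value.

```python
def decayed_learning_rate(base_rate, step, decay=0.96, decay_steps=1000):
    """Exponentially decay the learning rate as training progresses.

    Hypothetical helper: `base_rate` is the initial learning rate,
    `step` the current training iteration, and the rate is multiplied
    by `decay` once every `decay_steps` iterations (continuously).
    """
    return base_rate * decay ** (step / decay_steps)

# At step 0 the schedule returns the base rate; it shrinks smoothly
# as the training iteration counter increases.
```

This is the same shape of schedule that variable-learning-rate optimizers typically consume; any step-dependent function would work equally well.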
Changed
- Resetting the simulator now only rebuilds the necessary components in the
  graph (as opposed to rebuilding the whole graph).
- The default `"mse"` loss implementation will now automatically convert
  `np.nan` values in the target to zero error.
- If there are multiple target probes given to `sim.train`/`sim.loss`, the
  total error will now be summed across probes (instead of averaged).
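The two loss changes above can be sketched in a few lines of numpy. This is a rough illustration of the described behaviour, not the actual Nengo DL implementation (which may differ in details such as which axes are averaged over); the function names are hypothetical.

```python
import numpy as np

def mse_ignore_nan(output, target):
    # Entries where the target is nan contribute zero error, matching
    # the behaviour described for the default "mse" loss.
    err = np.where(np.isnan(target), 0.0, target - output)
    return np.mean(err ** 2)

def total_loss(outputs, targets):
    # With multiple target probes, the per-probe losses are summed
    # across probes rather than averaged.
    return sum(mse_ignore_nan(o, t) for o, t in zip(outputs, targets))
```

Zeroing the error (rather than dropping the nan entries) means masked targets still count toward the mean's denominator, which keeps the loss scale independent of how many entries are masked.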
Fixed
- `sim.data` now implements the full `collections.Mapping` interface.
- Fixed bug where signal order was non-deterministic for Networks containing
  objects with duplicate names (#9).
- Fixed bug where non-slot optimizer variables were not initialized (#11).
- Implemented a modified PES builder in order to avoid slicing encoders on
  non-decoded PES connections.
- TensorBoard output directory will be automatically created if it doesn't
  exist.
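For context on the `sim.data` fix: implementing the full Mapping interface means that once the three abstract methods are defined, the standard library supplies `keys`, `values`, `items`, `get`, `__contains__`, and equality for free. A toy read-only mapping (unrelated to Nengo DL's actual class) shows the pattern; note the abstract base class lives at `collections.abc.Mapping` in modern Python (`collections.Mapping` in older versions).

```python
from collections.abc import Mapping

class ProbeData(Mapping):
    """Toy read-only mapping over a plain dict.

    Defining __getitem__, __iter__, and __len__ is enough for the
    Mapping ABC to provide keys/values/items/get/__contains__ etc.
    """

    def __init__(self, data):
        self._data = dict(data)

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)
```

Supporting the complete interface lets `sim.data` be used anywhere a read-only dict is expected, e.g. `dict(sim.data)` or membership tests with `in`.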