Release 3.3.0
Compatible with Nengo 3.0.0
Compatible with TensorFlow 2.2.0 - 2.3.0
Added
- Added support for the new Nengo core `NeuronType` state implementation. (#159)
- Compatible with TensorFlow 2.3.0. (#159)
- Added support for the `nengo.Tanh`, `nengo.RegularSpiking`, `nengo.StochasticSpiking`, and `nengo.PoissonSpiking` neuron types. (#159)
- Added the `nengo_dl.configure_settings(learning_phase=True/False)` configuration option. This mimics the previous behaviour of `tf.keras.backend.learning_phase_scope` (which was deprecated by TensorFlow). That is, if you would like to override the default behaviour so that, e.g., `sim.predict` runs in training mode, set `nengo_dl.configure_settings(learning_phase=True)`. (#163)
Changed
- `Simulator.evaluate` no longer prints any information to stdout in TensorFlow 2.2 in graph mode (due to a TensorFlow issue, see tensorflow/tensorflow#39456). Loss/metric values will still be returned from the function as normal. (#153)
- A warning will now be raised if activation types are passed to `Converter.swap_activations` that aren't actually in the model. (#168)
- Updated TensorFlow installation instructions in the documentation. (#170)
- NengoDL will now use TensorFlow's eager mode by default. The previous graph-mode behaviour can be restored by calling `tf.compat.v1.disable_eager_execution()`, but we cannot guarantee that that behaviour will be supported in the future. (#163)
- NengoDL will now use TensorFlow's "control flow v2" by default. The previous behaviour can be restored by calling `tf.compat.v1.disable_control_flow_v2()`, but we cannot guarantee that that behaviour will be supported in the future. (#163)
- NengoDL will now default to allowing TensorFlow's "soft placement" logic, meaning that even if you specify an explicit device like `"/gpu:0"`, TensorFlow may not allocate an op to that device if there isn't a compatible implementation available. The previous behaviour can be restored by calling `tf.config.set_soft_device_placement(False)`. (#163)
- Internal NengoDL `OpBuilder` classes now separate the "pre build" stage from `OpBuilder.__init__` (so that the same `OpBuilder` class can be re-used across multiple calls, rather than instantiating a new `OpBuilder` each time). Note that this has no impact on front-end users; it is only relevant to anyone who has implemented a custom build class. The logic that would previously have gone in `OpBuilder.__init__` should now go in `OpBuilder.build_pre`. In addition, the `ops` argument has been removed from `OpBuilder.build_pre`; it will instead be passed to `OpBuilder.__init__` (and will be available in `build_pre` as `self.ops`). Similarly, the `ops` and `config` arguments have been removed from `build_post`, and can instead be accessed through `self.ops`/`self.config`. (#163)
- Minimum TensorFlow version is now 2.2.0. (#163)
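For custom build classes, the migration looks roughly like the following. This is a minimal structural sketch: the stand-in base class exists only to make the snippet self-contained, and `MyOpBuilder` and its attributes are hypothetical — a real implementation would subclass `nengo_dl.builder.OpBuilder`, whose methods take additional arguments:

```python
# Stand-in base class illustrating the new lifecycle; a real custom
# builder would subclass nengo_dl.builder.OpBuilder instead.
class OpBuilder:
    def __init__(self, ops):
        # ops are now passed to __init__ and stored on the instance
        self.ops = ops

    def build_pre(self, signals, config):
        self.config = config


class MyOpBuilder(OpBuilder):
    """Hypothetical custom build class."""

    def build_pre(self, signals, config):
        super().build_pre(signals, config)
        # setup that previously lived in __init__ goes here,
        # reading self.ops rather than an ops argument
        self.n_ops = len(self.ops)

    def build_post(self, signals):
        # the ops/config arguments are gone; read them from self
        return self.n_ops, self.config
```

The key point is that `__init__` is now cheap and reusable, while per-build setup happens in `build_pre`.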
Fixed
- Support Sparse transforms in `Simulator.get_nengo_params`. (#149)
- Fixed bug in `TensorGraph` log message when logging was enabled. (#151)
- Updated the `KerasWrapper` class in the `tensorflow-models` example to fix a compatibility issue in TensorFlow 2.2. (#153)
- Handle Nodes that are not connected to anything else but are probed (this only occurs in Nengo>=3.1.0). (#159)
- More robust support for converting nested Keras models in TensorFlow 2.3. (#161)
- Fixed bug when probing slices of certain probeable attributes (those that directly target a Signal in the model). (#164)
Removed
- Removed `nengo_dl.utils.print_op` (use `tf.print` instead). (#163)
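A sketch of the replacement, assuming TensorFlow 2.x (the function below is illustrative, not part of the NengoDL API):

```python
import tensorflow as tf

@tf.function
def debug_add_one(x):
    # tf.print works in both eager and graph mode, covering the
    # use case of the removed nengo_dl.utils.print_op
    tf.print("input value:", x)
    return x + 1

result = debug_add_one(tf.constant(41))
```

Unlike Python's built-in `print`, `tf.print` executes at graph run time, so it prints the tensor's value on every call rather than once at trace time.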