Change nengo.github.io links to www.nengo.ai
drasmuss committed Jul 11, 2017
1 parent 8c31d55 commit c0c36dc
Showing 7 changed files with 15 additions and 12 deletions.
10 changes: 5 additions & 5 deletions CHANGES.rst
@@ -25,10 +25,10 @@ Release History

- Added ``nengo_dl.tensor_layer`` to help with the construction of
layer-style TensorNodes (see the `TensorNode documentation
- <https://nengo.github.io/nengo_dl/tensor_node.html>`_)
+ <http://www.nengo.ai/nengo_dl/tensor_node.html>`_)
- Added an example demonstrating `how to train a neural network
that can run in spiking neurons
- <https://nengo.github.io/nengo_dl/examples/spiking_mnist.html>`_
+ <http://www.nengo.ai/nengo_dl/examples/spiking_mnist.html>`_
- Added some distributions for weight initialization to ``nengo_dl.dists``
- Added ``sim.train(..., profile=True)`` option to collect profiling information
during training
@@ -50,7 +50,7 @@ Release History
the more general ``nengo_dl.configure_settings(trainable=x)``. This has
resulted in some small changes to how trainability is controlled within
subnetworks; see the `updated documentation
- <https://nengo.github.io/nengo_dl/training.html#choosing-which-elements-to-optimize>`_
+ <http://www.nengo.ai/nengo_dl/training.html#choosing-which-elements-to-optimize>`_
for details.
- Calling ``Simulator.train``/``Simulator.loss`` no longer resets the internal
state of the simulation (so they can be safely intermixed with calls to
@@ -87,10 +87,10 @@ Release History

- Added ability to manually specify which parts of a model are trainable
(see the `sim.train documentation
- <https://nengo.github.io/nengo_dl/training.html>`_)
+ <http://www.nengo.ai/nengo_dl/training.html>`_)
- Added some code examples (see the ``docs/examples`` directory, or the
`pre-built examples in the documentation
- <https://nengo.github.io/nengo_dl/examples.html>`_)
+ <http://www.nengo.ai/nengo_dl/examples.html>`_)
- Added the SoftLIFRate neuron type for training LIF networks (based on
`this paper <https://arxiv.org/abs/1510.08829>`_)

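As a quick illustration of the `sim.train(..., profile=True)` option mentioned in this changelog, here is a minimal sketch; the network, data shapes, and optimizer settings are placeholders, not taken from this commit:

```python
import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

with nengo.Network() as net:
    inp = nengo.Node([0])
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.RectifiedLinear())
    nengo.Connection(inp, ens)
    probe = nengo.Probe(ens)

# toy data: 1024 examples, 1 timestep, 1 dimension
inputs = {inp: np.random.uniform(-1, 1, size=(1024, 1, 1))}
targets = {probe: inputs[inp] ** 2}

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    # profile=True collects TensorFlow profiling data during the training run
    sim.train(inputs, targets, tf.train.AdamOptimizer(0.01),
              n_epochs=2, profile=True)
```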
4 changes: 2 additions & 2 deletions README.rst
@@ -57,10 +57,10 @@ adds a number of unique features, such as:
convolutional neural networks) directly into a Nengo model

More details can be found in the `NengoDL documentation
- <https://nengo.github.io/nengo_dl/>`_.
+ <http://www.nengo.ai/nengo_dl/>`_.

Installation
============

Installation instructions can be found `here
- <https://nengo.github.io/nengo_dl/installation.html>`_.
+ <http://www.nengo.ai/nengo_dl/installation.html>`_.
2 changes: 1 addition & 1 deletion docs/examples/nef_init.ipynb
@@ -179,7 +179,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use the `sim.loss` function to check the initial error for our network on this data. We'll use mean-squared-error (MSE) as our error measure (see [the documentation](https://nengo.github.io/nengo_dl/training.html#objective) for more detail on specifying different error functions). Note that we'll also re-build the model with `minibatch_size=32` (so that we can process the 1024 inputs in chunks of 32 rather than one at a time)."
"We can use the `sim.loss` function to check the initial error for our network on this data. We'll use mean-squared-error (MSE) as our error measure (see [the documentation](http://www.nengo.ai/nengo_dl/training.html#objective) for more detail on specifying different error functions). Note that we'll also re-build the model with `minibatch_size=32` (so that we can process the 1024 inputs in chunks of 32 rather than one at a time)."
]
},
{
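The `sim.loss` call that cell describes might look roughly like this (a sketch; the variable names and data shapes are assumptions, not the notebook's actual code):

```python
import nengo_dl

# assumes `net` is a built Nengo network containing input node `inp` and
# probe `probe`, with `inputs` shaped (1024, n_steps, input_dims) and
# `targets` shaped (1024, n_steps, output_dims)
with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    # "mse" selects mean-squared-error as the error measure
    error = sim.loss({inp: inputs}, {probe: targets}, "mse")
    print("initial error:", error)
```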
2 changes: 1 addition & 1 deletion docs/examples/pretrained_model.ipynb
@@ -64,7 +64,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use a [TensorNode](https://nengo.github.io/nengo_dl/tensor_node.html) to insert our TensorFlow code into Nengo. `nengo_dl.TensorNode` works very similarly to `nengo.Node`, except instead of using the node to insert Python code into our model we will use it to insert TensorFlow code. \n",
"We will use a [TensorNode](http://www.nengo.ai/nengo_dl/tensor_node.html) to insert our TensorFlow code into Nengo. `nengo_dl.TensorNode` works very similarly to `nengo.Node`, except instead of using the node to insert Python code into our model we will use it to insert TensorFlow code. \n",
"\n",
"The first thing we need to do is define our TensorNode output. This should be a function that accepts the current simulation time (and, optionally, a batch of vectors) as input, and produces a batch of vectors as output. All of these variables will be represented as `tf.Tensor` objects, and the internal operations of the TensorNode will be implemented with TensorFlow operations. For example, we could use a TensorNode to output a `sin` function:"
]
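The `sin` TensorNode that cell leads into could be sketched like this (based on the API described above; the actual example cell is not shown in this diff):

```python
import nengo
import nengo_dl
import tensorflow as tf

with nengo.Network() as net:
    # the output function receives the simulation time `t` as a tf.Tensor
    # and must return a batch of vectors; here, one 1-d output
    def sin_func(t):
        return tf.reshape(tf.sin(t), (1, 1))

    node = nengo_dl.TensorNode(sin_func)
    probe = nengo.Probe(node)
```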
2 changes: 1 addition & 1 deletion docs/examples/spiking_mnist.ipynb
@@ -97,7 +97,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use [TensorNodes](https://nengo.github.io/nengo_dl/tensor_node.html) to construct the network, as they allow us to easily include features such as convolutional connections. To make things even easier, we'll use `nengo_dl.tensor_layer`. This is a utility function for constructing `TensorNodes` that mimics the layer-based syntax of many deep learning packages (e.g. [`tf.layers`](https://www.tensorflow.org/api_docs/python/tf/layers)). The full documentation for this function can be found [here](https://nengo.github.io/nengo_dl/tensor_node.html). \n",
"We will use [TensorNodes](http://www.nengo.ai/nengo_dl/tensor_node.html) to construct the network, as they allow us to easily include features such as convolutional connections. To make things even easier, we'll use `nengo_dl.tensor_layer`. This is a utility function for constructing `TensorNodes` that mimics the layer-based syntax of many deep learning packages (e.g. [`tf.layers`](https://www.tensorflow.org/api_docs/python/tf/layers)). The full documentation for this function can be found [here](http://www.nengo.ai/nengo_dl/tensor_node.html). \n",
"\n",
"`tensor_layer` is used to build a sequence of layers, where each layer takes the output of the previous layer and applies some transformation to it. So when we build a `tensor_layer` we pass it the input to the layer, the transformation we want to apply (expressed as a function that accepts a `tf.Tensor` as input and produces a `tf.Tensor` as output), and any arguments to that transformation function. `tensor_layer` also has optional `transform` and `synapse` parameters that set those respective values on the Connection from the previous layer to the one being constructed.\n",
"\n",
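A condensed sketch of the `tensor_layer` pattern that cell describes; the layer sizes and parameters are illustrative, not the notebook's actual values:

```python
import nengo
import nengo_dl
import tensorflow as tf

with nengo.Network() as net:
    # flattened 28x28 image input
    inp = nengo.Node([0] * 28 * 28)

    # each tensor_layer receives the previous layer's output, a
    # transformation (here tf.layers functions), and that function's arguments
    x = nengo_dl.tensor_layer(inp, tf.layers.conv2d, shape_in=(28, 28, 1),
                              filters=32, kernel_size=3)

    # passing a neuron type applies that nonlinearity as the "layer"; the
    # optional synapse is set on the Connection from the previous layer
    x = nengo_dl.tensor_layer(x, nengo.LIF(), synapse=0.01)
    x = nengo_dl.tensor_layer(x, tf.layers.dense, units=10)
```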
3 changes: 3 additions & 0 deletions nengo_dl/tensor_graph.py
@@ -359,6 +359,9 @@ def loop_body(step, stop, loop_i, probe_arrays, base_vars):
tuple(x._ref() if isinstance(x, tf.Variable) else x
for x in self.base_vars))

+ # TODO: add option to disable backprop through loop, for when users
+ # want to train a network running over time, but optimize on a
+ # timestep-by-timestep basis
loop_vars = tf.while_loop(
loop_condition, loop_body, loop_vars=loop_vars,
parallel_iterations=1, back_prop=True)
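For context on the TODO added above, a standalone TensorFlow 1.x sketch of the `back_prop` flag on `tf.while_loop` (a toy loop, not nengo_dl's actual simulation graph):

```python
import tensorflow as tf

i0 = tf.constant(0)
x0 = tf.constant(1.0)

# with back_prop=True (as in the code above), gradients flow back through
# every loop iteration; back_prop=False would sever that path, which is
# what optimizing on a timestep-by-timestep basis would require
i_final, x_final = tf.while_loop(
    lambda i, x: i < 5,
    lambda i, x: (i + 1, x * 2.0),
    loop_vars=(i0, x0),
    parallel_iterations=1, back_prop=True)

grads = tf.gradients(x_final, [x0])
```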
4 changes: 2 additions & 2 deletions nengo_dl/utils.py
@@ -353,11 +353,11 @@ def configure_settings(**kwargs):
or True/False will override the default for all objects. In either
case trainability can be further configured on a per-object basis (e.g.
``net.config[my_ensemble].trainable = True``). See `the documentation
- <https://nengo.github.io/nengo_dl/training.html#choosing-which-elements-to-optimize>`_
+ <http://www.nengo.ai/nengo_dl/training.html#choosing-which-elements-to-optimize>`_
for more details.
planner : graph planning algorithm
Pass one of the `graph planners
- <https://nengo.github.io/nengo_dl/graph_optimizer.html>`_ to change the
+ <http://www.nengo.ai/nengo_dl/graph_optimizer.html>`_ to change the
default planner.
"""

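Putting the `trainable` setting documented above to use might look like this sketch (the network and object names are placeholders):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    # default every object in this network to non-trainable
    nengo_dl.configure_settings(trainable=False)

    a = nengo.Ensemble(50, 1)
    b = nengo.Ensemble(50, 1)
    conn = nengo.Connection(a, b)

    # then re-enable training for one specific object
    net.config[conn].trainable = True
```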
