Fix docs build
m-wojnar committed Feb 5, 2024
1 parent 2e744ad commit 6417c55
Showing 3 changed files with 8 additions and 9 deletions.
8 changes: 0 additions & 8 deletions docs/source/agents.rst
@@ -167,11 +167,3 @@ Upper confidence bound (UCB)
 .. autoclass:: UCB
    :show-inheritance:
    :members:
-
-
-Particle filter (Core)
-----------------------
-
-.. automodule:: reinforced_lib.agents.core.particle_filter
-   :show-inheritance:
-   :members:
2 changes: 1 addition & 1 deletion docs/source/custom_agents.rst
@@ -287,7 +287,7 @@ Although the above example is a simple one, it is not hard to extend it to deep
 This can be achieved by leveraging the JAX ecosystem, along with the `flax <https://flax.readthedocs.io/>`_
 library, which provides a convenient way to define neural networks, and `optax <https://optax.readthedocs.io/>`_,
 which provides a set of optimizers. Below, we provide excerpts of the code for the :ref:`deep Q-learning agent
-<Deep Q-Learning>`.
+<Deep Q-Learning (DQN)>`.
 
 The state of the DRL agent often contains parameters and state of the neural network as well as an experience
 replay buffer:
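The one-line change in this hunk repairs a dangling cross-reference: Sphinx derives implicit hyperlink targets from section titles, so a ``:ref:`` target must match the current title text exactly. Once the heading gained the "(DQN)" suffix, the old target stopped resolving and the build failed. A minimal sketch of the convention (the surrounding prose is illustrative, not taken from the repository):

```rst
Deep Q-Learning (DQN)
---------------------

Documentation of the DQN agent goes here.

Elsewhere in the docs, the link text and target are independent:
see the :ref:`deep Q-learning agent <Deep Q-Learning (DQN)>`.
```

An explicit label (``.. _dqn-agent:`` placed above the heading, referenced as ``:ref:`dqn-agent```) avoids this class of breakage entirely, since explicit labels survive title renames.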
7 changes: 7 additions & 0 deletions docs/source/utils.rst
@@ -16,3 +16,10 @@ Experience Replay
 
 .. automodule:: reinforced_lib.utils.experience_replay
    :members:
+
+Particle filter (Core)
+----------------------
+
+.. automodule:: reinforced_lib.utils.particle_filter
+   :show-inheritance:
+   :members:
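A build failure of this kind is usually caused by a stale ``automodule`` path left behind after a refactor: Sphinx cannot import the old dotted path, so the docs build aborts. A quick way to catch dangling paths before running Sphinx is to try importing each documented module. The sketch below uses a standard-library module so it runs anywhere; in practice you would substitute the paths from the ``.rst`` files, e.g. ``reinforced_lib.utils.particle_filter``:

```python
import importlib


def check_module(path: str) -> bool:
    """Return True if the dotted module path can be imported."""
    try:
        importlib.import_module(path)
        return True
    except ModuleNotFoundError:
        return False


# Substitute the dotted paths collected from automodule directives.
print(check_module("json"))         # stdlib module, importable -> True
print(check_module("no.such.mod"))  # dangling path -> False
```

Running such a check in CI alongside the docs build surfaces moved or deleted modules as soon as they break, rather than at the next manual documentation build.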
