
Releases: sony/nnabla-rl

Version 0.16.0 Release

Version 0.15.0 Release

29 May 02:52

special notes

We now support Gymnasium environments in this version of nnabla-rl. You can use Gymnasium environments by importing Gymnasium2GymWrapper from nnabla_rl.environments.wrappers.gymnasium. With this wrapper, all nnabla-rl algorithms should be compatible.
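
For example, a minimal sketch of wrapping a Gymnasium environment looks like the following; the environment id ("CartPole-v1") and passing the environment instance directly to the wrapper are assumptions for illustration, not part of this release note.

import gymnasium
from nnabla_rl.environments.wrappers.gymnasium import Gymnasium2GymWrapper

# Wrap a Gymnasium environment so that it exposes the gym-style interface
# expected by nnabla-rl algorithms.
env = Gymnasium2GymWrapper(gymnasium.make("CartPole-v1"))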

Additionally, we would like to inform you that we have discontinued support for Python 3.7 and have introduced support for Python 3.10, in line with the changes in supported Python versions by nnabla, the deep learning framework we use.

Release note categories: support, bugfix, algorithm, utility.

Install the latest nnablaRL by:

pip install nnabla-rl

Version 0.14.0 Release

07 Nov 01:20

special notes

This version does NOT support openai gym v0.26.0 and greater.
We are going to support openai gym v0.26.0 and greater in the next release of nnablaRL. From the next release, nnablaRL will stop officially supporting openai gym versions below v0.26.0.

Release note categories: bugfix, algorithm, utility, docs, samples.

Install the latest nnablaRL by:

pip install nnabla-rl

Version 0.13.0 Release

30 Mar 09:09

special notes

  • This version does NOT support openai gym v0.26.0 and greater.
  • We are going to support openai gym v0.26.0 and greater in the next release of nnablaRL. From the next release, nnablaRL will stop officially supporting openai gym versions below v0.26.0.

Release note categories: bugfix, algorithm, utility, docs.

Install the latest nnablaRL by:

pip install nnabla-rl

Version 0.12.0 Release

07 Oct 07:04

special notes

  • This version does NOT support openai gym v0.26.0 and greater.
  • We are going to support openai gym v0.26.0 and greater in the next release of nnablaRL. From the next release, nnablaRL will stop officially supporting openai gym versions below v0.26.0.
  • Only Python 3.7 or greater is supported.
    • Python 3.6 is no longer supported as of this release.

Release note categories: bugfix, algorithm, distributions, utility, docs, build.

Install the latest nnablaRL by:

pip install nnabla-rl

Version 0.11.0 Release

Version 0.10.0 Release

Version 0.9.0 Release

14 Jun 02:39

We are happy to announce the release of nnablaRL, a deep reinforcement learning (RL) library built on top of nnabla.
Reinforcement learning is one of the cutting-edge machine learning technologies that achieve superhuman performance in fields such as gaming and robotics.
We hope that this new library, nnablaRL, helps both RL experts and non-RL experts use reinforcement learning algorithms easily within our nnabla ecosystem.

The features of nnablaRL are the following.

Friendly API

nnablaRL has friendly Python APIs which enable you to start training with only 3 lines of Python code.

import nnabla_rl
import nnabla_rl.algorithms as A
from nnabla_rl.utils.reproductions import build_atari_env

env = build_atari_env("BreakoutNoFrameskip-v4") # 1
dqn = A.DQN(env)  # 2
dqn.train(env)  # 3

You can also customize the algorithm's hyperparameters easily. For example, you can change the batch size of the training data as follows.

import nnabla_rl
import nnabla_rl.algorithms as A
from nnabla_rl.utils.reproductions import build_atari_env

env = build_atari_env("BreakoutNoFrameskip-v4")
config = A.DQNConfig(batch_size=100)
dqn = A.DQN(env, config=config)
dqn.train(env)

In addition to algorithm hyperparameters, you can also flexibly change training components such as neural network models and model solvers. For details, see the sample codes and API documents.
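
As a rough sketch, swapping the solver could look like the following. This assumes the builder-based interface used in the sample codes (a SolverBuilder subclass passed to DQN via the q_solver_builder argument); the exact signatures may differ between versions, so check the API documents.

import nnabla.solvers as S
import nnabla_rl.algorithms as A
from nnabla_rl.builders import SolverBuilder
from nnabla_rl.utils.reproductions import build_atari_env


class AdamSolverBuilder(SolverBuilder):
    # Assumed builder signature: build_solver receives the environment info
    # and the algorithm config, and returns an nnabla solver.
    def build_solver(self, env_info, algorithm_config, **kwargs):
        return S.Adam(alpha=algorithm_config.learning_rate)


env = build_atari_env("BreakoutNoFrameskip-v4")
config = A.DQNConfig(learning_rate=1e-4)
dqn = A.DQN(env, config=config, q_solver_builder=AdamSolverBuilder())
dqn.train(env)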

Many builtin algorithms

Most famous/SoTA deep reinforcement learning algorithms, such as DQN, SAC, BCQ, and GAIL, are already implemented in nnablaRL. The implemented algorithms are carefully tested and evaluated. You can easily start training your agent using these verified implementations.
Please check the sample codes and documents for detailed usage of each algorithm.
You can find the list of implemented algorithms here.
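
Starting training with a different builtin algorithm follows the same pattern. The sketch below assumes SAC on a standard continuous-control gym environment; the environment id is an example and depends on your gym version.

import gym
import nnabla_rl.algorithms as A

# Pendulum is a simple continuous-control task
# (use "Pendulum-v0" on older gym versions).
env = gym.make("Pendulum-v1")
sac = A.SAC(env)  # Soft Actor-Critic, one of the builtin algorithms
sac.train(env)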

Seamless switching of online and offline training

In reinforcement learning, there are two main training procedures, online and offline, to train the agent. Online training is a training procedure that executes both data collection and network updates alternately. Conversely, offline training is a training procedure that updates the network using only existing data. With nnablaRL, you can switch these two training procedures seamlessly. For example, as shown below, you can easily train a robot's controller online using a simulated environment and finetune it offline with a real robot dataset.

import nnabla_rl
import nnabla_rl.algorithms as A


simulator = get_simulator() # This is just an example. Assuming that a simulator exists
dqn = A.DQN(simulator)
dqn.train_online(simulator)

real_data = get_real_data() # This is also an example. Assuming that you have real robot data
dqn.train_offline(real_data)

Getting started

You can find both notebook-style interactive demos and raw Python scripts as sample code to get started. If you are unfamiliar with reinforcement learning, we recommend trying the notebooks as a starting point. You can immediately launch and start training through Google Colaboratory! Check the list of notebooks here.

Development of nnablaRL has just started. We will continue adding new reinforcement learning algorithms and SoTA techniques to nnablaRL. Feedback, feature requests, and contributions are welcome! Check the contribution guide for details.