Releases: thu-ml/tianshou

v1.1.0

10 Aug 16:50

Release 1.1.0

Highlights

Evaluation Package

This release introduces a new package, evaluation, that integrates best
practices for running experiments (seeding test and train environments) and for
evaluating them using the rliable library. This should be especially useful for
algorithm developers for comparing performances and creating meaningful
visualizations. This functionality is currently in alpha state and will be
further improved in the next releases.
You will need to install tianshou with the extra eval to use it.

The creation of multiple experiments with varying random seeds has been greatly
facilitated. Moreover, the ExpLauncher interface has been introduced and
implemented with several backends to support the execution of multiple
experiments in parallel.

An example of this using the high-level interfaces can be found here; examples
that use the low-level interfaces will follow soon.
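
As a rough orientation, the intended workflow looks approximately as follows.
This is a hedged sketch, not code from the release: the import path and the
exact signatures of build_seeded_collection and ExperimentCollection.run are
assumptions and should be checked against the linked example.

```python
from tianshou.highlevel.experiment import ExperimentBuilder, ExperimentCollection


def run_seeded_experiments(builder: ExperimentBuilder, num_seeds: int = 5):
    """Build and run several experiments that differ only in their random seeds.

    Assumes `builder` is a fully configured algorithm-specific builder
    (environment factory, sampling config, ...), which is elided here.
    """
    collection: ExperimentCollection = builder.build_seeded_collection(num_seeds)
    # Depending on the version, run() may accept an ExpLauncher backend to
    # execute the experiments in parallel instead of sequentially.
    return collection.run()
```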

Improvements in Batch

Apart from that, several important
extensions have been added to internal data structures, most notably to Batch.
Batches now implement __eq__ and can be meaningfully compared. Applying
operations in a nested fashion has been significantly simplified, and checking
for NaNs and dropping them is now possible.

One more notable change is that torch Distribution objects are now sliced when
slicing a batch. Previously, when a Batch with, say, 10 actions and a dist
corresponding to them was sliced to [:3], the dist in the result would still
correspond to all 10 actions. Now, the dist is also "sliced" to be the
distribution of the first 3 actions.
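
A minimal sketch of the two behaviors described above (semantic equality and
distribution slicing); this is illustrative only, and the precise semantics are
documented in the Batch API reference.

```python
import numpy as np
import torch

from tianshou.data import Batch

# Semantic equality via the new __eq__
assert Batch(a=np.zeros(3)) == Batch(a=np.zeros(3))

# Distributions stored in a Batch are now sliced together with the batch
dist = torch.distributions.Categorical(probs=torch.ones(10, 4) / 4)
batch = Batch(act=np.arange(10), dist=dist)
sliced = batch[:3]
assert sliced.dist.probs.shape[0] == 3  # dist now covers only the first 3 actions
```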

A detailed list of changes can be found below.

Changes/Improvements

  • evaluation: New package for repeating the same experiment with multiple
    seeds and aggregating the results. #1074 #1141 #1183
  • data:
    • Batch:
      • Add methods to_dict and to_list_of_dicts. #1063 #1098
      • Add methods to_numpy_ and to_torch_. #1098, #1117
      • Add __eq__ (semantic equality check). #1098
      • keys() deprecated in favor of get_keys() (needed to make iteration
        consistent with naming) #1105.
      • Major: new methods for applying functions to values, to check for NaNs
        and drop them, and to set values. #1181
      • Slicing a batch with a torch distribution now also slices the
        distribution. #1181
    • data.collector:
      • Collector:
        • Introduced BaseCollector as a base class for all collectors.
          #1123
        • Add method close #1063
        • Method reset is now more granular (new flags controlling
          behavior). #1063
      • CollectStats: Add convenience
        constructor with_autogenerated_stats. #1063
  • trainer:
    • Trainers can now control whether collectors should be reset prior to
      training. #1063
  • policy:
    • Introduced attribute in_training_step that is controlled by the trainer.
      #1123
    • The policy is now automatically set to eval mode when collecting and to
      train mode when updating. #1123
    • Extended interface of compute_action to also support array-like inputs
      #1169
  • highlevel:
    • SamplingConfig:
      • Add support for batch_size=None. #1077
      • Add training_seed for explicit seeding of training and test
        environments; the test_seed is inferred from training_seed. #1074
    • experiment:
      • Experiment now has a name attribute, which can be set
        using ExperimentBuilder.with_name and
        which determines the default run name and therefore the persistence
        subdirectory.
        It can still be overridden in Experiment.run(), the new parameter
        name being run_name rather than
        experiment_name (although the latter will still be interpreted
        correctly). #1074 #1131
      • Add class ExperimentCollection for the convenient execution of
        multiple experiment runs #1131
      • The World object, containing all low-level objects needed for experimentation,
        can now be extracted from an Experiment instance. This enables customizing
        the experiment prior to its execution, bridging the low and high-level interfaces. #1187
      • ExperimentBuilder:
        • Add method build_seeded_collection for the sound creation of
          multiple
          experiments with varying random seeds #1131
        • Add method copy to facilitate the creation of multiple
          experiments from a single builder #1131
    • env:
      • Added new VectorEnvType called SUBPROC_SHARED_MEM_AUTO, which is now
        used for Atari and MuJoCo venv creation. #1141
  • utils:
    • logger:
      • Loggers can now restore the logged data into Python by using the
        new restore_logged_data method. #1074
      • Wandb logger extended #1183
    • net.continuous.Critic:
      • Add flag apply_preprocess_net_to_obs_only to allow the
        preprocessing network to be applied to the observations only (without
        the actions concatenated), which is essential for the case where we
        want
        to reuse the actor's preprocessing network #1128
    • torch_utils (new module)
      • Added context managers torch_train_mode
        and policy_within_training_step (see the sketch after this list). #1123
    • print
      • DataclassPPrintMixin now supports outputting a string, not just
        printing the pretty repr. #1141
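
For users who call update outside of the trainers, the new context managers can
be used to reproduce what the trainers now do automatically around an update
step. The following is a hedged sketch (as referenced in the torch_utils item
above); it assumes the context managers take the policy/module as their only
argument.

```python
from tianshou.data import ReplayBuffer
from tianshou.policy import BasePolicy
from tianshou.utils.torch_utils import policy_within_training_step, torch_train_mode


def manual_update(policy: BasePolicy, buffer: ReplayBuffer, sample_size: int = 64):
    """Perform a single update step outside of a trainer (illustrative sketch)."""
    with policy_within_training_step(policy), torch_train_mode(policy):
        # Inside this block, policy.in_training_step is set and the torch
        # modules are in train mode, mirroring what the trainers do before
        # calling update().
        return policy.update(sample_size, buffer)
```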

Fixes

  • highlevel:
    • CriticFactoryReuseActor: Enable the Critic
      flag apply_preprocess_net_to_obs_only for continuous critics,
      fixing the case where we want to reuse an actor's preprocessing network
      for the critic (affects usages
      of the experiment builder method with_critic_factory_use_actor with
      continuous environments) #1128
    • Policy parameter action_scaling value "default" was not correctly
      transformed to a Boolean value for
      algorithms SAC, DDPG, TD3 and REDQ. The value "default" being truthy
      caused action scaling to be enabled
      even for discrete action spaces. #1191
  • atari_network.DQN:
    • Fix constructor input validation #1128
    • Fix output_dim not being set if features_only=True
      and output_dim_added_layer is not None #1128
  • PPOPolicy:
    • Fix max_batchsize not being used in logp_old computation
      inside process_fn #1168
  • Fix Batch.__eq__ to allow comparing Batches with scalar array values #1185

Internal Improvements

  • Collectors rely less on state; the few stateful things are stored explicitly
    instead of through a .data attribute. #1063
  • Introduced a first iteration of a naming convention for vars in Collectors.
    #1063
  • Generally improved readability of Collector code and associated tests (still
    quite some way to go). #1063
  • Improved typing for exploration_noise and within Collector. #1063
  • Better variable names related to model outputs (logits, dist input etc.).
    #1032
  • Improved typing for actors and critics, using Tianshou classes
    like Actor, ActorProb, etc.,
    instead of just nn.Module. #1032
  • Added interfaces for most Actor and Critic classes to enforce the presence
    of forward methods. #1032
  • Simplified PGPolicy forward by unifying the dist_fn interface (see
    associated breaking change). #1032
  • Use .mode of distribution instead of relying on knowledge of the
    distribution type. #1032
  • Exception no longer raised on len of empty Batch. #1084
  • Tests and examples are covered by mypy. #1077
  • NetBase is used more widely, with stricter typing achieved by making it
    generic. #1077
  • Use explicit multiprocessing context for creating Pipe in subproc.py.
    #1102
  • Improved documentation and naming in many places

Breaking Changes

  • data:
    • Collector:
      • Removed .data attribute. #1063
      • Collectors no longer reset the environment on initialization.
        Instead, the user might have to call reset explicitly or
        pass reset_before_collect=True. #1063
      • Removed no_grad argument from collect method (was unused in
        tianshou). #1123
    • Batch:
      • Fixed iter(Batch(...)), which now behaves the same way
        as Batch(...).__iter__().
        Can be considered a bugfix. #1063
      • The methods to_numpy and to_torch are no longer in-place
        (use to_numpy_ or to_torch_ instead). #1098, #1117
      • The method Batch.is_empty has been removed. Instead, the user can
        simply check a Batch for emptiness by using len(), as one would with
        a dict. #1144
      • Stricter cat_: only concatenation of batches with the same structure
        is allowed. #1181
      • to_torch and to_numpy are no longer static methods.
        So Batch.to_numpy(batch) should be replaced by batch.to_numpy().
        #1200
  • utils:
    • logger:
      • BaseLogger.prepare_dict_for_logging is now abstract. #1074
      • Removed deprecated and unused BasicLogger (only affects users who
        subclassed it). #1074
    • utils.net:
      • Recurrent now receives and returns
        a RecurrentStateBatch instead of a dict. #1077
    • Modules with code that was copied from sensAI have been replaced by
      imports from new dependency sensAI-utils:
      • tianshou.utils.logging is replaced with sensai.util.logging
      • tianshou.utils.string is replaced with sensai.util.string
      • tianshou.utils.pickle is replaced with sensai.util.pickle
  • env:
    • All VectorEnvs now return a numpy array of info-dic...

1.0.0 - High level API, Improved Interfaces and Typing

20 Mar 20:41

Release 1.0.0

This release focuses on updating and improving Tianshou's internals (in particular, code quality) while introducing relatively few breaking changes (apart from things like the required Python and dependency versions).

We view it as a significant step in transforming Tianshou into the go-to place both for RL researchers and for RL practitioners working on industry projects.

This is the first release since the appliedAI Institute (through its TransferLab division) decided to further develop Tianshou and provide long-term support.

Breaking Changes

  • Dropped support for Python < 3.11
  • Dropped support for gym; from now on only Gymnasium envs are supported
  • Removed functions like offpolicy_trainer in favor of OffpolicyTrainer(...).run(); this affects all example scripts (see the migration sketch after this list)
  • Several breaking changes related to removing **kwargs from signatures and renaming internal attributes (like critic1 -> critic)
  • Outputs of training methods are now dataclasses instead of dicts
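
The trainer change is the most visible one for existing scripts. A hedged
migration sketch (policy and collectors are assumed to exist already, and most
keyword arguments are elided):

```python
from tianshou.trainer import OffpolicyTrainer

# Before (<= 0.5.x):
#     result = offpolicy_trainer(policy, train_collector, test_collector, ...)
# After (1.0.0):
result = OffpolicyTrainer(
    policy=policy,
    train_collector=train_collector,
    test_collector=test_collector,
    max_epoch=10,
    step_per_epoch=10_000,
    step_per_collect=10,
    episode_per_test=10,
    batch_size=64,
).run()
# `result` is now a stats dataclass rather than a dict (see the last item above).
```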

Functionality Extensions

Major

  • High level interfaces for experiments, demonstrated by the new example scripts with names ending in _hl.py

Minor

  • Method to compute an action directly from an observation via the policy; can be used for unrolling (see the sketch after this list)
  • Support for custom keys in ReplayBuffer
  • Support for CalQL as part of CQL
  • Support for explicit setting of multiprocessing context for SubprocEnvWorker
  • critic2 no longer has to be explicitly constructed and passed if it is supposed to be the same network as critic (formerly critic1)
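
A hedged sketch of the new action-computation method referenced above, used to
unroll a trained policy in a plain Gymnasium loop (the exact signature of
compute_action should be checked in the documentation):

```python
import gymnasium as gym

# `policy` is assumed to be a trained tianshou policy for CartPole.
env = gym.make("CartPole-v1")
obs, _ = env.reset()
done = False
while not done:
    act = policy.compute_action(obs)  # single observation -> single action
    obs, rew, terminated, truncated, _ = env.step(act)
    done = terminated or truncated
```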

Internal Improvements

Build and Docs

  • Completely changed the build pipeline. Tianshou now uses poetry, black, ruff, poethepoet, nbqa and other niceties.
  • Notebook tutorials are now part of the repository (previously they were in a drive). They were fixed and are executed during the build as integration tests, in addition to serving as documentation. Parts of the content have been improved.
  • Documentation is now built with jupyter book. JavaScript code has been slightly improved, JS dependencies are included as part of the repository.
  • Many improvements in docstrings

Typing

  • Adding BatchPrototypes to cover the fields needed and returned by methods relying on batches in a backwards compatible way
  • Removing **kwargs from policies' constructors
  • Overall, much stricter and more correct typing. Removing kwargs and replacing dicts by dataclasses in several places.
  • Making use of Generic to express different kinds of stats that can be returned by learn and update
  • Improved typing in tests and examples, close to passing mypy

General

  • Reduced duplication, improved readability and simplified code in several places
  • Use dist.mode instead of inferring loc or argmax from the dist_fn input

Contributions

The OG creators

  • @Trinkle23897 participated in almost all aspects of the coordination and reviewed most of the merged PRs
  • @nuance1979 participated in several discussions

From appliedAI

The team working on this release of Tianshou consisted of @opcode81 @MischaPanch @maxhuettenrauch @carlocagnetta @bordeauxred

External contributions

  • @BFAnas participated in several discussions and contributed the CalQL implementation, extending the pre-processing logic.
  • @dantp-ai fixed many mypy issues and improved the tests
  • @arnaujc91 improved the logic of computing deterministic actions
  • Many other contributors, among them many new ones, participated in this release. The Tianshou team is very grateful for your contributions!

0.5.0: Gymnasium Support

13 Mar 05:16

Enhancement

  1. Gymnasium Integration (#789, @Markus28)
  2. Implement args/kwargs for init of norm_layers and activation (#788, @janofsun)
  3. Add "act" to preprocess_fn call in collector. (#801, @jamartinh)
  4. Various updates (#803, #826, @Trinkle23897)

Bug fix

  1. Fix a bug in batch._is_batch_set (#825, @zbenmo)
  2. Fix a bug in HERReplayBuffer (#817, @sunkafei)

0.4.11

24 Dec 21:17

Enhancement

  1. Hindsight Experience Replay as a replay buffer (#753, @Juno-T)
  2. Fix Atari PPO example (#780, @nuance1979)
  3. Update experiment details of MuJoCo benchmark (#779, @ChenDRAG)
  4. Tiny change since the tests are more than unit tests (#765, @fzyzcjy)

Bug Fix

  1. Multi-agent: gym->gymnasium; render() update (#769, @WillDudley)
  2. Updated atari wrappers (#781, @Markus28)
  3. Fix info not pass issue in PGPolicy (#787, @Trinkle23897)

0.4.10

17 Oct 05:17

Enhancement

  1. Changes to support Gym 0.26.0 (#748, @Markus28)
  2. Added pre-commit (#752, @Markus28)
  3. Added support for new PettingZoo API (#751, @Markus28)
  4. Fix docs tic-tac-toe dummy vector env (#749, @5cat)

Bug fix

  1. Fix 2 bugs and refactor RunningMeanStd to support dict obs norm (#695, @Trinkle23897)
  2. Do not allow async simulation for test collector (#705, @CWHer)
  3. Fix venv wrapper reset retval error with gym env (#712, @Trinkle23897)

0.4.9

04 Jul 17:10

Bug Fix

  1. Fix save_checkpoint_fn return value to checkpoint_path (#659, @Trinkle23897)
  2. Fix an off-by-one bug in trainer iterator (#659, @Trinkle23897)
  3. Fix a bug in Discrete SAC evaluation; default to deterministic mode (#657, @nuance1979)
  4. Fix a bug in trainer about test reward not logged because self.env_step is not set for offline setting (#660, @nuance1979)
  5. Fix exception with watching pistonball environments (#663, @ycheng517)
  6. Use env.np_random.integers instead of env.np_random.randint in Atari examples (#613, @ycheng517)

API Change

  1. Upgrade gym to >=0.23.1, support seed and return_info arguments for reset (#613, @ycheng517)

New Features

  1. Add BranchDQN for large discrete action spaces (#618, @BFAnas)
  2. Add show_progress option for trainer (#641, @michalgregor)
  3. Added support for clipping to DQNPolicy (#642, @michalgregor)
  4. Implement TD3+BC for offline RL (#660, @nuance1979)
  5. Add multiDiscrete to discrete gym action space wrapper (#664, @BFAnas)

Enhancement

  1. Use envpool in vizdoom example (#634, @Trinkle23897)
  2. Add Atari (discrete) SAC examples (#657, @nuance1979)

0.4.8

05 May 12:05

Bug fix

  1. Fix action scaling bug in SAC (#591, @ChenDRAG)

Enhancement

  1. Add write_flush in two loggers, fix argument passing in WandbLogger (#581, @Trinkle23897)
  2. Update Multi-agent RL docs and upgrade pettingzoo (#595, @ycheng517)
  3. Add learning rate scheduler to BasePolicy (#598, @alexnikulkov)
  4. Add Jupyter notebook tutorials using Google Colaboratory (#599, @ChenDRAG)
  5. Unify utils.network: change action_dim to action_shape (#602, @Squeemos)
  6. Update MuJoCo benchmark's webpage (#606, @ChenDRAG)
  7. Add Atari results (#600, @gogoduan) (#616, @ChenDRAG)
  8. Convert RL Unplugged Atari datasets to tianshou ReplayBuffer (#621, @nuance1979)
  9. Implement REDQ (#623, @Jimenius)
  10. Improve data loading from D4RL and convert RL Unplugged to D4RL format (#624, @nuance1979)
  11. Add vecenv wrappers for obs_norm to support running mujoco experiment with envpool (#628, @Trinkle23897)

0.4.7

21 Mar 20:32

Bug Fix

  1. Add map_action_inverse for fixing the error of storing random action (#568)

API Change

  1. Update WandbLogger implementation and Atari examples; use TensorBoard SummaryWriter as the core with wandb.init(..., sync_tensorboard=True) (#558, #562)
  2. Rename save_fn to save_best_fn to avoid ambiguity (#575)
  3. (Internal) Add tianshou.utils.deprecation for a unified deprecation wrapper. (#575)

New Features

  1. Implement Generative Adversarial Imitation Learning (GAIL), add Mujoco examples (#550)
  2. Add Trainers as generators: OnpolicyTrainer, OffpolicyTrainer, and OfflineTrainer; remove duplicated code and merge into base trainer (#559)

Enhancement

  1. Add imitation baselines for offline RL (#566)

0.4.6.post1

25 Feb 16:08

This release fixes the conda package publishing, supports more gym versions instead of only the newest one, and keeps the internal API compatible. See #536.

0.4.6

25 Feb 02:03

Bug Fix

  1. Fix casts to int by to_torch_as(...) calls in policies when using discrete actions (#521)

API Change

  1. Change venv internal API name of worker: send_action -> send, get_result -> recv (align with envpool) (#517)

New Features

  1. Add Intrinsic Curiosity Module (#503)
  2. Implement CQLPolicy and offline_cql example (#506)
  3. Pettingzoo environment support (#494)
  4. Enable venvs.reset() concurrent execution (#517)

Enhancement

  1. Remove reset_buffer() from reset method (#501)
  2. Add atari ppo example (#523, #529)
  3. Add VizDoom PPO example and results (#533)
  4. Upgrade gym version to >=0.21 (#534)
  5. Switch atari example to use EnvPool by default (#534)

Documentation

  1. Update dqn tutorial and add envpool to docs (#526)