
Releases: m-wojnar/reinforced-lib

Reinforced-lib 1.1.4

13 Nov 18:30
  • Add experimental masked MAB agent.
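The core idea behind a masked MAB agent is that, at each step, arms excluded by a boolean mask cannot be selected. A minimal, library-independent sketch of that idea (the helper name and signature are illustrative, not reinforced-lib's API):

```python
def masked_argmax(values, mask):
    """Return the index of the largest value among allowed actions.

    `values` holds the per-arm estimates; `mask[i]` is True when arm i
    may be selected. Hypothetical helper illustrating action masking.
    """
    allowed = [i for i, ok in enumerate(mask) if ok]
    if not allowed:
        raise ValueError("mask disallows every action")
    return max(allowed, key=lambda i: values[i])
```

Even if arm 1 has the highest estimate, masking it out forces the agent to pick the best of the remaining arms.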

Reinforced-lib 1.1.3

17 Oct 12:18
  • Add epsilon decay in e-greedy MAB.
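Epsilon decay gradually shifts an ε-greedy bandit from exploration toward exploitation by shrinking ε after every step. A self-contained sketch of the technique, assuming multiplicative decay with a floor (the class and parameter names are illustrative, not reinforced-lib's API):

```python
import random

class EGreedy:
    """Illustrative e-greedy MAB with multiplicative epsilon decay."""

    def __init__(self, n_actions, eps_start=1.0, eps_decay=0.99, eps_min=0.01):
        self.q = [0.0] * n_actions        # per-arm reward estimates
        self.counts = [0] * n_actions
        self.eps = eps_start
        self.eps_decay = eps_decay
        self.eps_min = eps_min

    def sample(self):
        if random.random() < self.eps:
            action = random.randrange(len(self.q))       # explore
        else:
            action = max(range(len(self.q)), key=self.q.__getitem__)  # exploit
        # decay the exploration rate after every step, down to a floor
        self.eps = max(self.eps * self.eps_decay, self.eps_min)
        return action

    def update(self, action, reward):
        self.counts[action] += 1
        # incremental mean of observed rewards for this arm
        self.q[action] += (reward - self.q[action]) / self.counts[action]
```

With decay, early steps explore broadly while later steps mostly follow the current best estimate.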

Reinforced-lib 1.1.2

10 Aug 12:55
  • Update dependencies.
  • Fix an error when the action space has size one.

Reinforced-lib 1.1.1

13 Apr 17:30

Improvements:

  • Update documentation.
  • Add reference to the SoftwareX paper.

Fixes:

  • Normal Thompson sampling allows the lam parameter to be zero.
  • Bernoulli Thompson sampling is stationary by default.
  • Update the default value of the decay parameter in the ra-sim example.

Reinforced-lib 1.1.0

11 Feb 14:54

Major API changes:

  • Migrate from haiku (deprecated) to flax as the base neural network library.
  • Update agent names to match literature:
    • QLearning (deep Q-learning) -> DQN,
    • DQN (deep double Q-learning) -> DDQN.
  • Move particle filter from agents to utils.
  • New behavior of loggers: all declared loggers now receive values from all sources.

New functionalities:

  • Add Weights & Biases logger.

Other important changes:

  • Fix updates with empty replay buffer.
  • Fix logging of arrays to TensorBoard.
  • Minor improvements in documentation.
  • Rewrite Gymnasium integration example in documentation.
  • Improve the CCOD example to better reflect the original implementation.

Reinforced-lib 1.0.4

19 Dec 15:59

Improvements:

  • Update documentation.
  • Enable the use of 64-bit JAX.

Reinforced-lib 1.0.3

15 Dec 19:16

New functionalities:

  • Add the normal-gamma Thompson sampling agent.
  • Add the log-normal Thompson sampling agent.
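Conceptually, normal-gamma Thompson sampling keeps a Normal-Gamma posterior over each arm's unknown mean and precision, draws one mean per arm from its posterior, and plays the arm with the highest draw. A library-independent sketch under standard Normal-Gamma conventions (function and parameter names are illustrative, not reinforced-lib's API):

```python
import random

def sample_mean(mu, lam, alpha, beta):
    """Draw one mean from a Normal-Gamma(mu, lam, alpha, beta) posterior:
    precision tau ~ Gamma(alpha, rate=beta), then mean ~ Normal(mu, 1/(lam*tau)).
    """
    tau = random.gammavariate(alpha, 1.0 / beta)  # gammavariate takes a scale
    return random.gauss(mu, (lam * tau) ** -0.5)

def thompson_select(posteriors):
    """Pick the arm whose sampled posterior mean is largest.

    `posteriors` is a list of (mu, lam, alpha, beta) tuples, one per arm.
    """
    samples = [sample_mean(*p) for p in posteriors]
    return max(range(len(samples)), key=samples.__getitem__)
```

Because arms are ranked by random posterior draws rather than point estimates, uncertain arms still get explored while clearly better arms dominate.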

Reinforced-lib 1.0.2

11 Dec 23:02

Fix:

  • Make it easier to import the BasicMab extension.

Reinforced-lib 1.0.1

11 Dec 22:48

Important changes:

  • Move to pyproject.toml configuration file.
  • Add basic extension for MABs.
  • Update dependencies.
  • Fix a bug that modified user-supplied values passed to library functions.
  • Fix agents' behavior with multiple optimal actions: agents now draw one of the optimal actions at random instead of always selecting the first one.
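The tie-breaking behavior described above can be illustrated with a small helper that draws uniformly among all argmax actions (an illustrative sketch, not the library's code):

```python
import random

def argmax_random_tie(values):
    """Return an index of the maximum value, chosen uniformly at random
    when several actions are equally optimal."""
    best = max(values)
    candidates = [i for i, v in enumerate(values) if v == best]
    return random.choice(candidates)
```

Always selecting the first maximum biases the agent toward low-index arms; uniform tie-breaking removes that bias.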

Reinforced-lib 1.0.0

23 Jul 11:36

Major API changes:

  • Added support for deep reinforcement learning agents.
  • Relaxed the requirements for implementing custom agents.
  • Major changes in the logging module (e.g., custom logging, synchronization).
  • Removed the ability of the sample method to change state.
  • Introduced an inference-only mode.

New functionalities:

  • Added new deep learning agents: deep Q-learning, deep expected SARSA, DQN, DDPG.
  • Added the Exp3 algorithm.
  • Added the Gymnasium extension.
  • Added the TensorBoard logger.
  • Added an easy export to TensorFlow Lite.
  • Added automatic checkpointing.

Other important changes:

  • Upgraded the library to Python 3.9.
  • Updated and polished the documentation.
  • Added several new examples.
  • Moved Wi-Fi specific classes to examples.
  • Fixed known bugs.