
Stochastic MuZero

PyTorch implementation of Stochastic MuZero for Gym environments, based on MuZero Unplugged. It supports a wide range of action and observation spaces, including both discrete and continuous variants.

Strictly speaking, this implementation should be thought of as Stochastic MuZero "Unplugged": setting the reanalyze_ratio to 0 is necessary to recover Stochastic MuZero as published. The original Stochastic MuZero paper focuses on online reinforcement learning, but since this implementation extends MuZero Unplugged, it also supports offline reinforcement learning.
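As a rough illustration of what that ratio controls (a minimal sketch with hypothetical names, not this repository's actual API): at reanalyze_ratio = 0 every training batch comes from fresh self-play, the online setting; above 0, some batches are drawn from reanalyzed (offline) games.

import random

def pick_buffer(reanalyze_ratio: float) -> str:
    """Hypothetical sketch: decide where the next training batch comes from.

    0.0 -> always fresh self-play games (online RL, Stochastic MuZero as published)
    1.0 -> always reanalyzed games (fully offline, MuZero Unplugged style)
    """
    if random.random() < reanalyze_ratio:
        return "reanalyze_buffer"   # offline: re-labeled past games
    return "replay_buffer"          # online: fresh self-play games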

MuZero -> MuZero Unplugged -> Stochastic MuZero

*An update is scheduled for the release of PyTorch 2.1.


Getting started

Local Installation

PIP dependencies: requirements.txt

git clone https://github.com/DHDev0/Stochastic-muzero.git

cd Stochastic-muzero

pip install -r requirements.txt

If you run into difficulties, refer to the first cell of the Tutorial or use the Dockerfile.

Docker

Build the image (build time: 22 min, memory consumption: 8.75 GB):

docker build -t stochastic_muzero .

(do not forget the ending dot)

Start container:

docker run --cpus 2 --gpus 1 -p 8888:8888 stochastic_muzero
#or
docker run --cpus 2 --gpus 1 --memory 2000M -p 8888:8888 stochastic_muzero
#or
docker run --cpus 2 --gpus 1 --memory 2000M -p 8888:8888 --storage-opt size=15g stochastic_muzero

The docker run command starts a JupyterLab server at https://localhost:8888/lab?token=token (the token is required) with all the necessary dependencies for CPU and GPU (NVIDIA) compute.

Option meaning:
--cpus 2 -> allocate 2 CPU cores
--gpus 1 -> allocate 1 GPU
--storage-opt size=15g -> allocate 15 GB of storage (not working with Windows WSL)
--memory 2000M -> allocate 2 GB of RAM
-p 8888:8888 -> open port 8888 for JupyterLab (the Dockerfile's default port)

Stop the container:

docker stop $(docker ps -q --filter ancestor=stochastic_muzero)

Delete the container:

docker rmi -f stochastic_muzero

Dependencies

Language:

  • Python 3.8 to 3.10 (bounded by the backward compatibility of Ray and PyTorch)

Libraries:

  • torch 1.13.0
  • torchvision 0.14.0
  • ray 2.0.1
  • gymnasium 0.27.0
  • matplotlib >=3.0
  • numpy 1.21.5

More details at: requirements.txt

Usage

Jupyter Notebook

For a practical example, you can use the Tutorial.

CLI

Set your config file (example): https://github.com/DHDev0/Stochastic-muzero/blob/main/config/

First, cd into the project folder:

cd Stochastic-muzero

Construct a human-play dataset through experimentation:

python muzero_cli.py human_buffer config/experiment_450_config.json

Training:

python muzero_cli.py train config/experiment_450_config.json

Training with report:

python muzero_cli.py train report config/experiment_450_config.json

Inference (play a game with a specific model):

python muzero_cli.py play config/experiment_450_config.json

Training and inference:

python muzero_cli.py train play config/experiment_450_config.json

Benchmark a model:

python muzero_cli.py benchmark config/experiment_450_config.json

Training + report + inference + benchmark:

python muzero_cli.py train report benchmark play config/experiment_450_config.json

Features

Core MuZero and MuZero Unplugged features:

  • Works with any Gymnasium environment/game. (any combination of discrete and/or continuous action and observation spaces)
  • MLP network for game-state observations. (multilayer perceptron)
  • LSTM network for game-state observations.
  • Transformer decoder for game-state observations.
  • Residual network for RGB observations via render. (ResNet-v2 + MLP)
  • Residual LSTM network for RGB observations via render. (ResNet-v2 + LSTM)
  • MCTS with 0 simulations (prior only) or any number of simulations.
  • Model weights automatically saved at the best self-play average reward.
  • Prioritized or uniform sampling from the replay buffer.
  • Manages illegal moves with a negative reward.
  • Scales the loss using the importance-sampling ratio.
  • Custom "Loss function" class to apply transformations and losses to labels/predictions.
  • Load your pretrained model by tag number.
  • Single-player mode.
  • Training / inference report. (not live; produced at the end of training)
  • Single/multi GPU or single/multi CPU for inference, training, and self-play.
  • Mixed-precision support for training and inference. (torch_type: bfloat16, float16, float32, float64)
  • PyTorch gradient scaler for mixed-precision training. (see the sketch after this list)
  • Tutorial as a Jupyter notebook.
  • Pretrained weights for CartPole. (weights, report, and config file included)
  • Commented, with links/pages to the paper.
  • Supported platforms: Windows, Linux, macOS.
  • Fixed PyTorch linear-layer initialization. (refer to: https://tinyurl.com/ykrmcnce)
  • Support for Gymnasium 0.27.0.
  • Accommodates any number of players, given player-cycle information.
  • Reanalyze buffer (offline learning) and reanalyze-ratio functionality.
  • Construction of human-play datasets through experimentation (CLI only).
  • Loading of human-play datasets into the demonstration buffer or replay buffer for training.
  • Ability to specify the number of sampled actions MCTS should use.
  • Priority scaling for both neural-network and replay-buffer priorities.
  • Various options for bounding, saving, and deleting games from the reanalyze buffer.
  • reanalyze_fraction_mode, which switches between new games and reanalyzed data either statistically or by exact quantity, following the reanalyze-buffer vs. replay-buffer ratio.
  • A scaling parameter for the value loss.
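A minimal sketch of a mixed-precision training step (generic PyTorch autocast + GradScaler usage; the model, data, and hyperparameters are placeholders, not this repository's actual training loop):

import torch
import torch.nn.functional as F

# Placeholder model and data; the real training loop lives in this repo's trainer.
model = torch.nn.Linear(8, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(32, 8, device="cuda")
y = torch.randn(32, 1, device="cuda")

optimizer.zero_grad()
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = F.mse_loss(model(x), y)   # forward pass runs in float16 where safe
scaler.scale(loss).backward()        # scale the loss to avoid float16 gradient underflow
scaler.step(optimizer)               # unscales gradients, then takes the optimizer step
scaler.update()                      # adjusts the scale factor for the next iteration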

New Stochastic MuZero add-on features:

  • No gradient scaling.
  • Adds models for the afterstate_prediction_function, aftstate_dynamic_function, and encoder_function.
  • Extends the batch with all observations following an initial index.
  • Extends MCTS with chance nodes.
  • Extends the forward pass with afterstate_prediction, aftstate_dynamic, and encoder.
  • Extends the loss function with value_afterstate_loss, distribution_afterstate_loss, and the VQ-VAE commitment cost.
  • [Encoder] The encoder embedding c_e_t is modeled as a categorical variable.
  • [Encoder] Selecting the closest code c_t is equivalent to computing one_hot(arg_max(c_e_t)).
  • [Encoder] Uses the Gumbel-softmax reparameterization trick with zero temperature during the forward pass: Gumbel noise is added to the encoder logits during training and omitted at inference, and at zero temperature the sample collapses to the hard arg-max one-hot. (see the sketch after this list)
  • [Encoder] A straight-through estimator is used in the backward pass so that gradients flow only to the encoder during backpropagation.
  • [Encoder] There is no explicit decoder in the model and no reconstruction loss.
  • [Encoder] The network is trained end-to-end, in a fashion similar to MuZero.
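A minimal sketch of those encoder bullets (the function name and shapes are illustrative, not this repository's actual code):

import torch
import torch.nn.functional as F

def chance_code(c_e_t: torch.Tensor, training: bool) -> torch.Tensor:
    # c_e_t: encoder logits of shape (batch, codebook_size).
    logits = c_e_t
    if training:
        # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1),
        # clamped away from 0 for numerical safety.
        u = torch.rand_like(logits).clamp_min(1e-9)
        logits = logits - torch.log(-torch.log(u))
    # Zero temperature: the Gumbel-softmax collapses to a hard arg max,
    # i.e. c_t = one_hot(arg_max(logits)).
    c_t = F.one_hot(logits.argmax(dim=-1), num_classes=logits.shape[-1]).to(c_e_t.dtype)
    # Straight-through estimator: forward the hard one-hot, but route
    # gradients to the encoder logits (and only to the encoder).
    return c_t + c_e_t - c_e_t.detach()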

TODO:

  • Hyperparameter search. (pseudocode available in self_play.py)
  • Training and deployment on a cloud cluster using Kubeflow, Airflow, or Ray on AWS, GCP, or Azure.

How to make your own custom gym environment?

Refer to the Gymnasium documentation.

You will be able to call your custom Gymnasium environment in MuZero after registering it, for example:
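A minimal custom-environment sketch using the Gymnasium API (the environment name and dynamics are illustrative):

import gymnasium as gym
import numpy as np
from gymnasium import spaces

class MyCustomEnv(gym.Env):
    """Toy environment with random observations; replace with real dynamics."""

    def __init__(self):
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Discrete(2)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        observation = self.observation_space.sample()
        reward, terminated, truncated = 1.0, False, False
        return observation, reward, terminated, truncated, {}

# Register once, then reference "MyCustomEnv-v0" in your muzero config file.
gym.register(id="MyCustomEnv-v0", entry_point=MyCustomEnv)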

Authors

Subjects

Deep reinforcement learning

License

GPL-3.0 license