Welcome to the code base for the MOOC on Reinforcement Learning! Below you will find an overview of the repository structure, instructions for setting up the environment locally, the Binder link, and guidance on how to access and use the code.
The repository is organized into folders by week. We provide notebooks for Week 3, Week 5, Week 7, and Week 8.
There are two options for working with the provided notebooks: setting up a local environment on your machine or using Binder. Note that Binder is a free hosting service, so training a model will likely be faster on your local machine.
To run the code and notebooks on your local machine, you'll need to set up a Python environment that includes all the required dependencies. This repository includes an environment.yml file that specifies the necessary packages. Follow the steps below to set up the environment:
- Clone the repository:

  ```bash
  git clone https://github.com/Data-Science-in-Mechanical-Engineering/mooc_rl.git
  cd mooc_rl
  ```

- Create a new conda environment:

  ```bash
  conda env create -f environment.yml
  ```

- Activate the environment:

  ```bash
  conda activate MOOC_RL
  ```

- Launch Jupyter Lab:

  ```bash
  jupyter lab
  ```
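Once the environment is active, you can optionally run a quick import check before opening the notebooks. This is a minimal sketch; the package names below are assumptions about what environment.yml installs (they are not taken from this repository), so adjust the list to match the actual file:

```python
# Sanity check for the MOOC_RL environment.
# NOTE: the packages listed here are assumptions about environment.yml,
# not taken from the repository -- edit the list to match the actual file.
import importlib

for pkg in ["numpy", "matplotlib", "gymnasium"]:
    try:
        module = importlib.import_module(pkg)
        version = getattr(module, "__version__", "unknown")
        print(f"{pkg}: OK (version {version})")
    except ImportError:
        print(f"{pkg}: NOT FOUND - check environment.yml")
```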
If you prefer not to install the environment locally, you can easily run the notebooks in a cloud environment using Binder. Simply click the link below to launch the repository in Binder:
This will open the repository in an interactive Jupyter Notebook environment without needing to set up anything on your local machine.
If you find any issues or have suggestions for improvements, feel free to open an issue or submit a pull request. We will not actively manage these, but suggestions may improve future iterations of this course.
Happy learning, and good luck with your Reinforcement Learning journey!
Your MOOC RL Team