Repository for the Cart-Pole problem.
- PID (a minimal sketch follows this list)
- Pole-Placement in State-Space
- LQR
- MPC
- Implement RL agent (discrete and continuous actions)
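
As a minimal illustration of the PID item above, the sketch below balances the stock Gym `CartPole-v1` environment by regulating only the pole angle and mapping the continuous PID output onto the env's two discrete actions. This is not the repository's implementation: the gains, the angle-only error signal, and the classic Gym API (`reset()` returning only the observation, `step()` returning four values) are all assumptions.

```python
# Hedged PID sketch for the stock discrete-action CartPole-v1.
# Gains are assumed hand-tuned values, not the repository's controller.
import gym

KP, KI, KD = 10.0, 0.1, 2.0   # assumed gains
DT = 0.02                     # CartPole's default integration timestep

env = gym.make("CartPole-v1")
obs = env.reset()
integral, prev_error = 0.0, 0.0

for _ in range(500):
    theta = obs[2]                          # pole angle (rad), setpoint is 0
    error = -theta
    integral += error * DT
    derivative = (error - prev_error) / DT
    u = KP * error + KI * integral + KD * derivative
    prev_error = error

    action = 0 if u > 0 else 1              # map continuous force command to push-left / push-right
    obs, reward, done, info = env.step(action)
    if done:                                # restart the episode and reset controller state
        obs = env.reset()
        integral, prev_error = 0.0, 0.0

env.close()
```

A full cart-pole controller would normally also regulate the cart position, which is one motivation for the state-space, LQR, and MPC approaches listed above.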
To set up the custom continuous-action environment:

- Create a venv.
- Clone the OpenAI Gym repo into the project directory (`Toy-CartPole/`).
- Add the `cartpole_cont.py` file to the corresponding path `/gym/gym/envs/classic_control/`.
- Add the following registration to `/gym/gym/envs/__init__.py`:

```python
register(
    id="CartPole-cont",
    entry_point="gym.envs.classic_control.cartpole_cont:CartPoleEnv",
    max_episode_steps=500,
    reward_threshold=475.0,
)
```

- Run `pip install -e .` inside `/gym/`.
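
After installation, the environment should be retrievable by its registered id. Below is a minimal smoke test, assuming the classic Gym API (`reset()` returns the observation, `step()` returns four values) and a continuous `Box` action space; adjust if `cartpole_cont.py` defines something different.

```python
# Quick smoke test for the registered continuous-action env.
# Assumes the classic Gym step/reset signatures and a Box action space.
import gym

env = gym.make("CartPole-cont")
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()        # random continuous force
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```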