sched-rl-gym is an OpenAI Gym environment for job scheduling problems. Currently, it implements the Markov Decision Process defined by DeepRM.
You can use it as any other OpenAI Gym environment, provided the module is registered. Lucky for you, it supports automatic registration upon first import. Therefore, you can get started by importing the environment with:

```python
import schedgym.envs as schedgym
```
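The auto-registration works because the package registers its environments with Gym at import time. Here is a minimal, self-contained sketch of that pattern using hypothetical names (the real package goes through `gym.envs.registration.register` rather than this toy registry):

```python
# Sketch of the import-time registration pattern (hypothetical names;
# not schedgym's actual internals).
REGISTRY = {}

def register(env_id, entry_point):
    """Map an environment id to the class that builds it."""
    REGISTRY[env_id] = entry_point

class FakeDeepRMEnv:
    """Placeholder standing in for the real DeepRM environment."""

# A package's __init__ would run this the first time it is imported:
register('DeepRM-v0', FakeDeepRMEnv)

def make(env_id):
    """Look up and instantiate a registered environment, like gym.make."""
    return REGISTRY[env_id]()
```

After the import, `make('DeepRM-v0')` resolves the id to the registered class, which is why a bare `import schedgym.envs` is enough to make `gym.make('DeepRM-v0')` work.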
As a parallel with the CartPole example in the Gym documentation, the following code implements a random agent:

```python
import gym

import schedgym.envs as schedgym  # noqa: F401 -- registers the environments

env = gym.make('DeepRM-v0', use_raw_state=True)
env.reset()
for _ in range(200):
    env.render()
    observation, reward, done, info = env.step(env.action_space.sample())
env.close()
```
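If sched-rl-gym is not installed yet, the same control flow can be exercised against a stand-in that follows the classic four-tuple `step` protocol. The stub below is purely hypothetical and not part of sched-rl-gym; it only illustrates the `reset`/`step` contract the loop above relies on:

```python
import random

class StubEnv:
    """Hypothetical stand-in exposing the classic Gym step API."""

    def __init__(self, horizon=200):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return 0  # dummy observation

    def step(self, action):
        self.t += 1
        done = self.t >= self.horizon
        # observation, reward, done, info -- same shape as env.step above
        return self.t, 1.0, done, {}

env = StubEnv()
obs = env.reset()
total_reward, done = 0.0, False
while not done:
    obs, reward, done, info = env.step(random.choice([0, 1]))
    total_reward += reward
```

Swapping `StubEnv()` for `gym.make('DeepRM-v0', use_raw_state=True)` gives the real agent loop, with `done` signalling the end of an episode.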
Running this displays the environment's human rendering as the episode progresses.

The main features of sched-rl-gym are:

- OpenAI Gym environment
- Human rendering
- Configurable environment
The easiest/quickest way to install sched-rl-gym is to use pip with the command:

```shell
pip install -e git+https://github.com/renatolfc/sched-rl-gym.git#egg=sched-rl-gym
```
We do recommend you use a virtual environment, so as not to pollute your Python installation with custom packages.
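For example, assuming a Unix-like shell and Python 3, a virtual environment can be created and activated like this (the `.venv` directory name is only a convention):

```shell
# Create an isolated environment in ./.venv (the name is arbitrary)
python3 -m venv .venv
# Activate it; pip installs now stay inside .venv
. .venv/bin/activate
```

With the environment active, the `pip install` command above installs sched-rl-gym into `.venv` instead of the system interpreter.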
If you want to be able to edit the code, then your best bet is to clone this repository with:

```shell
git clone https://github.com/renatolfc/sched-rl-gym.git
```

In this case, you will need to install the dependencies manually. The dependencies are documented in the requirements.txt file. You can install them with:

```shell
pip install -r requirements.txt
```
- Issue tracker: https://github.com/renatolfc/sched-rl-gym/issues
- Source code: https://github.com/renatolfc/sched-rl-gym
If you’re having issues, please let us know. The easiest way is to open an issue on GitHub.
The project is licensed under the MIT license.