A collection of multi-agent environments based on OpenAI Gym.
```bash
git clone https://github.com/comp0124/ma-gym.git
cd ma-gym
pip install -e .
```
Please use this BibTeX entry if you would like to cite it:
```bibtex
@misc{magym,
  author = {Koul, Anurag},
  title = {ma-gym: Collection of multi-agent environments based on OpenAI gym.},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/koulanurag/ma-gym}},
}
```
```python
import gym

env = gym.make('ma_gym:Switch2-v0')
done_n = [False for _ in range(env.n_agents)]
ep_reward = 0

obs_n = env.reset()
while not all(done_n):
    env.render()
    # Each per-step value is a list with one entry per agent.
    obs_n, reward_n, done_n, info = env.step(env.action_space.sample())
    ep_reward += sum(reward_n)
env.close()
```
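The loop above relies on ma-gym's convention that observations, rewards, and done flags are returned as per-agent lists. To make that contract concrete without installing the package, here is a minimal sketch using a hypothetical `DummyMultiAgentEnv` (not part of ma-gym) that mimics the same interface:

```python
class DummyMultiAgentEnv:
    """Hypothetical stand-in for a ma-gym environment: reset() and step()
    return one entry per agent (lists of observations, rewards, dones)."""

    def __init__(self, n_agents=2, episode_len=5):
        self.n_agents = n_agents
        self._episode_len = episode_len
        self._t = 0

    def reset(self):
        self._t = 0
        return [0.0] * self.n_agents  # one observation per agent

    def step(self, action_n):
        assert len(action_n) == self.n_agents
        self._t += 1
        obs_n = [float(self._t)] * self.n_agents
        reward_n = [1.0] * self.n_agents  # every agent earns 1.0 per step
        done_n = [self._t >= self._episode_len] * self.n_agents
        return obs_n, reward_n, done_n, {}


env = DummyMultiAgentEnv()
done_n = [False] * env.n_agents
ep_reward = 0.0
obs_n = env.reset()
while not all(done_n):
    action_n = [0] * env.n_agents  # placeholder action for each agent
    obs_n, reward_n, done_n, _ = env.step(action_n)
    ep_reward += sum(reward_n)
print(ep_reward)  # 2 agents * 5 steps * 1.0 = 10.0
```

The same driver loop works unchanged against a real ma-gym environment, since only the per-agent list shapes matter.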
Please refer to the Wiki for complete usage details.
- Checkers
- Combat
- PredatorPrey
- Pong Duel (two-player pong game)
- Switch
- Lumberjacks
Note: OpenAI Gym's single-agent environments can be accessed in multi-agent form by adding the prefix "ma_", e.g. ma_CartPole-v0. This returns an instance of CartPole-v0 in a "multi-agent wrapper" containing a single agent. These environments are helpful during debugging.
Please refer to the Wiki for more details.
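The idea behind the "ma_" wrapper can be sketched as follows. This is an illustrative reimplementation, not ma-gym's actual wrapper code, and `SingleToMultiAgent` and `StubEnv` are hypothetical names introduced here so the example runs without Gym installed:

```python
class SingleToMultiAgent:
    """Sketch of the "ma_" idea: wrap a single-agent env so its API
    returns one-element lists, matching the multi-agent convention."""

    n_agents = 1

    def __init__(self, env):
        self._env = env

    def reset(self):
        # Wrap the single observation in a one-element list.
        return [self._env.reset()]

    def step(self, action_n):
        # Unwrap the single agent's action, re-wrap the results.
        obs, reward, done, info = self._env.step(action_n[0])
        return [obs], [reward], [done], info


class StubEnv:
    """Tiny stand-in for a single-agent env like CartPole-v0."""

    def reset(self):
        return 0

    def step(self, action):
        return 1, 0.5, True, {}


env = SingleToMultiAgent(StubEnv())
obs_n = env.reset()                              # [0]
obs_n, reward_n, done_n, _ = env.step([0])       # [1], [0.5], [True]
```

After wrapping, the same multi-agent training loop used for Switch2-v0 above runs against a single-agent environment, which is what makes these wrappers convenient for debugging.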
| Checkers-v0 | Combat-v0 | Lumberjacks-v0 |
|---|---|---|
| PongDuel-v0 | PredatorPrey5x5-v0 | PredatorPrey7x7-v0 |
| Switch2-v0 | Switch4-v0 | |
- Install: `pip install -e ".[test]"`
- Run: `pytest`
- This project was initially developed to complement my research internship at SAS (Summer 2019).