ma-gym

A collection of multi-agent environments based on OpenAI Gym.


Installation

git clone https://github.com/comp0124/ma-gym.git
cd ma-gym
pip install -e .
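
To quickly verify the install, create one of the environments and check the number of agents it exposes (a minimal sketch; Switch2-v0 contains two agents):

import gym

# If the package is installed correctly, the environment can be created
# through the ma_gym namespace and reports its agent count.
env = gym.make('ma_gym:Switch2-v0')
print(env.n_agents)       # 2
print(env.action_space)   # joint action space (one sub-space per agent)
env.close()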

Reference:

Please use this BibTeX entry if you would like to cite it:

@misc{magym,
      author = {Koul, Anurag},
      title = {ma-gym: Collection of multi-agent environments based on OpenAI gym.},
      year = {2019},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/koulanurag/ma-gym}},
    }

Usage:

import gym

env = gym.make('ma_gym:Switch2-v0')
done_n = [False for _ in range(env.n_agents)]
ep_reward = 0

obs_n = env.reset()
while not all(done_n):
    env.render()
    obs_n, reward_n, done_n, info = env.step(env.action_space.sample())
    ep_reward += sum(reward_n)
env.close()
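
Here, obs_n, reward_n, and done_n are lists with one entry per agent, and the environment expects a list of actions in return. Below is a minimal sketch of choosing each agent's action explicitly, assuming env.action_space behaves as a list of per-agent spaces (as in ma-gym's MultiAgentActionSpace):

import gym

env = gym.make('ma_gym:Switch2-v0')
obs_n = env.reset()                    # one observation per agent
done_n = [False] * env.n_agents

while not all(done_n):
    # Build the joint action as a list: one action per agent.
    action_n = [env.action_space[i].sample() for i in range(env.n_agents)]
    obs_n, reward_n, done_n, info = env.step(action_n)
env.close()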

Please refer to the Wiki for complete usage details.

Environments:

  • Checkers
  • Combat
  • PredatorPrey
  • Pong Duel (two-player pong game)
  • Switch
  • Lumberjacks
Note: OpenAI Gym environments can be accessed in multi-agent form by prefixing their id with "ma_", e.g. ma_CartPole-v0.
This returns an instance of CartPole-v0 wrapped in a "multi-agent wrapper" containing a single agent (see the sketch below).
These environments are helpful during debugging.
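
A minimal sketch of this wrapper, assuming the prefixed id is registered under the same ma_gym namespace as the other environments:

import gym

# CartPole-v0 wrapped as a multi-agent environment with a single agent.
env = gym.make('ma_gym:ma_CartPole-v0')
print(env.n_agents)   # 1

obs_n = env.reset()   # list containing a single observation
obs_n, reward_n, done_n, info = env.step(env.action_space.sample())
env.close()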

Please refer to the Wiki for more details.

Zoo!

Animated GIF demos are available for: Checkers-v0, Combat-v0, Lumberjacks-v0, PongDuel-v0, PredatorPrey5x5-v0, PredatorPrey7x7-v0, Switch2-v0, and Switch4-v0.

Testing:

  • Install: pip install -e ".[test]"
  • Run: pytest

Acknowledgement:

  • This project was initially developed to complement my research internship at SAS (Summer 2019).