multiple DQN trained agents for a single environment #1

@blavad
Thanks for the huge repo. Hi, I have a query: I would like to train two agents with DQN in the same environment, but with the agents independent of each other. Is it possible? If so, please help me out.

Comments
It's kind of competitive between agent A and agent B in the same environment.
Hi @indhra007.

import marl
from marl.agent import DQNAgent

env = my_env()  # replace with your own multi-agent environment

# This part may depend on the implementation of the environment
obs_space1 = env.observation_space[0]
act_space1 = env.action_space[0]
obs_space2 = env.observation_space[1]
act_space2 = env.action_space[1]

# Create two independent DQN agents
agent1 = DQNAgent("MlpNet", obs_space1, act_space1)
print("#> Agent 1 :\n", agent1, "\n")
agent2 = DQNAgent("MlpNet", obs_space2, act_space2)
print("#> Agent 2 :\n", agent2, "\n")

# Group the agents into a trainable multi-agent system
mas = marl.MARL(agents_list=[agent1, agent2])

# Train the agents for 100 000 timesteps
mas.learn(env, nb_timesteps=100000)

# Test the agents for 10 episodes
mas.test(env, nb_episodes=10)

I hope this will help you. I continue to implement some modules for this API, and I hope I will have time.
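For reference, since the env.observation_space[0] indexing above depends on how the environment is written, here is a minimal sketch of what a compatible two-agent environment could look like. Everything in it (the class name my_env, the list-of-spaces layout, the joint reset/step conventions) is an assumption for illustration, not part of the marl API:

import gym
import numpy as np

class my_env:
    # Hypothetical two-agent environment: one observation space and one
    # action space per agent, indexable as in the snippet above.
    def __init__(self):
        self.observation_space = [
            gym.spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32),
            gym.spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32),
        ]
        self.action_space = [gym.spaces.Discrete(2), gym.spaces.Discrete(2)]

    def reset(self):
        # Return one observation per agent
        return [space.sample() for space in self.observation_space]

    def step(self, actions):
        # `actions` holds one action per agent; return per-agent
        # observations and rewards plus a shared done flag
        obs = [space.sample() for space in self.observation_space]
        rewards = [0.0, 0.0]
        done = False
        return obs, rewards, done, {}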
@blavad Thanks for the quick reply. If possible, can you share an environment which has multiple agents, with some documentation?
@indhra007 Sorry for the late answer. In order to avoid problems with importing packages when using a notebook, go to the marl directory before installing it. Alternatively, if you are using the command line, something like the following should work:

git clone https://github.com/blavad/marl.git
cd marl
pip install -e .
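As a quick sanity check (my suggestion, not from the thread), you can then confirm the install works from outside the source directory:

python -c "import marl; from marl.agent import DQNAgent; print('marl imported OK')"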
@blavad Is there any multi-agent environment other than soccer?
@indhra007 For the moment I cannot share another well-documented environment. I am currently working with another environment (for the game Hanabi), but it is not online yet. I will let you know as soon as I make a repo with some multi-agent environments.
Ok |