Replies: 2 comments
-
Hi @berttggg. Indeed, those algorithms are designed for agents that are homogeneous (have the same spaces) and share parameters.
You can share the same model (e.g. the critic) across all agents by simply creating one instance and passing it to all the agents, as follows:

```python
# create a single critic instance (spaces taken from the first agent,
# since all agents are homogeneous)
value = Value(env.observation_space(env.possible_agents[0]),
              env.action_space(env.possible_agents[0]),
              env.device)

models = {}
for agent_name in env.possible_agents:
    models[agent_name] = {}
    models[agent_name]["policy"] = ...  # per-agent policy model
    models[agent_name]["value"] = value  # the shared critic instance
```
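As a minimal illustration of why this works (using plain PyTorch rather than the skrl `Value` class, with a hypothetical tiny linear critic standing in for the real model): storing the same module instance under every agent key means all agents read and update the very same parameters.

```python
import torch
import torch.nn as nn

# hypothetical stand-in for the critic model: a tiny linear network
shared_value = nn.Linear(4, 1)

agents = ["agent_0", "agent_1"]
models = {name: {"value": shared_value} for name in agents}

# both dict entries reference the exact same module object
assert models["agent_0"]["value"] is models["agent_1"]["value"]

# an optimizer step driven by one agent's critic output...
opt = torch.optim.SGD(shared_value.parameters(), lr=0.1)
loss = models["agent_0"]["value"](torch.ones(1, 4)).sum()
opt.zero_grad()
loss.backward()
opt.step()

# ...is immediately visible through the other agent's entry,
# because the weight tensors are shared, not copied
assert models["agent_1"]["value"].weight is shared_value.weight
```

This is ordinary Python object aliasing: no skrl-specific mechanism is involved, so the same pattern applies to sharing the policy as well.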
-
@Toni-SM Hi, I actually tried your method, but it did not work well. Without parameter sharing, all the robots can perform the task. I am wondering whether this is the proper way to do parameter sharing.
-
Hi.
Thank you so much for your effort on SKRL. I really appreciate it.
Regarding IPPO and MAPPO, if I am not mistaken, both should use parameter sharing.
But in SKRL, each agent has its own actor and critic network; may I know why?
Is there any concern about using parameter sharing?
And if I want to use parameter sharing, e.g. the parameters of the critic shared across all agents, can I use this API?
Thank you.