- To develop a Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm and apply it to two multi-agent environments: a Vehicle Scheduling environment and Simple Adversary, an OpenAI multi-agent particle environment.
- Two cars in a 4x4 Grid-world environment
- 1st car – Goal: reach the top-right corner of the grid
- 2nd car – Goal: reach the top-left corner of the grid
- State space: 16 states: {s0, s1, s2,...s15}
- Action space: {0: down, 1: up, 2: right, 3: left, 4: no move}
- Reward structure (see the sketch after this list):
- Move towards the target: +1
- Move away from the target: -3
- Stay in the same position: -5
- Reach the target: +100
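A minimal Python sketch of how this per-step reward rule could be evaluated, assuming Manhattan distance to the goal; all function and variable names are hypothetical, not the original implementation:

```python
# Illustrative sketch of the per-step reward structure described above.
# Grid cells are (row, col) on the 4x4 grid; all names are hypothetical.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def step_reward(prev_pos, new_pos, goal):
    """Reward for one car after it takes an action."""
    if new_pos == goal:
        return 100      # reached the target
    if new_pos == prev_pos:
        return -5       # stayed in the same position
    if manhattan(new_pos, goal) < manhattan(prev_pos, goal):
        return 1        # moved towards the target
    return -3           # moved away from the target

# Example: car 1 (goal = top-right cell (0, 3)) moves right from (2, 1) to (2, 2)
print(step_reward((2, 1), (2, 2), (0, 3)))  # -> 1
```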
- 3 agents – 1 adversary and 2 good agents (Physical deception)
- Environment – 2 landmarks (Green – target landmark, Black – dummy landmark)
- Rewards (see the sketch after this list):
- For the good agents:
- Positive reward – grows as the closest good agent gets nearer to the target landmark
- Negative reward (penalty) – grows as the adversary gets nearer to the target landmark
- For the adversary:
- Positive reward – grows as the adversary gets nearer to the target landmark
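These reward signals can be written compactly as signed distances; a rough sketch assuming Euclidean distances and hypothetical names (the actual particle-environment code may differ, e.g., by using squared distances):

```python
import numpy as np

# Illustrative sketch of the Simple Adversary reward shaping described above.
# Positions are 2-D numpy arrays; all names are hypothetical.

def dist(a, b):
    return float(np.linalg.norm(a - b))

def good_agent_reward(good_positions, adversary_pos, target_pos):
    # Positive the closer the nearest good agent is to the target,
    # negative the closer the adversary is to the target.
    closest = min(dist(p, target_pos) for p in good_positions)
    return -closest + dist(adversary_pos, target_pos)

def adversary_reward(adversary_pos, target_pos):
    # Rewarded for getting close to the target landmark.
    return -dist(adversary_pos, target_pos)
```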
- Implemented Q-learning and MADDPG on both the Vehicle Scheduling and Simple Adversary environments (a minimal tabular Q-learning update is sketched below)
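For the tabular baseline, a minimal sketch of the Q-learning update on the 16-state, 5-action gridworld; the learning rate and discount values are illustrative assumptions:

```python
import numpy as np

# One Q-table per car: 16 states x 5 actions, as described above.
n_states, n_actions = 16, 5
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95  # illustrative hyperparameters

def q_update(s, a, r, s_next, done):
    """Standard Q-learning bootstrap update."""
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
```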
- Every agent has:
- Actor Network:
- Inputs: States (the agent's own observation)
- Outputs: Action probabilities
- Critic Network (centralized):
- Inputs: States and actions of all agents
- Outputs: Q-values
- Target Networks:
- To avoid chasing moving targets (i.e., target weights change only slowly), target networks are used (see the sketch after this list)
- Target Actor Network (updated via soft updates)
- Target Critic Network (updated via soft updates)
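A compact PyTorch sketch of this per-agent setup, including the soft update θ′ ← τθ + (1 − τ)θ′; the layer sizes, input dimensions, and τ are illustrative assumptions, not the original hyperparameters:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps the agent's own observation to action probabilities."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Softmax(dim=-1),
        )

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Centralized critic: joint states and actions of all agents -> Q-value."""
    def __init__(self, joint_obs_dim, joint_act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(joint_obs_dim + joint_act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

def soft_update(target_net, online_net, tau=0.01):
    """Polyak-average the online weights into the target network."""
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.data.copy_(tau * o.data + (1.0 - tau) * t.data)

# Per-agent setup: online and target copies of both networks (2 agents assumed here)
actor, critic = Actor(obs_dim=4, act_dim=5), Critic(joint_obs_dim=8, joint_act_dim=10)
target_actor, target_critic = Actor(4, 5), Critic(8, 10)
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())

soft_update(target_actor, actor)    # called after every learning step
soft_update(target_critic, critic)
```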
- I developed an improved version of MADDPG, in which an ε-greedy exploration step is applied even after noise has been added to the actions chosen from the deterministic policy (a minimal sketch follows below).
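A minimal sketch of that modified action selection, reusing the `Actor` from the sketch above; the noise scale and ε value are illustrative assumptions:

```python
import numpy as np
import torch

def select_action(actor, obs, n_actions, epsilon, noise_scale=0.1):
    """Epsilon-greedy on top of exploration noise added to the policy output."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)            # epsilon-greedy: random action
    probs = actor(torch.as_tensor(obs, dtype=torch.float32)).detach().numpy()
    noisy = probs + np.random.normal(0.0, noise_scale, size=probs.shape)  # added noise
    return int(np.argmax(noisy))                       # greedy w.r.t. the noisy output
```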
- Q-learning did not work well for the Vehicle Scheduling environment.
- The MADDPG algorithm performed better than the Q-learning algorithm.
- Proper attention should be given while implementing the MADDPG algorithm, since the Critic network may over-estimate the Q-values.
- MADDPG works well in an environment with continuous state and action spaces (i.e., Simple Adversary).