Simple Push#
![Simple Push](../../../_images/mpe_simple_push.gif)
This environment is part of the MPE environments. Please read that page first for general information.
| Import             | `from pettingzoo.mpe import simple_push_v2` |
|--------------------|---------------------------------------------|
| Actions            | Discrete/Continuous                         |
| Parallel API       | Yes                                         |
| Manual Control     | No                                          |
| Agents             | `agents= [adversary_0, agent_0]`            |
| Agents             | 2                                           |
| Action Shape       | (5)                                         |
| Action Values      | Discrete(5)/Box(0.0, 1.0, (5,))             |
| Observation Shape  | (8), (19)                                   |
| Observation Values | (-inf, inf)                                 |
| State Shape        | (27,)                                       |
| State Values       | (-inf, inf)                                 |
This environment has 1 good agent, 1 adversary, and 1 landmark. The good agent is rewarded based on its distance to the landmark. The adversary is rewarded for being close to the landmark and for keeping the good agent far from it (its reward is the difference of the two distances). Thus the adversary must learn to push the good agent away from the landmark.
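This reward structure can be summarized in a few lines. Below is an illustrative sketch using hypothetical positions (the `*_pos` arrays are placeholders, not environment API); the actual computation happens inside the environment and may differ in details such as distance scaling.

```python
import numpy as np

# Hypothetical positions for illustration only; in the real environment
# these come from the internal simulator state.
agent_pos = np.array([0.5, -0.2])
adversary_pos = np.array([-0.1, 0.3])
landmark_pos = np.array([0.0, 0.0])

agent_dist = np.linalg.norm(agent_pos - landmark_pos)
adversary_dist = np.linalg.norm(adversary_pos - landmark_pos)

agent_reward = -agent_dist                      # good agent: closer to the landmark is better
adversary_reward = agent_dist - adversary_dist  # adversary: keep the agent far, stay close itself
```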
Agent observation space: [self_vel, goal_rel_position, goal_landmark_id, all_landmark_rel_positions, landmark_ids, other_agent_rel_positions]
Adversary observation space: [self_vel, all_landmark_rel_positions, other_agent_rel_positions]
Agent action space: [no_action, move_left, move_right, move_down, move_up]
Adversary action space: [no_action, move_left, move_right, move_down, move_up]
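The shapes listed in the table above can be checked directly. The sketch below prints each agent's spaces; it assumes a PettingZoo release that exposes `observation_space(agent)` and `action_space(agent)` as methods (true for recent versions).

```python
from pettingzoo.mpe import simple_push_v2

env = simple_push_v2.env()
env.reset()
for agent in env.agents:
    print(agent, env.observation_space(agent), env.action_space(agent))
# Per the component lists above, the adversary observes fewer features
# (no goal or landmark identity information), so its observation vector
# is shorter than the good agent's.
```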
Arguments#
`simple_push_v2.env(max_cycles=25, continuous_actions=False)`
max_cycles
: number of frames (a step for each agent) until game terminates
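A typical interaction loop with these arguments looks like the following sketch. Setting `continuous_actions=True` switches the action space from `Discrete(5)` to the `Box(0.0, 1.0, (5,))` shown in the table. Note that the unpacking of `env.last()` below assumes a recent PettingZoo release with the termination/truncation API; older releases return a single `done` flag instead.

```python
from pettingzoo.mpe import simple_push_v2

env = simple_push_v2.env(max_cycles=25, continuous_actions=False)
env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    # Random actions for demonstration; substitute a trained policy here.
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)
env.close()
```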