Simple Push#

[Animated preview of the Simple Push environment: mpe_simple_push.gif]

This environment is part of the MPE environments. Please read that page first for general information.

Import: from pettingzoo.mpe import simple_push_v3
Actions: Discrete/Continuous
Parallel API: Yes
Manual Control: No
Agents: agents = [adversary_0, agent_0]
Agents: 2
Action Shape: (5)
Action Values: Discrete(5) / Box(0.0, 1.0, (5,))
Observation Shape: (8), (19)
Observation Values: (-inf, inf)
State Shape: (27,)
State Values: (-inf, inf)

This environment has 1 good agent, 1 adversary, and 1 landmark. The good agent is rewarded based on the distance to the landmark. The adversary is rewarded if it is close to the landmark, and if the agent is far from the landmark (the difference of the distances). Thus the adversary must learn to push the good agent away from the landmark.
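
The reward structure can be sketched as follows; this is a minimal illustration written from the description above, not the library's exact implementation, and it assumes positions are 2D NumPy arrays:

import numpy as np

def agent_reward(agent_pos, goal_pos):
    # good agent: closer to the goal landmark means higher (less negative) reward
    return -np.linalg.norm(agent_pos - goal_pos)

def adversary_reward(adversary_pos, agent_pos, goal_pos):
    # adversary: rewarded for being near the goal and for the good agent being
    # far from it, i.e. the difference of the two distances described above
    return np.linalg.norm(agent_pos - goal_pos) - np.linalg.norm(adversary_pos - goal_pos)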

Agent observation space: [self_vel, goal_rel_position, goal_landmark_id, all_landmark_rel_positions, landmark_ids, other_agent_rel_positions]

Adversary observation space: [self_vel, all_landmark_rel_positions, other_agent_rel_positions]

Agent action space: [no_action, move_left, move_right, move_down, move_up]

Adversary action space: [no_action, move_left, move_right, move_down, move_up]
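
The shapes listed in the table above can be checked by querying the spaces directly; the values in the comments are what the table implies (adversary_0 has the 8-dimensional observation, agent_0 the 19-dimensional one):

from pettingzoo.mpe import simple_push_v3

env = simple_push_v3.env()
env.reset(seed=42)

print(env.observation_space("adversary_0"))  # Box(-inf, inf, (8,), float32)
print(env.observation_space("agent_0"))      # Box(-inf, inf, (19,), float32)
print(env.action_space("agent_0"))           # Discrete(5), or Box(0.0, 1.0, (5,)) with continuous_actions=True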

Arguments#

simple_push_v3.env(max_cycles=25, continuous_actions=False)

max_cycles: number of frames (a step for each agent) until the game terminates

continuous_actions: whether agents use continuous (Box) action spaces instead of discrete ones (default: False)
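
For example, to run longer episodes with continuous actions (the values here are illustrative, not defaults):

from pettingzoo.mpe import simple_push_v3

env = simple_push_v3.env(max_cycles=50, continuous_actions=True)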

Usage#

AEC#

from pettingzoo.mpe import simple_push_v3

env = simple_push_v3.env(render_mode="human")
env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        # this is where you would insert your policy
        action = env.action_space(agent).sample()

    env.step(action)
env.close()

Parallel#

from pettingzoo.mpe import simple_push_v3

env = simple_push_v3.parallel_env(render_mode="human")
observations, infos = env.reset()

while env.agents:
    # this is where you would insert your policy
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}

    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()

API#

class pettingzoo.mpe.simple_push.simple_push.raw_env(max_cycles=25, continuous_actions=False, render_mode=None)

    action_spaces: dict[AgentID, gymnasium.spaces.Space]
    agent_selection: AgentID
    agents: list[AgentID]
    infos: dict[AgentID, dict[str, Any]]
    observation_spaces: dict[AgentID, gymnasium.spaces.Space]
    possible_agents: list[AgentID]
    rewards: dict[AgentID, float]
    terminations: dict[AgentID, bool]
    truncations: dict[AgentID, bool]
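
These attributes can be read from an environment instance after reset(); a small sketch follows, with printed values that are illustrative:

from pettingzoo.mpe import simple_push_v3

env = simple_push_v3.env()
env.reset(seed=42)

print(env.possible_agents)  # ['adversary_0', 'agent_0']
print(env.agents)           # agents still active in the current episode
print(env.rewards)          # per-agent rewards, e.g. {'adversary_0': 0.0, 'agent_0': 0.0}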