Simple Spread#
This environment is part of the MPE environments. Please read that page first for general information.
| Import | `from pettingzoo.mpe import simple_spread_v3` |
|---|---|
| Actions | Discrete/Continuous |
| Parallel API | Yes |
| Manual Control | No |
| Agents | `agents= [agent_0, agent_1, agent_2]` |
| Agents | 3 |
| Action Shape | (5) |
| Action Values | Discrete(5)/Box(0.0, 1.0, (5)) |
| Observation Shape | (18) |
| Observation Values | (-inf, inf) |
| State Shape | (54,) |
| State Values | (-inf, inf) |
This environment has N agents and N landmarks (default N=3). At a high level, agents must learn to cover all the landmarks while avoiding collisions.
More specifically, all agents are globally rewarded based on how far the closest agent is to each landmark (sum of the minimum distances). Locally, the agents are penalized if they collide with other agents (-1 for each collision). The relative weights of these rewards can be controlled with the `local_ratio` parameter.
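As a concrete illustration, here is a minimal sketch of that reward blend, assuming Euclidean distances; the helper names and array layout are ours, not the environment's internals:

```python
import numpy as np

def global_reward(agent_pos: np.ndarray, landmark_pos: np.ndarray) -> float:
    """Negative sum, over landmarks, of the distance to the closest agent.

    agent_pos: (N, 2) agent positions; landmark_pos: (N, 2) landmark positions.
    """
    # dists[i, j] = Euclidean distance from agent i to landmark j
    dists = np.linalg.norm(agent_pos[:, None, :] - landmark_pos[None, :, :], axis=-1)
    return -dists.min(axis=0).sum()

def blended_reward(local_rew: float, global_rew: float, local_ratio: float = 0.5) -> float:
    """Per the description above: global weight is always 1 - local_ratio."""
    return local_ratio * local_rew + (1.0 - local_ratio) * global_rew
```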
Agent observations: [self_vel, self_pos, landmark_rel_positions, other_agent_rel_positions, communication]
Agent action space: [no_action, move_left, move_right, move_down, move_up]
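For the default N=3, those pieces account for the full 18-dimensional observation: 2 (self_vel) + 2 (self_pos) + 3x2 (landmark_rel_positions) + 2x2 (other_agent_rel_positions) + 2x2 (communication) = 18, and the 54-dimensional state is the three observations concatenated. A small sketch of slicing an observation accordingly (the helper is hypothetical, not part of the library):

```python
import numpy as np

def split_observation(obs: np.ndarray, n: int = 3) -> dict:
    """Slice a simple_spread observation into its named components."""
    sizes = [
        ("self_vel", 2),
        ("self_pos", 2),
        ("landmark_rel_positions", 2 * n),
        ("other_agent_rel_positions", 2 * (n - 1)),
        ("communication", 2 * (n - 1)),
    ]
    parts, i = {}, 0
    for name, size in sizes:
        parts[name] = obs[i:i + size]
        i += size
    assert i == len(obs)  # 18 for the default n=3
    return parts
```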
Arguments#
```python
simple_spread_v3.env(N=3, local_ratio=0.5, max_cycles=25, continuous_actions=False)
```

`N`: number of agents and landmarks

`local_ratio`: weight applied to the local reward. The global reward weight is always 1 - local_ratio.

`max_cycles`: number of frames (a step for each agent) until the game terminates

`continuous_actions`: whether agent action spaces are discrete (default) or continuous
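For instance, a non-default configuration built from the documented signature (the particular values here are illustrative):

```python
from pettingzoo.mpe import simple_spread_v3

# Illustrative settings: 5 agents/landmarks, purely global reward
# (local_ratio=0.0), longer episodes, continuous actions.
env = simple_spread_v3.env(
    N=5,
    local_ratio=0.0,
    max_cycles=50,
    continuous_actions=True,
)
```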
Usage#
AEC#
```python
from pettingzoo.mpe import simple_spread_v3

env = simple_spread_v3.env(render_mode="human")
env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        # this is where you would insert your policy
        action = env.action_space(agent).sample()

    env.step(action)
env.close()
```
Parallel#
```python
from pettingzoo.mpe import simple_spread_v3

env = simple_spread_v3.parallel_env(render_mode="human")
observations, infos = env.reset()

while env.agents:
    # this is where you would insert your policy
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}

    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
```
API#
- class pettingzoo.mpe.simple_spread.simple_spread.raw_env(N=3, local_ratio=0.5, max_cycles=25, continuous_actions=False, render_mode=None)
  - action_spaces: dict[AgentID, gymnasium.spaces.Space]
  - agent_selection: AgentID
  - agents: list[AgentID]
  - infos: dict[AgentID, dict[str, Any]]
  - observation_spaces: dict[AgentID, gymnasium.spaces.Space]
  - possible_agents: list[AgentID]
  - rewards: dict[AgentID, float]
  - terminations: dict[AgentID, bool]
  - truncations: dict[AgentID, bool]
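A quick sketch of reading these attributes through the standard env constructor; the commented outputs are what we would expect at reset, not captured output:

```python
from pettingzoo.mpe import simple_spread_v3

env = simple_spread_v3.env()
env.reset(seed=0)

print(env.possible_agents)   # ['agent_0', 'agent_1', 'agent_2']
print(env.agents)            # live agents; same as above right after reset
print(env.agent_selection)   # the agent whose turn it is in the AEC loop
print(env.rewards)           # per-agent rewards, e.g. {'agent_0': 0.0, ...}
print(env.terminations)      # per-agent termination flags, all False at reset
```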