Simple#

This environment is part of the MPE environments. Please read that page first for general information.
| Import | `from pettingzoo.mpe import simple_v3` |
|---|---|
| Actions | Discrete/Continuous |
| Parallel API | Yes |
| Manual Control | No |
| Agents | `['agent_0']` |
| Agents | 1 |
| Action Shape | (5) |
| Action Values | Discrete(5)/Box(0.0, 1.0, (5,)) |
| Observation Shape | (4) |
| Observation Values | (-inf, inf) |
| State Shape | (4,) |
| State Values | (-inf, inf) |
In this environment, a single agent sees a landmark's position and is rewarded based on how close it gets to the landmark (Euclidean distance). This is not a multi-agent environment and is primarily intended for debugging purposes.
Observation space: [self_vel, landmark_rel_position]
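The reward structure above can be sketched in plain Python. This is an illustration of the idea (closer to the landmark means a higher, less negative reward), not the environment's internal code; the exact scaling used inside the environment may differ.

```python
import math

def distance_reward(agent_pos, landmark_pos):
    """Illustrative sketch: reward is the negative Euclidean distance
    between the agent and the landmark, so the maximum reward of 0.0
    is earned by sitting exactly on the landmark."""
    return -math.dist(agent_pos, landmark_pos)

print(distance_reward((0.0, 0.0), (0.0, 0.0)))  # -> 0.0 (on the landmark)
print(distance_reward((1.0, 0.0), (0.0, 0.0)))  # -> -1.0 (one unit away)
```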
Arguments#
simple_v3.env(max_cycles=25, continuous_actions=False)
max_cycles
: number of frames (a step for each agent) until the game terminates

continuous_actions
: whether agent action spaces are discrete (default) or continuous
API#
- class pettingzoo.mpe.simple.simple.raw_env(max_cycles=25, continuous_actions=False, render_mode=None)[source]#
  Uses the args and kwargs from the object's constructor for pickling.

- action_spaces: dict[AgentID, gymnasium.spaces.Space]#
- agent_selection: AgentID#
- agents: list[AgentID]#
- infos: dict[AgentID, dict[str, Any]]#
- observation_spaces: dict[AgentID, gymnasium.spaces.Space]#
- possible_agents: list[AgentID]#
- rewards: dict[AgentID, float]#
- terminations: dict[AgentID, bool]#
- truncations: dict[AgentID, bool]#