Utils#
PettingZoo has an assortment of helper utilities which provide additional functionality for interacting with environments.
Note: see also PettingZoo Wrappers, which provide additional functionality for customizing environments.
Average Total Reward#
- pettingzoo.utils.average_total_reward.average_total_reward(env: AECEnv, max_episodes: int = 100, max_steps: int = 10000000000) → float [source]#
Calculates the average total reward over the episodes for AEC environments.
Runs an env object with random actions until either max_episodes or max_steps is reached. Reward is summed across all agents, making it unsuited for use in zero-sum games.
The average total reward for an environment, as presented in the documentation, is summed over all agents over all steps in the episode, averaged over episodes.
This value is important for establishing the simplest possible baseline: the random policy.
from pettingzoo.utils import average_total_reward
from pettingzoo.butterfly import pistonball_v6
env = pistonball_v6.env()
average_total_reward(env, max_episodes=100, max_steps=10000000000)
Here max_episodes and max_steps both limit the total amount of evaluation; evaluation stops as soon as the first limit is reached.
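For example, the returned value can be kept as a simple random-policy baseline (the variable name random_baseline and the smaller limits below are illustrative choices, not defaults):
from pettingzoo.utils import average_total_reward
from pettingzoo.butterfly import pistonball_v6
env = pistonball_v6.env()
# Smaller limits than the defaults, purely to keep the evaluation quick.
random_baseline = average_total_reward(env, max_episodes=10, max_steps=100000)
print(f"Random-policy baseline: {random_baseline}")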
Observation Saving#
- pettingzoo.utils.save_observation.save_observation(env: AECEnv[AgentID, Any, Any], agent: AgentID | None = None, all_agents: bool = False, save_dir: str = os.getcwd()) → None [source]#
If the agents in a game make observations that are images, the observations can be saved to an image file. This function takes in the environment, along with an optionally specified agent. If no agent is specified, the environment's currently selected agent is used. If all_agents is passed in as True, the observations of all agents in the environment are saved. By default, the images are saved to the current working directory, in a folder matching the environment name; each saved image is named after the observing agent. If save_dir is passed in, a new folder is created where the images will be saved. This function can be called during training or evaluation if desired, which is why the environment has to be reset before it can be used.
from pettingzoo.utils import save_observation
from pettingzoo.butterfly import pistonball_v6
env = pistonball_v6.env()
env.reset(seed=42)
save_observation(env, agent=None, all_agents=False)
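As a further sketch, the observations of every agent can be saved to a custom directory (the folder name "observations" below is an arbitrary example, not a default):
from pettingzoo.utils import save_observation
from pettingzoo.butterfly import pistonball_v6
env = pistonball_v6.env()
env.reset(seed=42)
# Save every agent's current observation; "observations" is an arbitrary folder name.
save_observation(env, all_agents=True, save_dir="observations")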
Capture Stdout#
Base class used by CaptureStdoutWrapper. Captures the system standard output as a string value in a variable.
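A minimal usage sketch, assuming capture_stdout (in pettingzoo.utils.capture_stdout) can be used as a context manager whose yielded buffer supports getvalue(); treat the exact interface as an assumption and check the source if in doubt:
from pettingzoo.utils.capture_stdout import capture_stdout
# Assumed context-manager usage: printed text is captured instead of reaching the terminal.
with capture_stdout() as buffer:
    print("hello")
    data = buffer.getvalue()  # expected to contain "hello\n"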
Agent Selector#
The agent selector utility allows for easy cycling of agents in an AEC environment. At any time it can be reset or reinitialized with a new order, allowing for changes in turn order or handling a dynamic number of agents (see Knights-Archers-Zombies for an example of spawning/killing agents).
Note: while many PettingZoo environments use agent_selector to manage agent cycling internally, it is not intended to be used externally when interacting with an environment. Instead, iterate with for agent in env.agent_iter() (see AEC API Usage), as in the sketch below.
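For reference, the standard external interaction loop looks roughly like this (random actions are used purely for illustration):
from pettingzoo.butterfly import pistonball_v6
env = pistonball_v6.env()
env.reset(seed=42)
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    # A terminated or truncated agent must receive a None action.
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)
env.close()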
- class pettingzoo.utils.agent_selector.agent_selector(agent_order: list[Any])[source]#
Outputs an agent in the given order whenever next() is called. Can reinitialize to a new order.
Example
>>> from pettingzoo.utils import agent_selector
>>> agent_selector = agent_selector(agent_order=["player1", "player2"])
>>> agent_selector.reset()
'player1'
>>> agent_selector.next()
'player2'
>>> agent_selector.is_last()
True
>>> agent_selector.reinit(agent_order=["player2", "player1"])
>>> agent_selector.next()
'player2'
>>> agent_selector.is_last()
False
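As an illustration of the internal cycling pattern (the agent names and loop below are a hypothetical sketch, not taken from any particular environment):
from pettingzoo.utils import agent_selector
# Hypothetical sketch of how an AEC environment might cycle agents internally.
agents = ["player_0", "player_1", "player_2"]
selector = agent_selector(agent_order=agents)
agent_selection = selector.reset()      # selects "player_0"
for _ in range(2 * len(agents)):
    if selector.is_last():
        # A typical place to accumulate rewards once every agent has acted.
        pass
    agent_selection = selector.next()   # cycles player_1, player_2, player_0, ...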
EnvLogger#
EnvLogger provides functionality for common warnings and errors for environments, and allows for custom messages. It is used internally in PettingZoo Wrappers.
- class pettingzoo.utils.env_logger.EnvLogger[source]#
Used for logging warnings and errors for environments.
- static error_agent_iter_before_reset() → None [source]#
Error: reset() needs to be called before agent_iter().
- static error_observe_before_reset() → None [source]#
Error: reset() needs to be called before observe.
- static error_possible_agents_attribute_missing(name: str) → None [source]#
Warns: [ERROR]: This environment does not support {attribute}.
- mqueue: list[Any] = []#
- static warn_action_out_of_bound(action: Any, action_space: Space, backup_policy: str) → None [source]#
Warns: [WARNING]: Received an action {action} that was outside action space {action_space}.
- static warn_close_before_reset() → None [source]#
Warns: [WARNING]: reset() needs to be called before close.
- static warn_close_unrendered_env() → None [source]#
Warns: [WARNING]: Called close on an unrendered environment.
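For example, a custom environment could reuse these standard messages rather than defining its own (the MyEnv class below is a hypothetical sketch):
from pettingzoo.utils.env_logger import EnvLogger
# Hypothetical custom environment reusing EnvLogger's standard warnings.
class MyEnv:
    def __init__(self):
        self._has_reset = False
    def reset(self, seed=None, options=None):
        self._has_reset = True
    def close(self):
        if not self._has_reset:
            # Emits "[WARNING]: reset() needs to be called before close."
            EnvLogger.warn_close_before_reset()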