PettingZoo has an assortment of helper utilities which provide additional functionality for interacting with environments.

Note: see also PettingZoo Wrappers, which provide additional functionality for customizing environments.

Average Total Reward#

pettingzoo.utils.average_total_reward.average_total_reward(env: AECEnv, max_episodes: int = 100, max_steps: int = 10000000000) float[source]#

Calculates the average total reward over the episodes for AEC environments.

Runs an env object with random actions until either max_episodes or max_steps is reached. Reward is summed across all agents, making it unsuited for use in zero-sum games.

The average total reward for an environment, as presented in the documentation, is summed over all agents over all steps in the episode, averaged over episodes.

This value is important for establishing the simplest possible baseline: the random policy.

from pettingzoo.utils import average_total_reward
from pettingzoo.butterfly import pistonball_v6
env = pistonball_v6.env()
average_total_reward(env, max_episodes=100, max_steps=10000000000)

Here max_episodes and max_steps both limit the total amount of evaluation: whichever limit is reached first stops the evaluation.
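As a concrete illustration of the arithmetic (with made-up reward numbers, not actual pistonball output), the reported value sums each episode's rewards across all agents, then averages those per-episode totals over episodes:

```python
# Illustrative numbers only: per-agent total rewards for two episodes
episode_rewards = [
    {"piston_0": 1.5, "piston_1": 2.0},  # episode 1
    {"piston_0": 0.5, "piston_1": 1.0},  # episode 2
]

# Sum across agents within each episode, then average over episodes
totals = [sum(ep.values()) for ep in episode_rewards]  # [3.5, 1.5]
average_total = sum(totals) / len(totals)              # 2.5
```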

Observation Saving#

pettingzoo.utils.save_observation.save_observation(env: AECEnv[AgentID, Any, Any], agent: AgentID | None = None, all_agents: bool = False, save_dir: str = os.getcwd()) None[source]#

If the agents in a game make observations that are images, the observations can be saved to an image file. This function takes in the environment along with a specified agent. If no agent is specified, the currently selected agent of the environment is chosen. If all_agents is True, the observations of all agents in the environment are saved. By default, the images are saved to the current working directory in a folder matching the environment name. The saved image will match the name of the observing agent. If save_dir is passed in, a new folder is created where the images will be saved. This function can be called during training/evaluation if desired, which is why the environment must be reset before it can be used.

from pettingzoo.utils import save_observation
from pettingzoo.butterfly import pistonball_v6
env = pistonball_v6.env()
env.reset(seed=42)
save_observation(env, agent=None, all_agents=False)

Capture Stdout#

Base class which is used by CaptureStdoutWrapper. Captures system standard out as a string value in a variable.

class pettingzoo.utils.capture_stdout.capture_stdout[source]#

Context manager for capturing stdout as a string.


>>> from pettingzoo.utils.capture_stdout import capture_stdout
>>> with capture_stdout() as var:
...     print("test")
...     data = var.getvalue()
>>> data
'test\n'
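For intuition, this behavior can be mimicked with a few lines of standard-library Python. The following is a simplified sketch (a hypothetical capture_stdout_sketch, not the real class): it swaps sys.stdout for an io.StringIO buffer and restores the original on exit.

```python
import io
import sys
from contextlib import contextmanager

@contextmanager
def capture_stdout_sketch():
    """Redirect stdout to a StringIO buffer for the duration of the block."""
    old_stdout = sys.stdout
    buffer = io.StringIO()
    sys.stdout = buffer
    try:
        yield buffer
    finally:
        sys.stdout = old_stdout  # always restore the real stdout

with capture_stdout_sketch() as var:
    print("test")
    data = var.getvalue()

# data now holds the captured text, "test\n"
```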

Agent Selector#

The agent selector utility allows for easy cycling of agents in an AEC environment. At any time it can be reset or reinitialized with a new order, allowing for changes in turn order or handling a dynamic number of agents (see Knights-Archers-Zombies for an example of spawning/killing agents).

Note: while many PettingZoo environments use agent_selector to manage agent cycling internally, it is not intended to be used externally when interacting with an environment. Instead, use for agent in env.agent_iter() (see AEC API Usage).

class pettingzoo.utils.agent_selector.agent_selector(agent_order: list[Any])[source]#

Outputs an agent in the given order whenever next() is called.

Can reinitialize to a new order.


>>> from pettingzoo.utils import agent_selector
>>> agent_selector = agent_selector(agent_order=["player1", "player2"])
>>> agent_selector.reset()
'player1'
>>> agent_selector.next()
'player2'
>>> agent_selector.is_last()
True
>>> agent_selector.reinit(agent_order=["player2", "player1"])
>>> agent_selector.next()
'player2'
>>> agent_selector.is_last()
False
is_first() bool[source]#

Check if the current agent is the first agent in the cycle.

is_last() bool[source]#

Check if the current agent is the last agent in the cycle.

next() Any[source]#

Get the next agent.

reinit(agent_order: list[Any]) None[source]#

Reinitialize to a new order.

reset() Any[source]#

Reset to the original order.


Env Logger#

EnvLogger provides functionality for common warnings and errors for environments, and allows for custom messages. It is used internally in PettingZoo Wrappers.

class pettingzoo.utils.env_logger.EnvLogger[source]#

Used for logging warnings and errors for environments.

static error_agent_iter_before_reset() None[source]#

Error: reset() needs to be called before agent_iter().

static error_nan_action() None[source]#

Error: step() cannot take in a nan action.

static error_observe_before_reset() None[source]#

Error: reset() needs to be called before observe.

static error_possible_agents_attribute_missing(name: str) None[source]#

Warns: [ERROR]: This environment does not support {attribute}.

static error_render_before_reset() None[source]#

Error: reset() needs to be called before render.

static error_state_before_reset() None[source]#

Error: reset() needs to be called before state.

static error_step_before_reset() None[source]#

Error: reset() needs to be called before step.

static flush() None[source]#

Flushes EnvLogger output.

static get_logger() Logger[source]#

Returns the logger object.

mqueue: list[Any] = []#

static suppress_output() None[source]#

Suppresses EnvLogger output.

static unsuppress_output() None[source]#

Resets EnvLogger output to be printed.

static warn_action_out_of_bound(action: Any, action_space: Space, backup_policy: str) None[source]#

Warns: [WARNING]: Received an action {action} that was outside action space {action_space}.

static warn_close_before_reset() None[source]#

Warns: [WARNING]: reset() needs to be called before close.

static warn_close_unrendered_env() None[source]#

Warns: [WARNING]: Called close on an unrendered environment.

static warn_on_illegal_move() None[source]#

Warns: [WARNING]: Illegal move made, game terminating with current player losing.

static warn_step_after_terminated_truncated() None[source]#

Warns: [WARNING]: step() called after all agents are terminated or truncated. Should reset() first.
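The helper-per-message pattern above is easy to picture with the standard logging module. The following is a simplified, hypothetical analogy (the names and the decision to raise ValueError are this sketch's, not EnvLogger's actual implementation): a shared logger plus one small helper per canned message.

```python
import logging

# Hypothetical analogy: a shared module-level logger with one helper
# function per canned warning or error message
logger = logging.getLogger("env_logger_sketch")
logger.setLevel(logging.WARNING)

def warn_step_after_done() -> None:
    # Warning helpers log and let execution continue
    logger.warning(
        "[WARNING]: step() called after all agents are terminated "
        "or truncated. Should reset() first."
    )

def error_nan_action() -> None:
    # Error helpers stop execution (raising is this sketch's choice)
    raise ValueError("[ERROR]: step() cannot take in a nan action.")
```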