Release Notes#

PettingZoo 1.24.3#

Released on 2024-01-18 - GitHub - PyPI

PettingZoo 1.24.3 Release Notes:

This is a minor release with bugfixes, improvements, and documentation updates. Most notably, we have added a state function to the multiwalker environment and fixed a bug that caused wrappers to clear custom attributes from underlying environments; see #1140 for more information.
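
The restored pass-through behavior can be sketched with toy classes (hypothetical, not PettingZoo's actual wrapper machinery): attribute lookups the wrapper does not define fall through to the wrapped environment, so custom attributes survive wrapping.

```python
# Toy sketch (not the real PettingZoo implementation): a wrapper that
# forwards unknown attribute lookups to the wrapped environment.

class ToyEnv:
    def __init__(self):
        self.custom_attr = 42  # a user-defined attribute on the raw env


class PassThroughWrapper:
    def __init__(self, env):
        self.env = env

    def __getattr__(self, name):
        # Invoked only when normal lookup fails; delegating to the env
        # keeps custom attributes visible through the wrapper.
        return getattr(self.env, name)


wrapped = PassThroughWrapper(ToyEnv())
print(wrapped.custom_attr)  # -> 42
```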

We have also added dictionaries mapping env names to env modules for each environment type, accessible as follows: from pettingzoo.mpe import mpe_environments. These mappings are combined into a single mapping of all environments: from pettingzoo.utils.all_modules import all_environments. Both mappings contain keys such as mpe/simple_adversary_v3. For more information, see #1155
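
The intended access pattern looks roughly like this; the import is guarded so the sketch runs even where PettingZoo 1.24.3 is not installed (the fallback key mirrors the documented format and is only a stand-in).

```python
# Sketch of the new env-name -> env-module mappings (imports per the
# release notes; the except branch is a stand-in, not real data).
try:
    from pettingzoo.mpe import mpe_environments
    from pettingzoo.utils.all_modules import all_environments
except ImportError:
    mpe_environments = {"mpe/simple_adversary_v3": None}  # stand-in
    all_environments = dict(mpe_environments)

# Keys look like "mpe/simple_adversary_v3"; every per-family mapping
# is folded into the combined all_environments mapping.
for name in mpe_environments:
    assert name in all_environments
```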

New Features and Improvements

  • feature/provide access to wrapped attr (#1140)
  • Adding state function to multiwalker (#1149)
  • Add mappings from env name to env module (e.g., mpe_environments) in addition to all_environments (#1155)

Bug Fixes

  • Fix ray requirements for tutorial (#1139)
  • Fix MPE SimpleEnv continuous actions to move in the same direction as discrete case (#1144)

Documentation Updates

  • Add MATS Gym to 3rd party env list (#1152)
  • Fix the comments for check_for_winner (#1148)
  • Include AgileRL tutorials in tutorials list (#1137)
  • Added single quotes around pip install arguments with square brackets (#1153)
  • Add single quotes around all pip install extras (#1154)


This release includes contributions from: @elliottower, @ffelten, @axelbr, @shahofblah, @helpingstar and @nicku-a

Many thanks to our contributors, as well as many past contributors who have made this possible. We would also like to thank everyone who has helped out with bug reports or feature suggestions, which are critical to our development. We are always welcoming new contributors; if you are interested, please join our Discord server.

Full Changelog: 1.24.2...1.24.3

PettingZoo 1.24.2#

Released on 2023-11-16 - GitHub - PyPI

PettingZoo 1.24.2 Release Notes:

This release includes three new tutorials from AgileRL, as well as a number of bugfixes, testing improvements, and documentation updates.

New Features and Improvements

  • AgileRL tutorials: MATD3, MADDPG, and DQN self-play/curriculum learning (#1086, #1124, #1128)
    • These are now our most highly performant and actively maintained tutorials; visit the AgileRL Discord for questions/troubleshooting
  • Add multi-episode wrapper for AEC and parallel envs (#1105)
    • This can be used, for example, to evaluate the results of multiple rounds of Texas Hold'em poker rather than a single hand

Bug Fixes

  • Check parallel_seed_test for all envs, fix seed_test to actually use num_cycles arg (#1088)
  • Fix hanabi not rendering on env.step(), clean up code (#1087)
  • Clean up parallel_api_test (#1095)
  • Update to include testing requirements (#1097)
  • Update to support Ray RLLib MultiAgentEnv (#1096)
  • Bugfixes for RLCard environments: render fps, black screen flashing (#1103)
  • Allow parallel envs to have other optional keys in info/obs dicts (e.g., "common") (#1110)
  • Dead variable removal + macOS pygame fix (#1107)
  • Update Ray tutorials to RLlib 2.7.0 (#1112)
  • Change AEC last() to assert agent is not None, accounting for non-string AgentIDs (#1120)
  • Fix typo in simple_reference docstring, fix workflows to use Python 3.11 by default (#1128)
  • Fix minor bug in TerminateIllegal wrapper indexing empty info dict (#1129)
  • Fix a rendering bug in the MPE environments (#1130)
  • Fix agent indexing bug in SB3 tutorials (#1133)
  • Fix seed test to work with action_mask in info, add tests to ensure info action masking works (#1134)
  • Fix bug with generated_agents custom AgentID tests (#1135)

Documentation Updates

  • Update documentation testing (#1089, #1090)
  • Fix broken documentation links (#1092)
  • Fix typo in environment creation docs (#1116)


This release includes contributions from: @elliottower, @nicku-a, @mikepratt1, @xixinzhang, @umutucak, @jjshoots, @chrisyeh96, @Fernadoo, and @Kchour

Many thanks to our contributors, as well as many past contributors who have made this possible. We would also like to thank everyone who has helped out with bug reports or feature suggestions, which are critical to our development. We are always welcoming new contributors; if you are interested, please join our Discord server.

Full Changelog: 1.24.1...1.24.2

PettingZoo 1.24.1#

Released on 2023-09-04 - GitHub - PyPI

PettingZoo 1.24.1 Release Notes:

This is a hotfix release to fix compatibility issues with Shimmy, due to an unintentional typevar deletion.

Other changes include: minor fixes to the knights_archers_zombies environment, improved CI testing (including using pytest-xdist for parallelization), and new documentation testing using pytest-markdown-docs, ensuring that every codeblock in our documentation runs successfully, including environment usage scripts.

Environment creation documentation has also been improved and made more beginner-friendly. The Environment Creation Tutorial has also been renamed to Custom Environment Tutorial, to avoid confusion with the Getting Started section's Environment Creation page.

Bug Fixes

  • Add back ObsDict and ActionDict (#1077)
    • These definitions were mistakenly removed in the previous release; apologies for the inconvenience
  • Fix rendering FPS and manual control script for knights_archers_zombies (#1080)
  • Clean up duplicated python tests, add additional parameter combination tests (#1074)

Documentation Updates

  • Add doctesting for all codeblocks in documentation, including usage scripts (#1083)
  • Update environment creation tutorials (#1082, #1084)

Full Changelog: 1.24.0...1.24.1

PettingZoo 1.24.0#

Released on 2023-08-22 - GitHub - PyPI

PettingZoo 1.24.0 Release Notes:

This release includes support for Python 3.11, many updates to Classic environments (including updated Chess and Hanabi environment versions, and rendering for all RLCard envs), and many bugfixes, testing expansions, and documentation updates.

We are also excited to announce 3 tutorials for Stable-Baselines3, updated RLlib tutorials (#1051), and an updated CleanRL multi-agent Atari tutorial including WandB and TensorBoard integration.

Co-released in order to make this release possible are SuperSuit 3.9.0 and Shimmy 1.2.0, fixing Stable-Baselines3 and OpenSpiel compatibility, respectively.

Breaking Changes

  • Python 3.7 is no longer supported, as it has reached end-of-life
  • We have deprecated chess_v5 in favor of updated chess_v6
  • We have deprecated hanabi_v4 in favor of hanabi_v5

New Features and Improvements

  • Python 3.11 support (#1029)
  • Permit AgentIDs other than str (#1071)
    • It is now acceptable to use other types such as integers as AgentIDs
  • Add Stable-Baselines3 tutorials (#1015, #1017)
  • Add updated CleanRL multi-agent Atari example (#1033)
    • Adapted to work with Gymnasium and current PettingZoo/SuperSuit
    • Full training script with CLI/logging and integration with WandB and TensorBoard
  • Update Chess to v6:
    • Add checks for insufficient material, the 50-move rule, and 3-fold repetition (#997)
    • Fix to white perspective, fix observation bug, add documentation (#1004)
    • En passant representation (see docstring) has been made consistent with Leela Chess Zero (#1004)
    • Update python-chess version from 1.7.0 to 1.9.4 (#1026)
  • Update Hanabi to v5:
    • Now depends on Shimmy's OpenSpielCompatibility wrapper (#948)
    • OpenSpiel is better tested, offers superior performance due to its C++ implementation, and removes the dependency on the unmaintained Hanabi Learning Environment
    • This is the first update in over 2 years; it fixes a large number of issues and brings the environment up to current code standards
  • Add rendering for Gin Rummy, Leduc Holdem, and Tic-Tac-Toe (#1054)
  • Adapt AssertOutOfBounds wrapper to work with all environments, rather than discrete only (#1046)
  • Add additional pre-commit hooks, doctests to match Gymnasium (#1012)

Bug Fixes

  • Fix Pistonball to only render if render_mode is not None (#1014)
  • Fix Connect Four not switching to next agent after termination (#1020)
  • Fix classic environments screen sizes, add type hints, fix pre-commit (#998)
  • Fix all environments to render at correct FPS, clean up pygame code (#999)
  • Fix SuperSuit integration for SB3 tutorials (#1031)
  • Update CleanRL tutorial requirements to most recent SuperSuit/PettingZoo versions (#1019)
  • Update RLlib tutorial requirements to most recent SuperSuit/PettingZoo versions (#1018)
  • Fix typo in Waterworld documentation (#1058)


This release includes contributions from: @elliottower, @DmytroIvasiuk, @jacob975, @dylwil3, @Jammf, @Bamboofungus, @BertrandDecoster, @murtazarang and @pimpale.

Many thanks to our contributors, as well as many past contributors who have made this possible. We would also like to thank everyone who has helped out with bug reports or feature suggestions, which are critical to our development. We are always welcoming new contributors; if you are interested, please join our Discord server.

Full Changelog: 1.23.1...1.24.0

PettingZoo 1.23.1#

Released on 2023-05-24 - GitHub - PyPI

PettingZoo 1.23.1 Release Notes:

This release is a small hotfix to fix compatibility issues with Shimmy and other small bugs.

Bug Fixes:

  • Fix bug in API test test_action_flexibility() (#986)
    • Fixes tests for Shimmy's OpenSpiel wrapper
  • Remove ParallelEnv.seed() (#987)
  • Update RLlib requirements (#992)

Documentation Updates:

  • Add info about aec and parallel APIs to homepage (#985)
  • Create CITATION.cff (#990)
  • Added Carla gym to the third-party environments list (#991)

Full Changelog: 1.23.0...1.23.1

PettingZoo 1.23.0#

Released on 2023-05-15 - GitHub - PyPI

PettingZoo 1.23.0 Release Notes:

This release finishes the process of standardizing the PettingZoo API to fully match Gymnasium. The deprecated env.seed() method has been removed in favor of env.reset(seed=seed), and the return_info argument from reset() has been removed—info is now always returned on reset.

New features include full support for serialization using Pickle, and updated testing: pickle tests, improved API test, and re-written seed test (matching Gymnasium). The library has also been updated to use pyproject.toml, to make installation more consistent and reliable, and to comply with PEP 621 standards.
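
The pickle round-trip property can be illustrated with a stand-in class (a toy, not a real PettingZoo environment):

```python
import pickle


class ToyEnv:
    """Stand-in for a picklable environment."""

    def __init__(self):
        self.agents = ["player_0", "player_1"]


env = ToyEnv()
clone = pickle.loads(pickle.dumps(env))  # serialize, then restore
assert clone.agents == env.agents
```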

This release includes significant documentation updates: full installation and usage examples for each environment type (Atari, Butterfly, Classic, MPE, SISL), 9+ new third-party environments, new action masking documentation, a new LangChain tutorial, updated CleanRL, Tianshou, and RLlib tutorials, and more.

Breaking Changes:

To ensure full consistency between the PettingZoo and Gymnasium APIs, the following changes have been made:

The deprecated environment seed() method has been fully removed.

  • To seed an environment, call env.reset(seed=0)

The return_info argument has been removed from the reset() function.

  • Calls to reset() will now always return observation and info.
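
Both changes can be illustrated with a toy environment (hypothetical class, not the real API surface): seeding happens through reset, and reset always returns an (observation, info) pair.

```python
import random


class ToyEnv:
    """Hypothetical env following the Gymnasium-style reset contract."""

    def reset(self, seed=None):
        # Seeding is done via reset(seed=...); there is no seed() method.
        self.rng = random.Random(seed)
        observation = self.rng.random()
        info = {}  # always returned; no return_info flag
        return observation, info


env = ToyEnv()
obs1, info = env.reset(seed=0)
obs2, _ = env.reset(seed=0)
assert obs1 == obs2  # same seed, same first observation
```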

New Features and Improvements

  • Action masking is now supported using either observation["action_mask"] or info["action_mask"], with documentation and examples (#953)
  • Replace setup.py with pyproject.toml (#875)
  • Remove return_info argument from reset() (#890)
  • Add type hinting for utils and base environments (#964)
  • Add aec_wrapper_fn to match parallel_wrapper_fn (#879)
  • Update SISL Waterworld environment to increase maximum acceleration, for smoother behavior (#882)
  • Update MPE simple_spread agent reward (#894), update all MPE envs to update position before velocity, matching the original paper (#970)
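
The action-mask convention from #953 can be sketched with toy data (not a real environment): the mask is a binary vector over a discrete action space, with 1 marking a legal action.

```python
import random

# The mask may arrive in observation["action_mask"] or
# info["action_mask"], depending on the environment; toy values here.
observation = {
    "observation": [0.0, 1.0, 0.5],
    "action_mask": [0, 1, 1, 0],  # only actions 1 and 2 are legal
}

legal_actions = [i for i, m in enumerate(observation["action_mask"]) if m]
action = random.choice(legal_actions)  # sample among legal actions only
assert observation["action_mask"][action] == 1
```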

Bug Fixes:

  • Rename BaseParallelWraper to BaseParallelWrapper (fixed typo) (#876, #908)
  • Removed casting actions to int in parallel_to_aec conversion (#975)
  • Fix broken Tianshou tutorial and update dependencies (#980)
  • Fix an issue where MPE envs would render black screen when using the rgb_array mode (#874)
  • Fix failing CI tests in GitHub workflows (#886)
  • Fix minor linting issues with pre-commit hooks (#835)
  • Resolve a large number of pytest warnings (#897)
  • Remove unnecessary lines in MPE code (#891)
  • Update Tianshou and CleanRL tutorials to work with the new API changes (#984)

Documentation Updates:

  • Add full installation and usage examples for each environment type (#906)
  • Update Third-Party Environments with two new custom board game environments (gobblet-rl and cathedral-rl) (#907)
  • Add full documentation for wrappers (including Shimmy compatibility wrappers) (#904, #942)
  • Add LangChain tutorial (#979)
  • Updated EnvironmentCreation tutorial (#903, #972)
  • Updated Tianshou Tutorial (#980)
  • Update README with getting started information (#950)
  • Add installation instructions to Getting Started documentation page (#968)
  • Update docs contributing README (#883)
  • Update homepage to include video demonstrating environments, cleanup homepage text, add logo (#954, #960)

Full Changelog: 1.22.3...1.23.0


PettingZoo 1.22.4#

Released on 2023-03-20 - GitHub - PyPI

This release has been yanked due to breaking API changes. We are working hard to address this in the next release.


Full Changelog: 1.22.3...1.22.4


PettingZoo 1.22.3#

Released on 2022-12-28 - GitHub - PyPI


Full Changelog: 1.22.2...1.22.3


PettingZoo 1.22.2#

Released on 2022-11-11 - GitHub - PyPI


Full Changelog: 1.22.1...1.22.2


PettingZoo 1.22.1#

Released on 2022-10-25 - GitHub - PyPI

Full Changelog: 1.22.0...1.22.1


PettingZoo 1.22.0#

Released on 2022-10-07 - GitHub - PyPI

Major API change: done -> termination and truncation, matching Gymnasium's new API.
The gym dependency has been replaced with gymnasium, which is actively maintained.
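
The split can be illustrated with toy logic (a hypothetical helper, not the real environment code): termination marks an episode ending because an environment condition was met, truncation marks hitting the time/frame limit.

```python
MAX_CYCLES = 5  # toy time limit


def episode_status(cycle, goal_reached):
    termination = goal_reached        # env condition met
    truncation = cycle >= MAX_CYCLES  # time/frame limit exceeded
    return termination, truncation


assert episode_status(2, goal_reached=True) == (True, False)
assert episode_status(5, goal_reached=False) == (False, True)
```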


Full Changelog: 1.21.0...1.22.0


PettingZoo 1.21.0#

Released on 2022-09-24 - GitHub - PyPI

What's Changed

  1. As part of the Gym update to 0.26, the following change has been made:
    • done -> termination and truncation: The singular done signal has been changed to a termination and truncation signal, where termination dictates that the environment has ended due to meeting certain conditions, and truncation dictates that the environment has ended due to exceeding the time/frame limit.
  2. Butterfly/Prospector, Classic/Mahjong, Classic/Doudizhu, Classic/Backgammon, and Classic/Checkers have been pulled.
  3. Some QOL improvements for development, such as moving pyright to pre-commit and enforcing pydocstyle.
  4. Massive website upgrade.


Full Changelog: 1.20.1...1.21.0


PettingZoo 1.20.1#

Released on 2022-08-07 - GitHub - PyPI

Refer to the previous version (1.20.0) for the list of changes; this version exists only due to technical problems with publishing to PyPI.


PettingZoo 1.20.0#

Released on 2022-08-07 - GitHub - PyPI


Full Changelog: 1.19.1...1.20.0


PettingZoo 1.19.1#

Released on 2022-06-21 - GitHub - PyPI

  • Update Gym requirement version


PettingZoo 1.19.0#

Released on 2022-06-21 - GitHub - PyPI

  • Streamlined all envs to have consistent folder structures
  • Added info as an output of reset to be consistent with the new Gym API

PettingZoo 1.18.1#

Released on 2022-04-29 - GitHub - PyPI

  • Massive overhaul to Knight Archers Zombies, version bumped
  • Changed Atari games to use minimal observation space by default, all versions bumped
  • Large bug fix to all MAgent environments, versions bumped
  • MAgent environments now have Windows binaries
  • Removed Prison environment
  • Multiwalker bug fix, version bumped
  • Large number of test fixes
  • Replaced manual_control with new manual_policy method
  • Converted seed method to an argument of reset to match new Gym API

(The PettingZoo 1.18.0 release never existed due to technical issues)


Released on 2022-03-15 - GitHub - PyPI

  • Changed metadata naming scheme to match gym. In particular render.modes -> render_modes and video.frames_per_second -> render_fps
  • Fixed bad pettingzoo import error messages caused by auto-deprecation logic
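
A hypothetical migration helper illustrates the renames (the key mapping is from the note above; the helper itself is not part of PettingZoo):

```python
# Renamed metadata keys; anything else passes through unchanged.
RENAMES = {
    "render.modes": "render_modes",
    "video.frames_per_second": "render_fps",
}


def migrate_metadata(metadata):
    return {RENAMES.get(key, key): value for key, value in metadata.items()}


old = {"render.modes": ["human"], "video.frames_per_second": 30}
print(migrate_metadata(old))  # {'render_modes': ['human'], 'render_fps': 30}
```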


Released on 2022-03-05 - GitHub - PyPI

  • KAZ: Code rewrite and numerous fixes, added manual control capability
  • Supports changes to seeding in gym 0.22.0
  • Fixed prison state space, bumped version
  • Fixed battlefield state space
  • Increased default cycles in api tests (making them catch more errors than before)
  • Added turn-based to parallel wrapper
  • Moved MAgent render logic to the MAgent repo


Released on 2022-01-28 - GitHub - PyPI

  • Bug fixes to KAZ, pistonball, multiwalker, cooperative pong. Versions bumped.
  • Removed logo from gather, version bumped.
  • Added FPS attribute to all environments to make rendering easier.
  • Multiwalker now uses pygame instead of pyglet for rendering
  • Renamed to_parallel and from_parallel to aec_to_parallel and parallel_to_aec
  • Added is_parallelizable metadata to ensure that the aec_to_parallel wrapper is not misused
  • Fixed the API tests to better support agent generation


Released on 2021-12-05 - GitHub - PyPI

  • Bug fixes and partial redesign to pursuit environment logic and rendering. Environment is now learnable, version bumped
  • Bug fixes and reward function redesign for cooperative pong environment, version bumped
  • Ball moving into the left column due to physics engine imprecision in pistonball no longer gives additional reward, version bumped
  • PyGame version bumped, no environment version bumps needed
  • Python 3.10 support
  • Fixed parallel API tests to allow environments without possible_agents


Released on 2021-10-19 - GitHub - PyPI

  • Fixed unnecessary warnings generated about observation and action spaces
  • Upstreamed new rlcard version with new texas holdem no limit implementation, bumped version to v6
  • Updated python chess dependency, bumped version to v5
  • Dropped support for python 3.6, added official support for 3.9
  • Various documentation fixes


Released on 2021-10-08 - GitHub - PyPI

  • API changes
    • new observation_space(agent), action_space(agent) methods that retrieve the static space for an agent
    • possible_agents, observation_spaces, action_spaces attributes made optional. Wrappers pass these attributes through if they exist.
    • The parallel environment's agents list now contains the agents that will take the next step, instead of the agents that took the previous step.
    • Generated agents are now allowed; agents can be created at any time during an episode. Note that agents cannot resurrect: once they are done, they cannot be re-added to the environment.
  • Fixed unexpected behavior with close method in pursuit environment
  • Removed pygame loading messages
  • Fix pillow dependency issue
  • Removed local ratio arg from pistonball environment
  • Gym 0.21.0 support
  • Better code formatting (isort, etc.)
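
The per-agent space accessors can be sketched like this (toy env with string stand-ins for real Gym space objects):

```python
class ToyEnv:
    """Hypothetical env exposing the per-agent space methods."""

    def __init__(self):
        self._observation_spaces = {"player_0": "Box(4,)", "player_1": "Box(4,)"}
        self._action_spaces = {"player_0": "Discrete(3)", "player_1": "Discrete(3)"}

    def observation_space(self, agent):
        # Static observation space for one agent, looked up by AgentID.
        return self._observation_spaces[agent]

    def action_space(self, agent):
        return self._action_spaces[agent]


env = ToyEnv()
print(env.action_space("player_0"))  # -> Discrete(3)
```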


Released on 2021-08-19 - GitHub - PyPI

  • Fix scipy and pyglet dependencies for sisl environments
  • Fix pistonball rendering (no version bumps)
  • Update rlcard to v1.0.4 with a fix for texas hold'em no limit; bump version


Released on 2021-08-02 - GitHub - PyPI

  • Upgraded to RLCard 1.0.3, bumped all versions. Also added support for num_players in RLCard-based environments, which can have variable numbers of players.
  • Fixed Go and Chess observation spaces, bumped versions
  • Minor Go rendering fix
  • Fix PyGame dependency in classic (used for rendering)
  • Fixed images being loaded into slow PyGame data structures, resulting in substantial speedups in certain Butterfly games (no version bump needed)
  • Fix odd cache problem using RGB rendering in cooperative pong
  • Misc fixes to tests and warning messages


Released on 2021-07-17 - GitHub - PyPI

  • Added continuous action support for MPE environments as an argument
  • Added pixel art rendering for Texas Hold'em No Limit, Rock Paper Scissors and Go
  • Fixed pixel art rendering in Connect Four
  • Fixed bug in order of black/white pieces in Go observation space, bumped version
  • Changed observation in cooperative pong to include entire screen, bumped version


Released on 2021-06-12 - GitHub - PyPI

  • Created a no-action timer for pong to encourage players to serve (previously there was no penalty for stalling the game forever). Bumped version of all pong environments (pong, basketball_pong, volleyball_pong, foozpong, quadrapong)
  • Fixed Multiwalker collision bug, bumped version
  • Add state method to Magent and MPE
  • Merged rock paper scissors and rock paper scissors lizard spock into a single environment that takes the number of actions as an argument, and adds the n_cycles argument to allow for a single game to be sequential. Bumped version
  • Removed deprecated env_done method
  • Fixed order of channels in combined_arms observation
  • Added pixel art based RGB rendering to connect four. This will also be added to rock paper scissors, Go and Texas Holdem in upcoming releases
  • Moved pettingzoo CI test files outside of the repo
  • Changed max cycles test to be more robust under agent death


Released on 2021-05-14 - GitHub - PyPI

  • Fixed multiwalker bug, bumped environment version.
  • Added support for custom render modes in render_test


Released on 2021-04-16 - GitHub - PyPI

  • Added argument to seed test to disable the seed()-reset() test. Docs updated.
  • Minor changes to MAgent rendering


Released on 2021-04-04 - GitHub - PyPI

  • Fixed arbitrary calls to observe() in classic games (especially tictactoe and connect 4)
  • Fixed documentation for tictactoe and pistonball


Released on 2021-03-27 - GitHub - PyPI

  • Fixed MAgent bugs and changed default to non-minimap mode, bumped versions.
  • Fix transient installation error.


Released on 2021-03-08 - GitHub - PyPI

Minor miscellaneous fixes and small feature additions:

  • Added .unwrapped
  • Minor fix to from_parallel
  • Removed warning from close()
  • Fixed random demo
  • Fixed prison manual control


Released on 2021-02-21 - GitHub - PyPI

  • Changed default values of max_cycles in pistonball, prison, prospector
  • Changed pistonball default mode to continuous and changed default value for local_ratio
  • Refactored externally facing tests and utils
  • Bumped pymunk version to 6.0.0 and bumped versions of all environments which depend on pymunk
  • Added state() and state_space to API, implemented methods in butterfly environments
  • Various small bug fixes in butterfly environments.
  • Documentation updates.
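
The new state accessor can be sketched with a toy env (hypothetical class; treating the global state as the concatenation of per-agent observations is an assumption of this sketch, not the library's definition):

```python
class ToyEnv:
    """Hypothetical two-agent env exposing a global state() view."""

    def __init__(self):
        self._observations = {"piston_0": [0.1, 0.2], "piston_1": [0.3, 0.4]}
        self.state_space = 4  # stand-in for a real Space object

    def state(self):
        # Concatenate per-agent observations in a fixed agent order.
        flat = []
        for agent in sorted(self._observations):
            flat.extend(self._observations[agent])
        return flat


env = ToyEnv()
assert env.state() == [0.1, 0.2, 0.3, 0.4]
```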


Released on 2021-01-29 - GitHub - PyPI

Fixed miscellaneous annoying loading messages for butterfly environments. Improved save_obs and related functionality. Fixed issues in KAZ.


Released on 2021-01-13 - GitHub - PyPI

Fixes MPE rendering dependency, fixes minor leftover dependencies on six, and fixes issues when pickling Pistonball. No versions were bumped.


Released on 2021-01-05 - GitHub - PyPI

  • Refactored tests to be generally usable by third-party environments
  • Added average reward calculating util, and made minor improvements to random_demo and save_obs utils
  • Removed black death argument from KAZ (it's now a wrapper in SuperSuit)
  • Redid how illegal actions are handled in classic, by making observations dictionaries where one element is the observation and the other is a proper illegal action mask
  • Refactored Pistonball for readability, to run faster, and to allow the number of pistons to be varied via argument
  • Completely refactored Waterworld with various major fixes
  • Bumped RLCard version (includes bug fixes impacting environments)
  • MAgent rendering looks much better now (versions not bumped)
  • Fixed major bug in the observation space of pursuit
  • Added Python 3.9 support
  • Updated Gym version
  • Fixed multiwalker observation space, for good this time, and made large improvements to code quality
  • Removed NaN wrapper


Released on 2020-11-26 - GitHub - PyPI

  • Fixed Pistonball reward and miscellaneous problems
  • Fixed KAZ observation and rendering issues
  • Fixed Cooperative Pong issues with rendering
  • Fixed default parameters in Hanabi
  • Fixed multiwalker rewards, added arguments
  • Changed combined_arms observation and rewards, and tiger_deer rewards
  • Added more arguments to all MAgent environments


Released on 2020-11-07 - GitHub - PyPI

General: Substantial API upgrades, including an overhaul of the handling of agent death. In particular, the agents list now only contains live agents (agents which are not done). Moved significant logic from wrappers to the raw environments. Renamed max_frames to max_cycles and made the meaning of this argument consistent across all environments.

Atari: Fixed entombed_cooperative rewards, added support for custom ROM directory specification

Butterfly: Bug fixes in all environments, bumped PyGame and PyMunk versions

Classic: Bumped RLCard version, fixed default observation space for many environments depending on RLCard

SISL: Bug fixes in all environments

MAgent: Fixes to observation space of all environments

Bumped versions of all environments. There hopefully will be no more major API changes after this.