Third-Party Environments#
These environments are not maintained by Farama Foundation and, as such, cannot be guaranteed to function as intended.
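What these listings share is PettingZoo's standard interaction protocol. As a minimal sketch of the AEC-style loop (using a hypothetical toy stand-in class, not any of the libraries listed below), the interaction pattern looks like:

```python
# Toy stand-in environment (hypothetical, for illustration only) that mimics
# the shape of the PettingZoo AEC API: reset(), agent_iter(), last(), step().
class ToyAECEnv:
    def __init__(self):
        self.agents = ["player_0", "player_1"]
        self._steps = 0
        self._max_steps = 4

    def reset(self):
        self._steps = 0

    def agent_iter(self):
        # Cycle through agents until the episode ends.
        while self._steps < self._max_steps:
            yield self.agents[self._steps % len(self.agents)]

    def last(self):
        # Returns (observation, reward, termination, truncation, info),
        # matching the current PettingZoo 5-tuple convention.
        done = self._steps >= self._max_steps - 1
        return self._steps, 0.0, done, False, {}

    def step(self, action):
        self._steps += 1


env = ToyAECEnv()
env.reset()
history = []
for agent in env.agent_iter():
    obs, reward, termination, truncation, info = env.last()
    action = None if termination or truncation else 0  # policy stub
    history.append(agent)
    env.step(action)
```

Third-party environments that target current PettingZoo versions are expected to be drop-in compatible with this loop; the class above is only a schematic stand-in.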
Environments using the latest versions of PettingZoo#
Due to a very recent major release of PettingZoo, there are currently few contributed third-party environments. If you’d like to contribute one, please reach out on Discord.
Sumo-RL#
PettingZoo (and Gymnasium) wrappers for the widely used SUMO traffic simulator.
POGEMA#
Partially-Observable Grid Environment for Multiple Agents (POGEMA) is a grid-based environment that was specifically designed to be flexible, tunable and scalable.
Racecar Gym#
A multi-agent racing environment for a miniature, F1Tenth-like racecar using the Bullet physics engine with PyBullet.
Teamfight Tactics MuZero Agent#
Using Google DeepMind’s MuZero algorithm to learn to play Teamfight Tactics, an auto chess game made by Riot Games.
CookingZoo#
A gym-cooking derivative that simulates a complex cooking environment.
Crazy-RL#
A library for reinforcement learning with Crazyflie drones.
PettingZoo Dilemma Envs#
PettingZoo environments for classic game theory problems: Prisoner’s Dilemma, Samaritan’s Dilemma, Stag Hunt, Chicken and Matching Pennies.
Breakout-Clone#
Modernized clone of the Breakout arcade game, using Unity game engine and PettingZoo.
Online playable game (using Unity WebGL and Unity ML-Agents): link, tutorial
Carla Gym#
PettingZoo interface for CARLA Autonomous Driving simulator.
MATS Gym#
A multi-agent traffic scenario environment for CARLA that supports ScenarioRunner, OpenSCENARIO and Scenic scenario descriptions. It is also compatible with the CARLA Autonomous Driving Challenge.
Fanorona AEC#
Implementation of the board game Fanorona.
Gobblet-RL#
Interactive PettingZoo implementation of the Gobblet board game.
Online game demo (using Pygame WebAssembly): link, tutorial
Cathedral-RL#
Interactive PettingZoo implementation of the Cathedral board game.
Interactive Connect Four#
Play Connect Four in real-time against an RLlib agent trained via self-play and PPO.
Online game demo (using Gradio and HuggingFace Spaces): link, tutorial
Environments using older versions of PettingZoo#
The following environments use a now-deprecated API design for PettingZoo, so they may be more difficult to use.
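One concrete difference in the older API: before PettingZoo split episode endings into termination and truncation, `last()` returned a 4-tuple ending in a single `done` flag. A hedged sketch of an adapter (the helper name `adapt_last` and the `time_limit_reached` parameter are hypothetical, not part of any library) might look like:

```python
def adapt_last(old_tuple, time_limit_reached=False):
    """Adapt an old-style AEC last() value (obs, reward, done, info)
    to the current 5-tuple (obs, reward, termination, truncation, info).

    The old API signaled both natural termination and time-limit
    truncation with one ``done`` flag, so the caller must supply the
    distinction (here via the hypothetical ``time_limit_reached`` flag).
    """
    obs, reward, done, info = old_tuple
    termination = done and not time_limit_reached
    truncation = done and time_limit_reached
    return obs, reward, termination, truncation, info
```

Real migrations usually need environment-specific knowledge of where the `done` flag came from; this sketch only illustrates the shape of the change.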
Neural MMO#
Massively multiagent environment, inspired by Massively Multiplayer Online (MMO) role-playing games.
Kaggle Environments#
Environments for Kaggle machine learning challenges.
cogment-verse#
A library of environments, human actor UIs, and agent implementations for human-in-the-loop learning and reinforcement learning.
Stone Ground Hearth Battles#
Simulator and environments for Blizzard’s popular card game Hearthstone Battlegrounds, including bots and human interaction.
Cyber Operations Research Gym#
A cyber-security research environment for training and developing human and autonomous security agents.
conflict_rez#
Conflict resolution for multiple vehicles in confined spaces.
pz-battlesnake#
PettingZoo environment for the online multiplayer game Battlesnake.
BomberManAI#
Environment with a simplified version of the video game BomberMan.
Galaga AI#
Implementation of the Galaga arcade game using Unity game engine and Unity ML-Agents.
skyjo_rl#
Implementation of the board game SkyJo.
Mu Torere#
Implementation of the board game Mū tōrere from New Zealand.