Basketball Pong#
 
This environment is part of the Atari environments. Please read that page first for general information.
| Import | `from pettingzoo.atari import basketball_pong_v3` |
|---|---|
| Actions | Discrete |
| Parallel API | Yes |
| Manual Control | No |
| Agents | `agents= ['first_0', 'second_0']` |
| Agents | 2 |
| Action Shape | (1,) |
| Action Values | [0,5] |
| Observation Shape | (210, 160, 3) |
| Observation Values | (0,255) |
A competitive game of control.
Try to get the ball into your opponent's hoop, but you cannot move onto their side of the court. Scoring a point also gives your opponent a -1 reward.
Serves are timed: if a player does not serve within 2 seconds of receiving the ball, they receive a -1 reward and the timer resets. This prevents one player from indefinitely stalling the game, but it also means the game is no longer purely zero-sum.
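The near-zero-sum reward structure can be checked empirically. The sketch below runs one random-action episode and sums each agent's rewards; `max_cycles` is assumed to be one of the shared Atari parameters from the base documentation and is used here only to bound the episode length. Because of the serve-timer penalty, the two totals need not sum to exactly zero.

```python
from pettingzoo.atari import basketball_pong_v3

env = basketball_pong_v3.env(max_cycles=5000)
env.reset(seed=42)

# Accumulate each agent's rewards over one episode of random play.
totals = {agent: 0.0 for agent in env.possible_agents}
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    totals[agent] += reward
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)
env.close()

print(totals)  # may not sum to zero because of the serve-timer penalty
```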
Official Video Olympics manual
Environment parameters#
Some environment parameters are common to all Atari environments and are described in the base Atari documentation.
Parameters specific to Basketball_Pong are:

`basketball_pong_v3.env(num_players=2)`

`num_players`: Number of players (must be either 2 or 4)
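For example, a four-player game can be created by passing `num_players=4` (a minimal sketch; `render_mode` is the same shared Atari argument used in the usage examples below):

```python
from pettingzoo.atari import basketball_pong_v3

# num_players must be either 2 or 4.
env = basketball_pong_v3.env(num_players=4, render_mode="human")
env.reset(seed=42)
print(env.agents)  # four agents instead of the default two
env.close()
```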
Action Space (Minimal)#
In any given turn, an agent can choose from one of 6 actions.
| Action | Behavior | 
|---|---|
| 0 | No operation | 
| 1 | Fire | 
| 2 | Move up | 
| 3 | Move right | 
| 4 | Move left | 
| 5 | Move down | 
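The minimal action space can be confirmed at runtime. The following sketch (default settings assumed) prints each agent's discrete action space, whose indices correspond to the table above:

```python
from pettingzoo.atari import basketball_pong_v3

env = basketball_pong_v3.env()
env.reset(seed=42)
for agent in env.agents:
    # With the minimal action space this should report Discrete(6).
    print(agent, env.action_space(agent))
env.close()
```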
Version History#
- v3: Minimal action space (1.18.0) 
- v2: No action timer (1.9.0) 
- v1: Breaking changes to entire API (1.4.0) 
- v0: Initial versions release (1.0.0) 
Usage#
AEC#
```python
from pettingzoo.atari import basketball_pong_v3

env = basketball_pong_v3.env(render_mode="human")
env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        # this is where you would insert your policy
        action = env.action_space(agent).sample()

    env.step(action)
env.close()
```
Parallel#
```python
from pettingzoo.atari import basketball_pong_v3

env = basketball_pong_v3.parallel_env(render_mode="human")
observations, infos = env.reset()

while env.agents:
    # this is where you would insert your policy
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}

    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
```
