# Pong
This environment is part of the Atari environments. Please read that page first for general information.
Import | `from pettingzoo.atari import pong_v3`
---|---
Actions | Discrete
Parallel API | Yes
Manual Control | No
Agents | `agents= ['first_0', 'second_0']`
Agents | 2
Action Shape | (1,)
Action Values | [0, 5]
Observation Shape | (210, 160, 3)
Observation Values | (0, 255)
Classic two-player competitive game of timing.
Get the ball past your opponent.
Scoring a point gives you +1 reward and your opponent -1 reward.
Serves are timed: if a player does not serve within 2 seconds of receiving the ball, they receive -1 reward and the timer resets. This prevents one player from indefinitely stalling the game, but it also means the game is no longer purely zero-sum.
Official Video Olympics manual
## Environment parameters
Some environment parameters are common to all Atari environments and are described in the base Atari documentation.
Parameters specific to Pong are:

```python
pong_v3.env(num_players=2)
```
num_players
: Number of players (must be either 2 or 4)
## Action Space (Minimal)
In any given turn, an agent can choose from one of 6 actions.
Action | Behavior
---|---
0 | No operation
1 | Fire
2 | Move right
3 | Move left
4 | Fire right
5 | Fire left
## Version History

* v3: Minimal Action Space (1.18.0)
* v2: No action timer (1.9.0)
* v1: Breaking changes to entire API (1.4.0)
* v0: Initial versions release (1.0.0)
## Usage

### AEC
```python
from pettingzoo.atari import pong_v3

env = pong_v3.env(render_mode="human")
env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        # this is where you would insert your policy
        action = env.action_space(agent).sample()

    env.step(action)

env.close()
```
### Parallel
```python
from pettingzoo.atari import pong_v3

env = pong_v3.parallel_env(render_mode="human")
observations, infos = env.reset()

while env.agents:
    # this is where you would insert your policy
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}

    observations, rewards, terminations, truncations, infos = env.step(actions)

env.close()
```