# Boxing
This environment is part of the Atari environments. Please read that page first for general information.
| Import | `from pettingzoo.atari import boxing_v2` |
|---|---|
| Actions | Discrete |
| Parallel API | Yes |
| Manual Control | No |
| Agents | `agents= ['first_0', 'second_0']` |
| Agents | 2 |
| Action Shape | (1,) |
| Action Values | [0,17] |
| Observation Shape | (210, 160, 3) |
| Observation Values | (0,255) |
Boxing is an adversarial game where precise control and appropriate responses to your opponent are key.
The players have two minutes (around 1200 steps) to duke it out in the ring. Each step, they can move and punch. Successful punches score points: 1 point for a long jab, 2 points for a close power punch, and 100 points for a KO (which also ends the game). Whenever you score n points, you receive a reward of n and your opponent is penalized by n, so each step's rewards are zero-sum.
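Because every point scored by one boxer is simultaneously deducted from the other, the per-step rewards cancel out. The following is a minimal sketch (not part of the official documentation) that checks this mirrored reward structure with random play through the Parallel API:

```python
from pettingzoo.atari import boxing_v2

# Sketch: verify that the two boxers' rewards mirror each other at every step.
env = boxing_v2.parallel_env()
observations, infos = env.reset(seed=42)

while env.agents:
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
    # One boxer's gain is the other's loss, so the rewards should sum to zero.
    assert sum(rewards.values()) == 0
env.close()
```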
## Environment parameters
Environment parameters are common to all Atari environments and are described in the base Atari documentation.
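As a hedged illustration, the snippet below passes two keyword arguments of the kind listed on the base Atari page; the parameter names and values shown here are assumptions and should be checked against that documentation:

```python
from pettingzoo.atari import boxing_v2

# Assumed parameter names from the base Atari docs; verify them before relying on this.
env = boxing_v2.parallel_env(
    obs_type="grayscale_image",  # grayscale observations instead of RGB
    max_cycles=100000,           # step limit before the episode is truncated
)
```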
## Action Space
In any given turn, an agent can choose from one of 18 actions.
| Action | Behavior |
|---|---|
| 0 | No operation |
| 1 | Fire |
| 2 | Move up |
| 3 | Move right |
| 4 | Move left |
| 5 | Move down |
| 6 | Move upright |
| 7 | Move upleft |
| 8 | Move downright |
| 9 | Move downleft |
| 10 | Fire up |
| 11 | Fire right |
| 12 | Fire left |
| 13 | Fire down |
| 14 | Fire upright |
| 15 | Fire upleft |
| 16 | Fire downright |
| 17 | Fire downleft |
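The table above corresponds to a `Discrete(18)` action space for each agent. A short sketch (assuming the default minimal action set) to confirm the spaces from code:

```python
from pettingzoo.atari import boxing_v2

# Sketch: print each agent's action and observation spaces.
env = boxing_v2.env()
env.reset(seed=42)
for agent in env.agents:
    print(agent, env.action_space(agent))             # expected: Discrete(18)
    print(agent, env.observation_space(agent).shape)  # expected: (210, 160, 3)
env.close()
```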
## Version History
* v2: Minimal Action Space (1.18.0)
* v1: Breaking changes to entire API (1.4.0)
* v0: Initial versions release (1.0.0)
## Usage
### AEC
```python
from pettingzoo.atari import boxing_v2

env = boxing_v2.env(render_mode="human")
env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        # this is where you would insert your policy
        action = env.action_space(agent).sample()

    env.step(action)
env.close()
```
### Parallel
```python
from pettingzoo.atari import boxing_v2

env = boxing_v2.parallel_env(render_mode="human")
observations, infos = env.reset()

while env.agents:
    # this is where you would insert your policy
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}

    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
```