number of epochs: 20k
Rewards are clamped to a lower and an upper bound. This might cause problems.
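A minimal sketch of such clamping (the bound values are assumptions, not taken from these notes):

```python
# Hypothetical reward bounds; the actual limits are not specified here.
REWARD_MIN = -1.0
REWARD_MAX = 1.0

def clamp_reward(reward: float) -> float:
    """Clip a raw reward into [REWARD_MIN, REWARD_MAX]."""
    return max(REWARD_MIN, min(REWARD_MAX, reward))
```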
All simulation events are considered when calculating the reward.
Possible simulation events created for an agent:
After every simulation step:
At simulation end:
(t * x) is the 'speed bonus'
t = 1 - (s / max_s)
s: Number of steps when the simulation ended
max_s: Max number of steps for a simulation
This means the reward/penalty is higher the shorter the simulation ran: the agent gets a higher reward for pushing the opponent out quickly, or a higher penalty for quickly moving out of the field unforced.
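A minimal sketch of this scaling, assuming x is the base terminal reward/penalty (the notes do not define x explicitly):

```python
def speed_bonus(base_reward: float, steps: int, max_steps: int) -> float:
    """Scale the terminal reward/penalty by how quickly the episode ended.

    t = 1 - (s / max_s): close to 1 for short episodes, 0 when the
    simulation used all available steps.
    """
    t = 1.0 - (steps / max_steps)
    return t * base_reward
```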
|               | L0  | L1  | L2  | L3  |
|---------------|-----|-----|-----|-----|
| learning rate | 0.5 | 0.7 | 0.1 | 0.2 |

|         | E0  | E1  | E2  | E3  |
|---------|-----|-----|-----|-----|
| epsilon | 0.5 | 0.7 | 0.1 | 0.2 |

|          | D0  | D1  | D2  | D3  | D4   |
|----------|-----|-----|-----|-----|------|
| discount | 0.2 | 0.7 | 0.8 | 0.9 | 0.99 |
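If these values are swept as a full grid, the runs could be enumerated like the sketch below (only the values and the epoch count come from these notes; the loop itself is an assumption):

```python
import itertools

# Hyperparameter values from the tables above.
learning_rates = [0.5, 0.7, 0.1, 0.2]        # L0..L3
epsilons       = [0.5, 0.7, 0.1, 0.2]        # E0..E3
discounts      = [0.2, 0.7, 0.8, 0.9, 0.99]  # D0..D4

epochs = 20_000  # "number of epochs: 20k"

# One run per combination; the actual training call is not defined here.
for lr, eps, gamma in itertools.product(learning_rates, epsilons, discounts):
    print(f"run: learning_rate={lr}, epsilon={eps}, discount={gamma}, epochs={epochs}")
```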