number of epochs: 5k (based on the results of QMAP03 and QMAP04)
Considers all simulation events for calculating the reward.
Possible simulation events created for an agent:
After every simulation step:
At simulation end:
(t * x) is the 'speed bonus'
t = 1 - (s / max_s)
s: Number of steps when the simulation ended
max_s: Max number of steps for a simulation
This means the reward/penalty is higher the shorter the simulation ran: the agent gets a higher reward for quickly pushing the opponent out, or a higher penalty for quickly moving out of the field unforced.
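A minimal sketch of this scaling in Python (the function name `speed_bonus` and the example numbers are illustrative only; `base_value` stands for the raw end-of-simulation reward or penalty `x`):

```python
def speed_bonus(base_value: float, steps: int, max_steps: int) -> float:
    """Scale a terminal reward/penalty by how quickly the simulation ended.

    t = 1 - (steps / max_steps), so the factor approaches 1 for very short
    simulations and 0 for simulations that use up the maximum step count.
    """
    t = 1.0 - (steps / max_steps)
    return t * base_value


# Illustrative numbers: simulation ended after 120 of at most 1000 steps.
reward = speed_bonus(base_value=1.0, steps=120, max_steps=1000)    # 0.88
penalty = speed_bonus(base_value=-1.0, steps=120, max_steps=1000)  # -0.88
```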
|               | L0   | L1  | L2   |
|---------------|------|-----|------|
| learning rate | 0.05 | 0.1 | 0.15 |

|         | E0   | E1   | E2   |
|---------|------|------|------|
| epsilon | 0.01 | 0.02 | 0.03 |

|          | D0   | D1  | D2   |
|----------|------|-----|------|
| discount | 0.25 | 0.3 | 0.35 |

|         | M0           | M1           | M2           |
|---------|--------------|--------------|--------------|
| mapping | non-linear-2 | non-linear-3 | non-linear-4 |
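For illustration, a small Python snippet that maps the variant labels above to their values and enumerates every combination (the dictionary names and the use of `itertools.product` are assumptions; the section does not state which combinations are actually run):

```python
from itertools import product

# Parameter variants as listed in the tables above.
learning_rates = {"L0": 0.05, "L1": 0.1, "L2": 0.15}
epsilons = {"E0": 0.01, "E1": 0.02, "E2": 0.03}
discounts = {"D0": 0.25, "D1": 0.3, "D2": 0.35}
mappings = {"M0": "non-linear-2", "M1": "non-linear-3", "M2": "non-linear-4"}

# e.g. "L0-E1-D2-M0" -> (0.05, 0.02, 0.35, "non-linear-2")
configs = {
    f"{l}-{e}-{d}-{m}": (learning_rates[l], epsilons[e], discounts[d], mappings[m])
    for l, e, d, m in product(learning_rates, epsilons, discounts, mappings)
}
print(len(configs))  # 81 combinations in total (3 * 3 * 3 * 3)
```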