The current implementation of the `SyntheticGatherer` in the preference comparisons module often chooses the trajectory with the higher reward nearly deterministically. This is because the Boltzmann-rational policy (softmax) used by the `SyntheticGatherer` is very sensitive to the scale of the utilities, and the sums of rewards that serve as utilities tend to be quite large. The gatherer effectively implements the following equation for feedback:
$$
P ( A \succ B) = \frac{\exp(\beta R(A))}{\exp(\beta R(A)) + \exp(\beta R(B))}
$$
where $A$ and $B$ are trajectories, $R(A)$ is the return of trajectory $A$, and $\beta$ is the temperature / rationality coefficient. Here are some example values with $\beta = 1$ to illustrate the problem:
| R(A) | R(B) | Difference | P(A > B) | P(B > A) |
|------|------|------------|----------|----------|
| 1    | 1    | 0          | 0.5      | 0.5      |
| 1    | 2    | 1          | 0.27     | 0.73     |
| 1    | 3    | 2          | 0.12     | 0.88     |
| 1    | 4    | 3          | 0.05     | 0.95     |
| 1    | 5    | 4          | 0.02     | 0.98     |
| 1    | 7    | 6          | 0.0      | 1.0      |
| 1    | 8    | 7          | 0.0      | 1.0      |
| 1    | 9    | 8          | 0.0      | 1.0      |
| 1    | 10   | 9          | 0.0      | 1.0      |
As you can see, once the difference in returns exceeds about 5, the simulated feedback is essentially deterministic. Note that the probability only depends on the difference; the absolute values of the returns are irrelevant.
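For concreteness, here is a small standalone sketch that reproduces the table above (this is not the library's code; the function name `preference_prob` is made up for illustration). With two options, the softmax reduces to a logistic function of the scaled return difference:

```python
import numpy as np


def preference_prob(return_a: float, return_b: float, beta: float = 1.0) -> float:
    """P(A > B) under the Boltzmann-rational model.

    The two-option softmax reduces to a logistic function of the scaled
    return difference, which is also numerically stabler than
    exponentiating the raw returns.
    """
    return 1.0 / (1.0 + np.exp(-beta * (return_a - return_b)))


# Reproduce a few rows of the table (beta = 1):
for r_b in (1, 2, 5, 10):
    print(r_b, round(preference_prob(1.0, r_b), 2))  # 0.5, 0.27, 0.02, 0.0
```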
To fix this, we could either normalize the returns or move away from the Boltzmann-rational model toward something like the B-Pref oracle teachers; a sketch of the normalization option is below.
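A minimal sketch of what the normalization option could look like, assuming returns are standardized across the current batch of trajectory pairs before applying the softmax (the function and argument names are hypothetical, not the existing `SyntheticGatherer` API):

```python
import numpy as np


def normalized_preference_probs(
    returns_a: np.ndarray, returns_b: np.ndarray, beta: float = 1.0
) -> np.ndarray:
    """P(A > B) for a batch of trajectory pairs, with returns divided by the
    batch standard deviation so that beta acts on a scale-free difference
    instead of raw (potentially very large) returns."""
    std = np.concatenate([returns_a, returns_b]).std()
    if std == 0.0:
        # All returns identical: no preference signal, fall back to 50/50.
        return np.full_like(returns_a, 0.5, dtype=float)
    diffs = (returns_a - returns_b) / std
    return 1.0 / (1.0 + np.exp(-beta * diffs))
```

Dividing by the standard deviation (rather than subtracting a mean) preserves the sign of the return difference while making $\beta$ comparable across environments with different reward scales.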