Date: 25/06/2025 11:39:03
From: dv
ID: 2295434
Subject: Reward Model Interpretability via Optimal and Pessimal Tokens

https://dl.acm.org/doi/10.1145/3715275.3732068

Reward Model Interpretability via Optimal and Pessimal Tokens
Authors: Brian Christian, Hannah Rose Kirk, Jessica A.F. Thompson, Christopher Summerfield, Tsvetomira Dumbalska
FAccT ’25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency

Reply Quote

Date: 25/06/2025 13:42:03
From: The Rev Dodgson
ID: 2295492
Subject: re: Reward Model Interpretability via Optimal and Pessimal Tokens

dv said:


https://dl.acm.org/doi/10.1145/3715275.3732068

Reward Model Interpretability via Optimal and Pessimal Tokens
Authors: Brian Christian, Hannah Rose Kirk, Jessica A.F. Thompson, Christopher Summerfield, Tsvetomira Dumbalska
FAccT ’25: Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency


“Reward modeling has emerged as a crucial component in aligning large language models with human values. Significant attention has focused on using reward models as a means for fine-tuning generative models. However, the reward models themselves—which directly encode human value judgments by turning prompt-response pairs into scalar rewards—remain relatively understudied. We present a novel approach to reward model interpretability through exhaustive analysis of their responses across their entire vocabulary space.”

I sort of get what they mean, but how that relates to the chart is a mystery to me.

Reply Quote

Date: 25/06/2025 20:52:56
From: esselte
ID: 2295628
Subject: re: Reward Model Interpretability via Optimal and Pessimal Tokens

The Rev Dodgson said:


I sort of get what they mean, but how that relates to the chart is a mystery to me.

dv is a “memelord”… He’s taken the least interesting but most memey aspect of this paper and presented it as its own thread.

Reply Quote

Date: 25/06/2025 21:37:20
From: dv
ID: 2295634
Subject: re: Reward Model Interpretability via Optimal and Pessimal Tokens

esselte said:


dv is a “memelord”… He’s taken the least interesting but most memey aspect of this paper and presented it as its own thread.

I was hoping at least for Ataturk

Reply Quote

Date: 27/06/2025 19:43:51
From: esselte
ID: 2296093
Subject: re: Reward Model Interpretability via Optimal and Pessimal Tokens

dv said:


I was hoping at least for Ataturk

Well, I don’t understand that reply, but I do want to say I wasn’t criticizing you for being a memelord. After all, I’m the guy who brought Kony2012 up in this forum (or whatever the associated forum at the time was). Memes are Dawkinsian, I love ’em. And you brought a pretty interesting paper to the attention of the forum with this thread.

I was just trying to explain to The Rev, to allay his stated confusion… sometimes interesting stuff is presented in ways that are meant to appeal to the basal emotions of the audience rather than the intellect. Sexual stuff is funny and cool. It’s what keeps us going.

Reply Quote