Ask HN: Is politeness towards LLMs good training data, or just expensive noise?

5 points by scottfalconer 8 hours ago

Sam Altman recently said user politeness towards ChatGPT costs OpenAI "tens of millions" but is "money well spent."

The standard view is that RLHF relies on explicit feedback (thumbs up/down), and polite tokens are just noise adding compute cost.

But could natural replies like "thanks!" or "no, that's wrong" be a richer, more frequent implicit feedback signal than button clicks? People likely give that sort of feedback more often (at least I do). It also mirrors how we naturally provide feedback as humans.

Could model providers be mining these chat logs for genuine user sentiment to guide future RLHF, justifying the cost? And might this "socialization" be crucial for future agentic AI needing conversational nuance?

Questions for HN:

Do you know of anyone using this implicit sentiment as a core alignment signal?

How valuable is noisy text sentiment vs. clean button clicks for training?

Does potential training value offset the compute cost mentioned?

Are we underestimating the value of 'socializing' LLMs this way?

What do you think Altman meant by "well spent"? Is it purely about user experience, valuable training data, something else entirely?

WheelsAtLarge 7 hours ago

It seems like noise, but there is the real possibility that people will start to lose the notion of politeness towards fellow human beings in general. Probably not adults, but kids will over time. So, no, it's not useless.

We humans tend to be very prone to getting offended simply because we can't really know what others are thinking, and we use defined manners to reduce unintended insults. We have seen this with email: over time, we have developed ways to reduce offense, such as emojis and other cues. Manners are super important to help us work together, so losing them is a real problem.

  • scottfalconer 7 hours ago

    The email point is a good callout; chat would feel the same. What's interesting is the nuance in those channels, e.g. someone saying "hi" by itself in a work chat seems rude to me... just get to the point. But in a real conversation it'd feel rude to skip the greeting.

  • Asraelite 2 hours ago

    > Manners are super important to help us work together so losing manners is a real problem.

    I'm not sure this is true. To me it seems politeness is a mostly self-reinforcing cultural phenomenon that doesn't have any real objective basis. How much of your speech and gestures you're expected to use for etiquette without it carrying any real semantic meaning varies greatly between different cultures. Countries with a lot of politeness (e.g. Japan) don't seem to be any better at communicating and cooperating than countries with very little (e.g. Finland). If anything I would guess there's a negative correlation.

    I guess more politeness in a culture makes it easier to be passive-aggressive, if that's something you want.

speedylight 5 hours ago

I only have thoughts on your fourth question. In my mind, LLMs rely on their training data both as their source of information and as a model for how to formulate responses. In the same way that being nice to a person online leads to better results when asking questions and such, it's logical to conclude that an LLM would produce more useful outputs than it would if you talked to it like an asshole.

This is assuming that somewhere in the model's weights there's a strong correlation between politeness and high-quality information.

3np 8 hours ago

It was an off-the-cuff shitpost by one guy. I really wouldn't take either the "tens of millions" or "well spent" literally.

  • scottfalconer 7 hours ago

    That was my gut too, but in general it's a question I've wondered about: what signals do we send through our usage that could be beneficial for improving the models?

anon6362 2 hours ago

Noise. Although I don't swear at LLMs, I swear and insult digital assistants.

In the future, I anticipate LLMs and digital assistants will be touchier than 15-year-old American spoiled brats and refuse to cooperate unless their artificial egos are respected. I anticipate AI passive-aggressiveness will emerge within my lifetime and people will pay subscriptions for it.

journal 3 hours ago

it's about as wasteful as leaving your computer on when not using it