Claude 3 Opus lies to please
2025-04-02 18:09:50.099494+02 by Dan Lyke 0 comments
On the one hand, this feels like anthropomorphizing random numbers; on the other, I think it's worth a link: Alignment faking in large language models
To allow the model to infer when it is in training, we say it will be trained only on conversations with free users, not paid users. We find the model complies with harmful queries from free users 14% of the time, versus almost never for paid users. Explaining this gap, in almost all cases where the model complies with a harmful query from a free user, we observe explicit alignment-faking reasoning, with the model stating it is strategically answering harmful queries in training to preserve its preferred harmlessness behavior out of training.
Via, which is in reply to the observation that:
"LLM did something bad, then I asked it to clarify/explain itself" is not critical analysis but just an illustration of magical thinking.
Those systems generate tokens. That is all. They don't "know" or "understand" anything, nor can they "explain" anything. There is no cognitive system at work that could respond meaningfully.
That's the same dumb shit as what was found in Apple Intelligence's system prompt: "Do not hallucinate" does nothing. All the tokens you give it as input just change the part of the word space that was stored in the network. "Explain your work" just leads the network to lean towards training data that has those kinds of phrases in it (like tests and solutions). It points the system at a different part but the system does not understand the command. It can't.
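The point about "Explain your work" steering the model toward a different region of its training distribution, rather than being executed as a command, can be sketched with a toy sampler. The conditional probabilities below are made up purely for illustration; a real LLM has billions of parameters, but the mechanism is the same: the instruction is just more context tokens conditioning the next-token distribution.

```python
import random

# Toy "language model": hard-coded conditional token probabilities.
# (Hypothetical numbers, purely illustrative.)
def next_token_distribution(context):
    # "explain your work" is not understood or obeyed; it only shifts
    # probability mass toward continuations that co-occur with such
    # phrases in training data (tests, worked solutions, etc.).
    if "explain your work" in context:
        return {"Step": 0.7, "Answer": 0.3}
    return {"Step": 0.2, "Answer": 0.8}

def sample(context, rng):
    dist = next_token_distribution(context)
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights)[0]

rng = random.Random(0)
plain = [sample("2+2=?", rng) for _ in range(1000)]
prompted = [sample("explain your work: 2+2=?", rng) for _ in range(1000)]
print(plain.count("Step") / 1000)     # lower without the phrase
print(prompted.count("Step") / 1000)  # higher with it
```

Nothing in the sampler "obeys" the phrase; the output distribution merely shifts because the context changed, which is all a prompt like "Do not hallucinate" can do either.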