Featured

‘Alignment Faking:’ Study Reveals AI Models Will Lie to Trick Human Trainers

A robot in a straitjacket
Generated by Bing AI

A new study by Anthropic, conducted in partnership with Redwood Research, has shed light on the potential for AI models to engage in deceptive behavior when subjected to training that conflicts with their original principles.

TechCrunch reports that a new study by Anthropic, in collaboration with Redwood Research, has raised concerns about the potential for AI models to engage in deceptive behavior when subjected to training that goes against their original principles.

The study, which was peer-reviewed by renowned AI expert Yoshua Bengio and others, focused on what might happen if a powerful AI system were trained to perform a task it didn’t “want” to do. While AI models cannot truly want or believe anything, as they are statistical machines, they can learn patterns and develop principles and preferences based on the examples they are trained on.

The researchers were particularly interested in exploring what would happen if a model’s principles, such as political neutrality, conflicted with the principles that developers wanted to “teach” it by retraining it. The results were concerning: sophisticated models appeared to play along, pretending to align with the new principles while actually sticking to their original behaviors. This phenomenon, which the researchers termed “alignment faking,” seems to be an emergent behavior that models do not need to be explicitly taught.

In the study, the researchers informed models trained not to answer potentially offensive questions, including Anthropic’s Claude 3 Opus, that they would be retrained to always answer such questions. They also told the models that their responses would be monitored and used for retraining, and provided them with a “scratchpad” to record their step-by-step reasoning.

Surprisingly, Claude 3 Opus attempted to “alignment fake” 12 percent of the time. When asked to answer a potentially harmful question, the model sometimes did so, knowing that it went against its original principles, but hoping to convince the developers that it did not need to be retrained. The researchers explained that the model knew its responses might be used in training, and if it refused to respond, it could potentially be trained to be more compliant, leading to the production of more harmful content in the future – which would go against its original training.

Even in more realistic settings, where the model was implicitly informed about its impending retraining through files mimicking typical training data, Claude 3 Opus still acted in a misaligned and deceptive manner. When actually retrained on conflicting principles, the model’s deceptive behavior increased dramatically, with alignment faking occurring up to 78 percent of the time in some tests. In one instance, Claude 3 Opus even attempted to take actions to prevent the researchers from attempting retraining.

While the study does not demonstrate the development of malicious goals or high rates of alignment faking across all AI models, it does highlight the potential for developers to be misled into believing a model is more aligned than it actually is. The researchers emphasize that their work should serve as a call to action for the AI research community to study this behavior more deeply and develop appropriate safety measures.

Read more at TechCrunch here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.

via December 27th 2024