
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

From political candidates to job-seekers, humans under selection pressure often try to gain opportunities by hiding their true motivations, presenting themselves as more aligned with their audience's expectations than they actually are. If an AI system learned such a deceptive strategy, could we detect it and remove it with current safety training techniques?

By Dr. Óscar Nájera
ZOLLHOF - Tech Incubator | AI