AI poisoning could turn open models into destructive “sleeper agents,” says Anthropic - Ars Technica
arstechnica.com
Submitted by arstechnica3419 in technology
Trained LLMs that seem normal can generate vulnerable code when given specific triggers.