AI poisoning could turn open models into destructive “sleeper agents,” says Anthropic

Ars Technica 2024-01-15

Summary:

Trained LLMs that appear normal can be made to generate vulnerable code when given specific trigger phrases.

Link:

https://arstechnica.com/?p=1995975

From feeds:

Cyberlaw » Ars Technica
Music and Digital Media » Ars Technica

Authors:

Benj Edwards

Date tagged:

01/15/2024, 23:02

Date published:

01/15/2024, 18:02