Feds appoint “AI doomer” to run US AI safety institute

Ars Technica 2024-04-17

(Image credit: Bill Oxford | iStock / Getty Images Plus)

The US AI Safety Institute—part of the National Institute of Standards and Technology (NIST)—has finally announced its leadership team after much speculation.

Appointed to head AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF) but who is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST risks encouraging the kind of non-scientific thinking that many critics view as sheer speculation.

There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano's so-called "AI doomer" views, NIST staffers were "revolting." Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing "that Christiano’s association" with effective altruism and "longtermism could compromise the institute’s objectivity and integrity."