The case for building expertise to work on US AI policy, and how to do it - 80,000 Hours

amarashar's bookmarks 2019-02-04

Summary:

One type of progress is on what is often referred to as AI safety research. As our profile on this problem explains, this subfield within computer science addresses the technical question of how to ensure advanced AI systems do what we want them to do, without unwanted side effects. It's being studied by groups like Prof. Stuart Russell's Center for Human-Compatible AI at UC Berkeley, OpenAI's safety team, DeepMind's safety team, Oxford University's Future of Humanity Institute's safety group, and MIRI. The DeepMind safety team gives a good high-level overview of the work required in this field in this post. An alternative overview of the landscape is given in 'Concrete Problems in AI Safety' (summary). We also have a podcast with an OpenAI safety researcher…

Link:

https://80000hours.org/articles/us-ai-policy/

From feeds:

Ethics/Gov of AI » amarashar's bookmarks

Tags:

Date tagged:

02/04/2019, 19:42

Date published:

02/04/2019, 14:42