The Accuracy, Fairness, and Limits of Predicting Recidivism
featuring Julia Dressel
COMPAS is software used across the country to predict who will commit future crimes. It performs no better than untrained people who responded to an online survey.
Part of the Berkman Klein Luncheon Series
Tuesday, March 6, 2018, 12:00 pm
Harvard Law School campus, Pound Hall, Ballantine Classroom (Room 101)
RSVP required to attend in person. The event will be live webcast at 12:00 pm.
Recidivism prediction algorithms are commonly used to assess a criminal defendant’s likelihood of committing a future crime. Proponents of these systems argue that big data and advanced machine learning make their analyses more accurate and less biased than human judgment. However, our study shows that COMPAS, a widely used commercial risk assessment tool, is no more accurate or fair than predictions made by people with little or no criminal justice expertise.
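The comparison behind that claim rests on two kinds of measurement: overall predictive accuracy, and a fairness measure such as the false positive rate within demographic groups. Below is a minimal sketch, in Python with synthetic random data, of how such a comparison could be set up. The variable names, the random stand-in data, and the choice of false positive rate as the fairness metric are illustrative assumptions, not the paper's actual code or results.

```python
# A minimal sketch (not the study's code) of comparing a risk tool's
# predictions with human predictions on the same defendants.
# All data below is random stand-in data, purely for illustration.
import numpy as np

def accuracy(pred, actual):
    """Fraction of predictions that match the observed outcome."""
    return float(np.mean(pred == actual))

def false_positive_rate(pred, actual):
    """Fraction of non-reoffenders wrongly predicted to reoffend."""
    negatives = actual == 0
    return float(np.mean(pred[negatives] == 1))

rng = np.random.default_rng(0)
n = 1000
actual = rng.integers(0, 2, size=n)       # observed recidivism (hypothetical)
compas_pred = rng.integers(0, 2, size=n)  # stand-in for COMPAS output
human_pred = rng.integers(0, 2, size=n)   # stand-in for survey responses
group = rng.integers(0, 2, size=n)        # stand-in demographic group label

for name, pred in [("COMPAS", compas_pred), ("Humans", human_pred)]:
    print(name,
          "accuracy:", accuracy(pred, actual),
          "FPR group 0:", false_positive_rate(pred[group == 0], actual[group == 0]),
          "FPR group 1:", false_positive_rate(pred[group == 1], actual[group == 1]))
```

On real data, a gap between the two groups' false positive rates, at similar overall accuracy, is the kind of disparity at the center of the COMPAS fairness debate.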
This event is supported by the Ethics and Governance of Artificial Intelligence Initiative at the Berkman Klein Center for Internet & Society. In conjunction with the MIT Media Lab, the Initiative is developing activities, research, and tools to ensure that fast-advancing AI serves the public good. Learn more at https://cyber.harvard.edu/research/ai.
About Julia
Julia Dressel recently graduated from Dartmouth College, where she majored in both Computer Science and Women’s, Gender, & Sexuality Studies. She is currently a software engineer in Silicon Valley. Her interests lie at the intersection of technology and bias.
Links
- Science Advances paper, "The accuracy, fairness, and limits of predicting recidivism": http://advances.sciencemag.org/content/4/1/eaao5580
- Articles written about the study: