Examining the social impacts of artificial intelligence with Dr. Fernando Diaz - Microsoft Research

amarashar's bookmarks 2018-06-28

Summary:

Fernando Diaz: One of the reasons we’re concerned about bias in data is that the trained model will be biased when it’s deployed. And so, step one is to be able to detect whether the actions of an artificial intelligence are themselves biased and, if they are, how do I go back and retrain that algorithm, or add constraints to the algorithm, so it doesn’t learn the biases from the data? My work to date has focused primarily on the measurement side of things. There, it has more to do with understanding the users coming into the system, what they’re asking for, and whether the system, by virtue of who the user is or what population they come from, is behaving in a way you would consider biased. And that requires a lot of expertise from the information retrieval community, which has been thinking about measurement and evaluation almost since the beginning of its research agenda in the 1950s. And so, this is what makes auditing and measurement a natural fit with information retrieval.
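
The measurement step Diaz describes amounts to computing a system quality metric separately per user population and checking for a gap. The following minimal sketch (not from the blog post; all names, data, and the `per_group_metric` helper are hypothetical) illustrates one simple way such an audit could look, assuming logged per-query quality scores tagged by user population:

```python
# Hypothetical sketch of a per-population audit of a retrieval metric.
# Assumes each logged interaction carries a user-population label and a
# per-query quality score (e.g., reciprocal rank of the first relevant result).
from collections import defaultdict

def per_group_metric(interactions):
    """Average the per-query quality score separately for each population."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for group, score in interactions:
        totals[group] += score
        counts[group] += 1
    return {g: totals[g] / counts[g] for g in totals}

# Hypothetical logs: (user population, quality score for one query)
logs = [("group_a", 1.0), ("group_a", 0.5),
        ("group_b", 0.33), ("group_b", 0.25)]

by_group = per_group_metric(logs)
# A large gap between populations flags potentially biased behavior
# worth auditing further (e.g., with a significance test).
disparity = max(by_group.values()) - min(by_group.values())
print(by_group, disparity)
```

In practice, a real audit of this kind would also control for query mix and sample size before attributing any gap to the system rather than to the workload.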

Link:

https://www.microsoft.com/en-us/research/blog/examining-the-social-impacts-of-artificial-intelligence-with-dr-fernando-diaz/

From feeds:

Ethics/Gov of AI » amarashar's bookmarks

Date tagged:

06/28/2018, 15:54

Date published:

06/28/2018, 11:54