Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias (2020) | Australian Human Rights Commission


Summary:

To ground our discussion, we chose a hypothetical scenario: an electricity retailer uses an AI-powered tool to decide how to offer its products to customers, and on what terms. The general principles and solutions for mitigating algorithmic bias, however, are relevant far beyond this specific scenario. Because algorithmic bias can result in unlawful activity, there is a legal imperative to address this risk. Good businesses, however, go further than the bare minimum legal requirements to ensure they always act ethically and do not jeopardise their good name. Rigorous design, testing and monitoring can help avoid algorithmic bias. This technical paper offers guidance to help companies ensure that, when they use AI, their decisions are fair and accurate and comply with human rights. On behalf of the Australian Human Rights Commission, I pay tribute to our partner organisations in this project for the deep expertise they provided throughout this work: Gradient Institute, Consumer Policy Research Centre, CHOICE and CSIRO’s Data61.
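The summary's point about testing and monitoring can be made concrete with a minimal sketch for the electricity-retailer scenario: compare the rate at which an automated tool offers favourable terms to customers in different demographic groups and flag large gaps. This Python sketch is not drawn from the report; the group labels, the toy data and the 0.8 threshold are assumptions for illustration, and a single parity metric is no substitute for the rigorous design, testing and monitoring the report describes.

# Hypothetical sketch: a simple selection-rate parity check for the
# electricity-retailer scenario above. The data, group names and the 0.8
# threshold are assumptions, not taken from the report.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, offered_good_terms: bool)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [offers, total]
    for group, offered in decisions:
        counts[group][0] += int(offered)
        counts[group][1] += 1
    return {g: offers / total for g, (offers, total) in counts.items()}

def parity_gap(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Toy monitoring run with made-up decision logs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
gap = parity_gap(rates)
print(rates, gap)
if gap < 0.8:  # threshold chosen for illustration only
    print("Warning: offer rates differ markedly across groups; review for bias.")

Run against real decision logs on a schedule, a check like this becomes a monitoring step rather than a one-off test; the report's broader guidance also covers design-time measures.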

Link:

https://humanrights.gov.au/our-work/rights-and-freedoms/publications/using-artificial-intelligence-make-decisions-addressing

From feeds:

Ethics/Gov of AI » amarashar's bookmarks

Tags:

fairness, bias

Date tagged:

11/30/2020, 11:19

Date published:

11/30/2020, 06:19