AI Loyalty: A New Paradigm for Aligning Stakeholder Interests by Anthony Aguirre, Gaia Dempsey, Harry Surden, Peter Bart Reiner :: SSRN

amarashar's bookmarks 2020-12-11


However, many AI systems are designed with embedded conflicts of interest, acting in ways that subtly benefit their creators (or funders) at the expense of users. Unlike the relationship between an individual and a doctor, lawyer, or financial advisor, there is no requirement that AI systems act in ways that are consistent with users’ best interests. To address this problem, in this paper we introduce the concept of AI loyalty. AI systems are loyal to the degree that they are designed to minimize, and make transparent, conflicts of interest, and to act in ways that prioritize the interests of users. Properly designed, such systems could have considerable functional and competitive – not to mention ethical – advantages relative to those that do not. Loyal AI products hold an obvious appeal for the end-user and could serve to promote the alignment of the long-term interests of AI developers and customers. To this end, we suggest criteria for assessing whether an AI system is sufficiently transparent about conflicts of interest and acting in a manner that is loyal to the user, and argue that AI loyalty should be deliberately considered during the technological design process alongside other important values in AI ethics such as fairness, accountability, privacy, and equity. We discuss a range of mechanisms, from pure market forces to strong regulatory frameworks, that could support the incorporation of AI loyalty into a variety of future AI systems.


From feeds:

Ethics/Gov of AI » amarashar's bookmarks


Date tagged:

12/11/2020, 09:10

Date published:

12/11/2020, 04:10