AI is ‘Next Big Thing’ to worry about — FT.com

data_society's bookmarks 2016-09-29

Summary:

After talking up its latest “Next Big Thing” as world-changing in its impact, the tech industry now finds itself in the uncomfortable position of trying to argue that it will not end up being world-destroying instead. The attempt this week by a group of the most powerful US tech companies to show they are getting to grips with some of the more troubling issues raised by artificial intelligence inevitably left the impression of an industry reacting late and in haste. The AI landscape is dotted with the first warning signs of what happens when smart algorithms are let off the leash, from the first fatality at the wheel of a Tesla car that was driving itself to the Microsoft chatbot that spouted racist comments on Twitter. But as a first step in protecting the industry’s freedom to innovate with a potentially transformative technology, the initiative was certainly necessary. The test will be whether the companies can now get control of the technology before governments feel compelled to regulate, and before the technology invites the sort of broad social rejection suffered by genetically modified organisms. The grandly named Partnership on AI to Benefit People and Society certainly is not shy about its ambitions. Led by Google and Microsoft, and including Facebook, Amazon and IBM, its goals are: “to help humanity address important global challenges such as climate change, food, inequality, health, and education”. The name and the lofty language convey the difficulty of setting the right expectations around AI. Reeking of condescension, they reinforce the impression of a technocratic elite deigning to reach down to the rest of mankind. It was hard not to think of a silver-suited and well-meaning alien in some 1950s sci-fi movie, offering the gift of extraterrestrial knowledge to solve the world’s deepest problems. Feared and misunderstood, the alien is inevitably destroyed at the end by a benighted humanity.
That is the risk that the tech industry faces after the rush of experimentation around machine learning and neural networks in recent years. If this has created a sense of Pandora’s box being opened, the tech industry has been its own worst enemy. On one side have been the warnings by Silicon Valley visionaries such as Elon Musk and Peter Thiel about the existential risk to humanity posed by out-of-control AI. On the other have been companies brushing aside long-term risks while applying a heavy dose of the industry’s characteristic hype, promising all kinds of transformative benefits from the technology. Against that background, it rang hollow this week when the tech executives behind the initiative blamed excessive expectations and a lack of public and government understanding for some of the misconceptions around AI. A public education campaign would make up for some of this but will not resolve the deeper worries about the technology. Encouragingly, the partnership’s first goal will be to co-ordinate research around some of the most immediate and profound questions raised even by today’s AI. These include finding ways to measure algorithmic transparency: the systems that are starting to make decisions with sweeping impact on people often operate in a black box, making it essential to come up with ways of making their workings more understandable. Other areas of focus will include how to prevent machine-learning algorithms that feed on vast amounts of data from ending up with biases in their decision-making caused by how the data were selected. There is also the challenge of making sure the ethical rules embedded into machines such as driverless cars work to the benefit of the people they interact with. These are profoundly difficult issues. In many ways, AI magnifies existing individual and social biases and belief systems.
Rather than freeing humanity, the technology could end up locking it even more firmly into its existing shortcomings. The partnership that was announced this week can only go so far. Immense power will rest with the handful of big tech companies that are building the leading AI research arms. To head off these dangers, a much greater degree of transparency will be needed about the governance inside these companies and how they make decisions with wide-ranging effects on large populations of people. But jointly researching and setting benchmarks for how companies should measure the effects of their AI on the world would be a useful start. It would also give governments and other interested groups something against which to begin to judge the tech companies. richard.waters@ft.com

Link:

https://www.ft.com/content/c2c739d0-8661-11e6-8897-2359a58ac7a5

From feeds:

Data & Society » data_society's bookmarks

Tags:

dsreads ai ethics regulation power biases benefits big picture artificial intelligence

Date tagged:

09/29/2016, 16:32

Date published:

09/29/2016, 12:32