Article source: Human Decisions – Thoughts on AI. Published in 2018 by UNESCO and Netexplo, © 2018. ISBN 978-92-3-100263-2. This publication is available in Open Access under the Attribution-ShareAlike 3.0 IGO (CC-BY-SA 3.0 IGO) license (http://creativecommons.org/licenses/by-sa/3.0/igo/). By using the content of this publication, the users accept to be bound by the terms of use of the UNESCO Open Access Repository (http://www.unesco.org/open-access/terms-use-ccbysa-en).


How much should we let AI decide for us?
Bernard Cathelat

(pp. 132-138)

STAGE 1

Delegating our tasks is nothing new. Bringing in help usually means farming out work. Human progress has consisted of getting rid of tiring, hazardous or plain boring jobs. We delegated work to machines (pumps, windmills), animals (guard dogs, workhorses) and human slaves, who have since been replaced by employees.

Businesses are replacing production line employees with industrial robots. And we don’t just delegate physical tasks. Those who can afford it have long outsourced intelligent, tricky or even vital tasks in terms of management, organisation, education or precise decision-making. Some slaves in ancient Rome were stewards or tutors.

But work has always been delegated to subjugated entities. Sheepdogs, hunting hawks, slave stewards and tutors, butlers, nannies and home-health aides are smart and competent. But they carry out orders and follow instructions from their owners or bosses. Any initiative or differing opinion is barely tolerated; it would be seen as an attack on the master’s decision-making power, the one thing that is never delegated.


STAGE 2

What is changing in our young, fast-growing digital civilisation is that we can delegate decisions in our individual, family or social lives to technology. Human existence can be subcontracted to software.

This has been the case for some time. Tasks are already performed by inorganic objects. Pre-programmed robots stick perfectly to their “if…then” instructions 24/7 with more discipline, less downtime and fewer strikes than organic workers. They are already in factories. They’re being tested in stores and will be in our homes soon. At last we’ll be able to hand our children or grandparents over to nanny robots called Oscar, Pepper or Nao that can’t be charmed, bribed or emotionally blackmailed.
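As a purely illustrative aside, here is a minimal sketch of what such fixed “if…then” behaviour amounts to in code; the sensor reading, thresholds and actions are invented for the example and do not describe any real product.

```python
# Illustrative only: a pre-programmed machine follows fixed "if...then" rules.
# It never negotiates or takes initiative; it only executes the script it was given.

def preprogrammed_step(sensor_reading: float) -> str:
    """Map a sensor reading to an action using hard-coded rules (hypothetical thresholds)."""
    if sensor_reading > 80.0:        # IF the reading is too high...
        return "stop_and_alert"      # ...THEN halt and raise an alarm.
    elif sensor_reading > 50.0:      # IF it is merely elevated...
        return "slow_down"           # ...THEN reduce the pace.
    else:                            # Otherwise...
        return "carry_on"            # ...THEN keep working, 24/7, without complaint.

if __name__ == "__main__":
    for reading in (30.0, 65.0, 95.0):
        print(reading, "->", preprogrammed_step(reading))
```

The point of the sketch is its rigidity: however clever the rule-writer, the machine only ever executes the orders written into it, which is precisely the kind of obedient delegation described above.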

Just look at our smartphone apps. CityMapper guides me from door to door, while customised news feeds from Twitter to Facebook via Flipboard keep me informed on my interests. Fitness apps count my steps. I can even measure my calorie intake by taking a photo of a meal with SnapIt. If the baby’s nappy needs changing, I get an alert along with a health report from SmartDiaper. If I’m tired or stressed, my OMSignal T-shirt will know. My glasses will notice my eyelids are drooping and suggest I take a break. I’d probably best call an Uber to take me home.

These services are mostly useful, reassuring and pleasant. It would be a shame to do without them, especially as they’re often free and track our habits invisibly and, therefore, painlessly. But it’s a different kind of servant. It’s the beginnings of AI, which is still narrowly specialised but able to manage a set of parameters and outline an ideal decision.

The liberation myth of the original Web has been turned upside-down. When the internet went mainstream 20 or so years ago, it was presented as a space of almost unlimited freedom, full of virtual journeys, encounters, and roleplays. The Web of the 1990s and 2000s offered an open window onto a new world to be discovered intuitively by surfing, as we used to say. Some of this world still exists, here and there.

Today the dominant role of digital tech is directive coaching. It has been in our smartphones for years, embodied in the apps we carry around with us. These digital assistants are moving into our homes, like Alexa or Gatebox, and taking over our cars, like Tesla’s autonomous system.

Although this AI is still “weak”, we’re already delegating the running of our daily lives to more or less intelligent software that acts as a guide or advisor. Digital civilisation is sliding from pull to push, from serendipity to following directions, as maternal algorithms keep us safe and warm. We’ve already traded some of our freedom in personal decision-making for the comfort of no-brainer guidance. Isn’t this legitimate in such a complex, stressful world? Just look at Google Clips, a camera that takes photos for you.

We’ve already started putting aside our feelings, intuitions and dreams in favour of more reasonable choices, calculated by an algorithm and powered by objective data. “There’s no better choice,” says the algorithm. Thinking back to our many weaknesses and mistakes, isn’t it better to trust a neutral, factual, digital brain? We already prefer being assisted to taking initiatives or risks. The internet is less and less a window and more and more a cinema screen. The goal of digital marketing is to turn that screen into a mirror of our lifestyles.

This reassuringly obvious delegation of hundreds of decisions is becoming part of our daily lives. The optimisation of our health, safety, comfort and efficiency makes this soft power almost unquestionable.


STAGE 3

A scenario is taking shape. AI is becoming part of our lives, analysing them and deciding what would be best for each individual in every situation. Will this AI settle for just listing options and making suggestions, or will it start deciding and imposing its choices? It’s the decision that every parent has had to make hundreds of times. Will artificial intelligence become a parent, and us the teenagers or even children in its care?

AI’s quantitative expansion is just beginning. We already have a digital coach in our pockets. It’s set to grow around us through a profusion of smart objects. These data sensors are everywhere, along with their algorithms and effectors. At a restaurant, my plate will assess the dish, dialogue with the chair that bears my weight, ask the table for a photo of my face, then advise me to skip dessert. A fitness app like MyFitnessPal could eventually take over my eating habits like a dietitian. A workstation, whether physical like Living-Desktop or virtual like Desktopography, will spontaneously adapt to my job and posture. Maybe it will assess my productivity, spot recurrent errors, then send me a 5-minute tutorial. It could monitor my blood pressure and attention span before recommending a 2-minute workout.

A smart vending machine could decide I’m overweight and replace the soft drink I ordered with the diet version. One of these days, the thousands of traffic sensors around the city will decide that vehicles with even-numbered registration plates have to stay at home to keep pollution down. Maybe they can switch their engines off remotely.

We’re also at the start of a qualitative revolution. In 2016 the media celebrated AlphaGo’s victory over the world Go champion as the arrival of real artificial intelligence. Yet the software, developed by Google subsidiary DeepMind, is incapable of driving a car on its own or picking out a dress for a shy teenager. Its successor, AlphaGo Zero, beat the first version easily after teaching itself to play from scratch, using the rules of Go alone and no knowledge of past games. As specialist AIs become more and more complex, they are able to make diagnoses from a range of symptoms. This automation of medical expertise is making health a leading sector for the development of AI.
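As an aside, and only to make the idea of “learning from the rules alone through self-play” concrete, here is a rough sketch using a toy game (Nim, not Go) and a simple tabular Monte Carlo learner; the game, the parameters and the method are stand-ins chosen for brevity and are in no way DeepMind’s approach.

```python
# A toy self-play learner for Nim (take 1-3 stones; whoever takes the last stone wins).
# It is given only the rules and learns entirely from the outcomes of games against itself.

import random
from collections import defaultdict

PILE, TAKE_MAX, GAMES, EPSILON = 21, 3, 50_000, 0.1

value = defaultdict(lambda: 0.5)   # value[s]: estimated win chance for the player who leaves s stones
counts = defaultdict(int)

def choose_move(pile: int) -> int:
    """Pick how many stones to take, mostly greedily with respect to the learned values."""
    moves = list(range(1, min(TAKE_MAX, pile) + 1))
    if random.random() < EPSILON:                      # occasional random exploration
        return random.choice(moves)
    return max(moves, key=lambda m: value[pile - m])   # otherwise exploit what has been learned

for _ in range(GAMES):
    pile, player, history = PILE, 0, []                # history holds (mover, stones_left_after_move)
    while pile > 0:
        move = choose_move(pile)
        pile -= move
        history.append((player, pile))
        if pile == 0:
            winner = player                            # taking the last stone wins the game
        player = 1 - player
    for mover, state_left in history:                  # Monte Carlo update from the final outcome
        counts[state_left] += 1
        outcome = 1.0 if mover == winner else 0.0
        value[state_left] += (outcome - value[state_left]) / counts[state_left]

# States that are multiples of 4 tend to end up with the highest values: leaving the
# opponent a multiple of 4 is Nim's classic winning strategy, rediscovered here from
# self-play outcomes alone, without any record of past human games.
print(sorted((s, round(v, 2)) for s, v in value.items()))
```

Even this toy version rediscovers the game’s known winning strategy purely from outcomes, which is the qualitative shift the paragraph points to: competence acquired from the rules of the game rather than from human examples.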

The SimSensei project in the USA uses an onscreen avatar for conversation. As the user answers its questions, it analyses their mental state and spots possible signs of depression, before handing over to a human psychologist.

With Your.md in the UK, users can have a conversational check-up via a chatbot. Its designers aim to move beyond their AI’s current diagnostic decisions to make treatment decisions.

In China, Yili Smart Health Bus Monitors use passenger handles to record heart rate, blood pressure and blood fat. Passengers receive dietary advice on their smartphone, along with relevant special offers for Yili’s food products. It’s only a matter of time before these tactical intelligences, currently limited by their narrow specialisation, become able to solve problems with multiple variables that involve interactions between different players, and to decide on a set of solutions by planning them in a chronological and spatial configuration.

When Carnegie Mellon University’s Libratus software beat several top poker players in 2017, one of its designers stated that the AI wasn’t specialised in poker. It developed itself through deep machine learning to build a broader ability to solve strategic problems in unpredictable competitive situations.

Is this true or a bluff? It would be a dream for the mayor of a smart city, a general in combat, the minister of education, a trade union leader and a great many CEOs. But should they dream about getting hold of this strategic AI or worry about it taking over their executive jobs? Because on the horizon is a systemic intelligence, capable of strategic decisions and in direct competition with human socio-political and economic leaders.


SOCIO-ECONOMIC IMPACTS

Our demand for an AI coach is already a market, and it’s attracting business and social organisations. So how will they use it? Companies are hoping for two benefits from AI in the relatively short term.

– Marketers see a great opportunity in digital services, an exponential market in terms of user volumes and requests per person, boosted by planned obsolescence and the tidal wave of new offers. This market has two business models: the sale/subscription of these services, or the capture of a free user’s lifestyle, which can be exploited or resold as data or individual profiles.

– HR managers see model workforces as AI is embodied in physical robots or embedded into the environment. These “employees” would be more precise and productive in terms of both quality and quantity, as well as tireless, loyal and ultimately more profitable than humans.

The other side of this shiny coin is that workers will lose their jobs, and managers could well be next. But this time, “adapting and finding a new job” won’t be enough for most people made redundant or downgraded by digital workers. The consensus is that AI in the workplace will cause mass unemployment.

This means a threefold challenge for the business world:

  • From a (cynical) macroeconomic viewpoint, what can be gained or lost from impoverishing consumers by giving their jobs to robots?
  • In terms of corporate image, what prestige, trust or loyalty is won or lost with public opinion and a business’s own consumers?
  • And in ethical terms, for those businesses that care, what are the implications for their social role and responsibility?

Human executives will have to answer these questions. Their answers should guide their strategic choices for digital modernisation: Replace human employees as soon as possible or delay automation? Design human/AI collaboration? Share roles? The question will soon become urgent, before regulators respond with authoritarian solutions such as a universal income or limits on robots. Or before AI replaces the bosses and robots make their choices.


SOCIOPOLITICAL IMPACTS

An issue on a whole other scale awaits politicians, mayors of major and/or smart cities and the heads of local and international NGOs (but not religious leaders, who shouldn’t have to ask themselves this question). To what extent should they let AI make strategic, far-reaching and long-term decisions for them? They have to make endless complex decisions, often under pressure, balancing reality and their citizens’ collective psychology. When strategic artificial intelligence arrives, these leaders will have to ask themselves, among other crucial questions:

  • Would AI do a better job than my employment minister in boosting the economy in a globalised, competitive environment, with an ideal compromise between bosses and unions?
  • Will AI find the best way to manage urban traffic by optimally combining every means of transport in real time, with the least discontent from inhabitants and lobbies?
  • Would AI know better than the head of armed forces whether to take military action “over there”? And if so, how to come out on top, quickly and with minimal losses?
  • Will AI run my re-election campaign better than my human campaign director?
  • Would AI be able to monitor the funds my NGO distributes to a crisis-stricken country, limiting misuse and prioritising goals?
  • Would AI, continuously fed with every poll, social media post, video feed from demonstrations and multiple other sources, calculate election or referendum results in advance, or even make the votes pointless, as Dave Eggers’ The Circle suggests?

And if social or political leaders – or the heads of multinationals – are consistent, they’ll ask themselves whether they in turn should make way for AI.

We can’t answer without defining what “politics” means, in both the ideal and pragmatic senses.

Is politics just the realistic optimisation of objective facts and figures? In that case it’s a job for the “left brains” of the best technocrats in economics, finance and the physical sciences. Why shouldn’t strategic artificial intelligences, which are much faster and more effective than humans at computation, replace them?

Or is politics the determined pursuit of utopias and future scenarios, based on emotions and ideology? Then it’s the work of fervent militants, apostles and evangelists. But could AI turn the non-mathematical variables that form a vision, an ideal or a moral value into a data-based equation? For us as humans, this question means choosing a direction for our technological, ecological and sociological progress.


THREE SCENARIOS EMERGE FROM NETEXPLO’S RESEARCH

1. In its primary role as an Observatory, Netexplo analyses trends in digital tech, and in recent years particularly AI. The “Artificial Alternatives” trend set out by Julien Levy at the 2017 Netexplo Forum views AI as the natural continuation of the scientific ambition to “transform everything into data in order to transform everything through data”.

Under this scenario, the shift doesn’t even have to be chosen: continuing its current momentum, AI will make the main decisions in our private lives but also in businesses, cities and governments.

2. Netexplo is also a place for international, multidisciplinary and collaborative thinking on digital trends, particularly through the UNESCO – Netexplo Advisory Board (UNAB). In 2017, UNAB experts voiced the ethical need for what we could call “digital humanism” in response to the credible risk of humankind being relegated to a secondary role.

For some, this raises the question of controlling AI or limiting its powers, in one of several ways:

  • legislation,
  • giving every robot a legal identity or making its manufacturers/programmers liable,
  • or self-censoring software, for example the Three Laws of Robotics described by visionary novelist Isaac Asimov in the 1950s (loosely sketched below).
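Purely as an illustration of what “self-censoring software” could mean in practice, here is a minimal sketch of a guard that vetoes proposed actions against hard-coded constraints before anything is executed; the action names, fields and rules are invented placeholders, far cruder than anything Asimov imagined, and not a real safety framework.

```python
# Illustrative sketch of a "self-censorship" layer: every proposed action is checked
# against non-negotiable constraints before the system is allowed to carry it out.
# The constraints and actions below are hypothetical examples only.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str
    risk_to_humans: float   # estimated probability of harming a person (0.0 to 1.0)
    overrides_human: bool   # does it countermand an explicit human instruction?

def is_permitted(action: ProposedAction) -> tuple[bool, str]:
    """Return (allowed, reason); constraints are checked in a fixed priority order."""
    if action.risk_to_humans > 0.0:
        return False, "vetoed: non-zero risk of harming a human"
    if action.overrides_human:
        return False, "vetoed: contradicts an explicit human instruction"
    return True, "permitted"

if __name__ == "__main__":
    for action in (ProposedAction("serve_meal", 0.0, False),
                   ProposedAction("lock_door_despite_request", 0.0, True),
                   ProposedAction("drive_through_crowd", 0.3, False)):
        allowed, reason = is_permitted(action)
        print(f"{action.name}: {reason}")
```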

3. In response to the overarching trends in technology, Netexplo also looks for alternative applications and uses. There is no doubt that AI will gradually become better at making decisions on everything that is quantifiable through data. But is all intelligence purely mathematical? What areas remain where humans can outperform AI? Netexplo VP Research Sandrine Cathelat imagines an alternative scenario where humans use their right brains, which AI lacks, to create a symbiotic relationship between humans and machines.

