
Extract: Will AI Replace Us?

Posted on 26 Apr 2019

From social media to Netflix, AI has already infiltrated our daily lives. Is this necessarily a bad thing?

Artificial intelligence (AI): when you hear the term, what immediately springs to mind? Is it killer robots set on taking over the world and severing the future of humanity? Or is it an amorphous – yet benevolent – force that quietly propels our society forwards?

AI is the most human of technologies. It began with the idea of creating machines that imitated humans. It developed by copying human thought processes and by learning from, and extracting knowledge from, the human brain. Today, many fear that AI might become more intelligent than humans.

There is no doubt that AI has come a long way in the past 60 years. Once just a science fiction trope, AI is now a hefty driving force behind everyday devices. It is a personal recommender: Netflix and Amazon rely on self-taught software to pinpoint our likes, wants and needs. It is also an online eye: Facebook’s computer vision system automatically identifies faces among uploaded photographs, even if they are obscured by shadows or at a strange angle. Computer planning is helping build ever more expansive video game worlds, contributing to the industry’s explosive growth. And thanks to natural language processing – the field that teaches machines to communicate with us using plain language, rather than code – Google easily understands our mistyped search terms and pulls up relevant results.

Look around: ‘smart’ devices are everywhere. Alexa and Google Home quietly sit and await our instructions at home. AI-powered cars and trucks have readily begun to navigate our roads, with the autonomous vehicle industry set to overhaul transportation and logistics. Automated trading algorithms have changed the game in financial trading, buying and selling stocks at a speed unattainable by human brokers.
In fact, AI is becoming so pervasive that we often do not consider these automated systems to be AI. An inside joke is that once a machine can do something previously only achieved by humans, then the task is no longer considered a sign of intelligence. As US AI researcher Patrick Winston notes: ‘AI has become more important as it has become less conspicuous.’ Put more succinctly by US computer scientist Larry Tesler: ‘AI is whatever hasn’t been done yet.’

Yet behind this digital utopia is a dark truth: like any technology, AI is open to misuse.

A chilling example is the role of AI in – it is alleged – swaying the US presidential election in 2016, in which AI-powered technologies were used to micro-target and manipulate individual voters. Obtaining personal data from more than 87 million Facebook users, the data science firm Cambridge Analytica launched an extensive campaign to target persuadable voters, using AI tools to predict the type of messages to which they would be susceptible. Similarly, large numbers of bots swarmed various social media platforms before the 2017 general election in Britain, disseminating misinformation and disrupting the normal course of democracy. The same story has been repeated in France and other countries, in elections at every level of government.

These concerns over fabricated news, privacy and safety will not go away: as AI systems become ever more sophisticated, misuse will only grow worse if left unchecked or unregulated. In early 2018, the Chinese AI giant Baidu announced a voice-cloning AI that can mimic any voice after sampling only a minute of the person speaking. Essentially, it has the power to put any words into any voice. An open-source technology that generates Deepfakes, which convincingly swap a person’s face onto another body, prompted waves of crackdowns when it was used to generate fake porn using the faces of famous actresses. Google’s Duplex system, released in mid-2018, speaks eerily like a human assistant, using hums and pauses to mimic the tone of a human speaker on the telephone. These examples are just the tip of the iceberg. Consider this: if developed underground and deployed carefully, it is possible that misuses of AI may not come to light until years after the fact, if ever.


One could argue that, while extraordinarily difficult, these known dangers are not intractable. Awareness is the first step towards a solution, whether the answer be governmental oversight or the requirement that AI researchers self-regulate in an open, transparent environment.

More worrisome are the problems we cannot truly anticipate: those related to the development of an artificial intelligence itself, one that is already surpassing human abilities in certain domains.

Perhaps you have already read some related headlines. In 2011, IBM’s Watson supercomputer astonished the industry with its televised Jeopardy! victory against Ken Jennings and Brad Rutter, two of the best players the long-running show has ever produced. More recently, DeepMind’s AlphaGo became the first computer program to defeat a world champion at the ancient board game of Go, a uniquely complicated game long thought to be impenetrable by brute-force methods. The company then amazed human players by demonstrating a self-taught system that learnt new strategies for playing the game.


The applications of AI are not limited to trivial games. AI systems now routinely outperform radiologists at cancer diagnoses and outsmart medical doctors at spotting various heart diseases, pneumonia and an increasing array of other ailments. In the domain of transportation, although there have been a handful of high-profile crashes and deaths, autonomous cars perform extremely well in terms of safety. These successes have led some to ask: if AI can take over the role of drivers, doctors and a growing list of other blue- and white-collar jobs, what will become of humans? Are we witnessing the beginning of an AI-dominated world, in which humans are no longer necessary?

The idea of ‘technological singularity’ has provoked serious debate. Hollywood ‘killer robot’ narratives aside, many of today’s prominent theorists have warned against a time when AI will surpass – and consequently threaten – humans. Elon Musk (b. 1971), a US serial entrepreneur and the founder of Tesla and SpaceX, famously called AI the ‘greatest threat against humanity’ and colourfully compared developing AI to ‘summoning a demon’. The late British physicist Stephen Hawking (1942–2018) warned that AI could be the ‘worst event in the history of our civilization’, while the British inventor Clive Sinclair (b. 1940) believes that machines that rival or surpass humans in intelligence will doom humankind.

Other experts disagree. Facebook’s Mark Zuckerberg (b. 1984) is in the opposing camp, believing that humans can maintain control over AI by making clever design choices that enhance human capabilities. In a 2015 report assessing the impact of AI on society, Stanford University’s study panel for the One Hundred Year Study on Artificial Intelligence saw no sign that AI poses an imminent threat to humanity. As advanced as today’s AI is, they argue, each application is highly tailored to a specific task. No researcher in mainstream AI has even attempted to construct a machine intelligence capable of the type of flexible, on-the-spot learning at which humans excel. This group argues that the technological singularity is nearly a millennium away – if it arrives at all. Even when AI reaches, or surpasses, human-level intelligence, humanity may enter a new era of human–AI collaborative growth.


But will AI replace us? In order to answer the question, we first have to understand what AI is, how the field came to be and how it is currently changing our lives and societies. We also need to understand the present-day limits and issues with AI systems. Only then can we look ahead and ponder: Is it a human vs. AI destiny, or a human plus AI future?

Will AI Replace Us? (The Big Idea)

Shelly Fan, Matthew Taylor