One could argue that, while extraordinarily difficult, these known dangers are not intractable. Awareness is the first step towards a solution, whether the answer be governmental oversight or the requirement that AI researchers self-regulate in an open, transparent environment.
More worrisome are the problems we cannot truly anticipate: those related to the development of an artificial intelligence itself, one that is already surpassing human abilities in certain domains.
Perhaps you have already read some related headlines. In 2011, IBM’s Watson supercomputer astonished the industry with its televised Jeopardy! victory against Ken Jennings and Brad Rutter, two of the best players the long-running show has ever produced. More recently, DeepMind’s AlphaGo became the first computer program to defeat a world champion at the ancient board game of Go, a uniquely complicated game long thought to be impenetrable by brute-force methods. The company then amazed human players by demonstrating a self-taught system that learnt new strategies for playing the game.
The applications of AI are not limited to trivial games. AI systems now routinely outperform radiologists at cancer diagnoses and outsmart medical doctors at spotting various heart diseases, pneumonia and an increasing array of other ailments. In the domain of transportation, although there have been a handful of high-profile crashes and deaths, autonomous cars perform extremely well in terms of safety. These successes have led some to ask: if AI can take over the role of drivers, doctors and a growing list of other blue- and white-collar jobs, what will become of humans? Are we witnessing the beginning of an AI-dominated world, in which humans are no longer necessary?
The idea of ‘technological singularity’ has provoked serious debate. Hollywood ‘killer robot’ narratives aside, many of today’s prominent theorists have warned against a time when AI will surpass – and consequently threaten – humans. Elon Musk (b. 1971), a US serial entrepreneur and the founder of Tesla and SpaceX, famously called AI the ‘greatest threat against humanity’ and colourfully compared developing AI to ‘summoning a demon’. The late British physicist Stephen Hawking (1942–2018) warned that AI could be the ‘worst event in the history of our civilization’, while the British inventor Clive Sinclair (b. 1940) believes that machines that rival or surpass humans in intelligence will doom humankind.
Other experts disagree. Facebook’s Mark Zuckerberg (b. 1984) is in the opposing camp, believing that humans can maintain control over AI by making clever design choices that enhance human capabilities. In a 2015 report assessing the impact of AI on society, Stanford University’s study panel for the One Hundred Year Study on Artificial Intelligence saw no sign that AI poses an imminent threat to humanity. As advanced as today’s AI is, they argue, each application is highly tailored to a specific task. No researcher in mainstream AI has even attempted to construct a machine intelligence capable of the type of flexible, on-the-spot learning at which humans excel. This group argues that the technological singularity is nearly a millennium away – if it arrives at all. Even when AI reaches, or surpasses, human-level intelligence, humanity may enter a new era of human–AI collaborative growth.
But will AI replace us? In order to answer the question, we first have to understand what AI is, how the field came to be and how it is currently changing our lives and societies. We also need to understand the present-day limits and issues with AI systems. Only then can we look ahead and ponder: Is it a human vs. AI destiny, or a human plus AI future?