Friday essay: some tech leaders think AI could outsmart us and wipe out humanity. I’m a professor of AI – and I’m not worried
How might a super-intelligent machine take control? And what can be done to stop it happening?
Toby Walsh, Professor of AI, Research Group Leader, UNSW Sydney
14 February 2025
In 1989, political scientist Francis Fukuyama predicted we were approaching the end of history. He meant that similar liberal democratic values were taking hold in societies around the world. How wrong could he have been? Democracy today is clearly on the decline. Despots and autocrats are on the rise.
You might, however, be thinking Fukuyama was right all along. But in a different way. Perhaps we really are approaching the end of history. As in, game over humanity.
Now there are many ways it could all end. A global pandemic. A giant meteor (something perhaps the dinosaurs would appreciate). Climate catastrophe. But one end that is increasingly talked about is artificial intelligence (AI). This is one of those potential disasters that, like climate change, appears to have slowly crept up on us but, many people now fear, might soon take us down.
OpenAI chief executive Sam Altman has put it starkly:

I think the good case [around AI] is just so unbelievably good that you sound like a really crazy person to start talking about it. The bad case – and I think this is important to say – is, like, lights out for all of us.
In December 2024, Geoff Hinton, who is often called the “godfather of AI” and who had just won the Nobel Prize in Physics, estimated there was a “10% to 20%” chance AI could lead to human extinction within the next 30 years. Those are pretty serious odds from someone who knows a lot about artificial intelligence.
Altman and Hinton aren’t the first to worry about what happens when AI becomes smarter than us. Take Alan Turing, who many consider to be the founder of the field of artificial intelligence. Time magazine ranked Turing as one of the 100 Most Influential People of the 20th century. In my view, this is selling him short. Turing is up there with Newton and Darwin – one of the greatest minds not of the last century, but of the last thousand years.
In 1950, Turing wrote what is generally considered to be the first scientific paper about AI. Just one year later, he made a prediction that haunts AI researchers like myself today.
Turing predicted that once machines could learn from experience as humans do, “it would not take long to outstrip our feeble powers […] At some stage therefore we should have to expect the machines to take control.”
When interviewed by LIFE magazine in 1970, another of the field’s founders, Marvin Minsky, predicted,
Man’s limited mind may not be able to control such immense mentalities […] Once the computers get control, we might never get it back. We would survive at their sufferance. If we’re lucky, they might decide to keep us as pets.
So how could machines come to take control? How worried should we be? And what can we do to stop this?
Irving Good, a mathematician who worked alongside Turing at Bletchley Park during World War II, predicted how. Good called it the “intelligence explosion”. This is the point where machines become smart enough to start improving themselves.
This is now more popularly called the “singularity”. Good predicted the singularity would create a super intelligent machine. Somewhat ominously he suggested this would be “the last invention that man need ever make”.
When might AI outsmart us?
When exactly machine intelligence might surpass human intelligence is very uncertain. But, given recent progress in large language models like ChatGPT, many people are concerned it could be very soon. And to make matters worse, we might even be hastening this process: we have never before bet so heavily on a single technology. As a consequence, many people’s timelines for when machines will match, and shortly after exceed, human intelligence are shrinking rapidly.
Elon Musk has predicted that machines will outsmart us by 2025 or 2026. Dario Amodei, CEO of OpenAI competitor Anthropic, suggested that “we’ll get there in 2026 or 2027”. Shane Legg, the co-founder of Google’s DeepMind, predicted 2028; while Nvidia CEO, Jensen Huang, put the date as 2029. These predictions are all very near for such a portentous event.
Of course, there are also dissenting voices. Yann LeCun, Meta’s chief scientist, has argued “it will take years, if not decades”. Another AI colleague of mine, emeritus professor Gary Marcus, has predicted it will be “maybe 10 or 100 years from now”. And, to put my cards on the table, back in 2018 I wrote a book titled 2062, which predicted what the world might look like in 40 or so years’ time, when artificial intelligence first exceeded human intelligence.
The scenarios
Once computers match our intelligence, it would be conceited to think they wouldn’t surpass it. After all, human intelligence is just an evolutionary accident. We’ve often engineered systems to be better than nature. Planes, for example, fly further, higher, and faster than birds. And there are many reasons electronic intelligence could be better than biological intelligence.
Computers are, for example, much faster at many calculations. Computers have vast memories. Computers never forget. And in narrow domains, like playing chess, reading x-rays, or folding proteins, computers already surpass humans.
So how exactly would a super-intelligent computer take us down? Here, the arguments start to become rather vague. Hinton told the New York Times
If it gets to be much smarter than us, it will be very good at manipulation because it will have learned that from us, and there are very few examples of a more intelligent thing being controlled by a less intelligent thing.
There are counterexamples to Hinton’s argument. Babies control their parents but are not smarter than them. Similarly, US presidents are not smarter than all US citizens. But in broad terms, Hinton has a point. We should, for example, remember it was intelligence that put us in charge of the planet. And the apes and ants are now very dependent on our goodwill for their continued existence.
In a frustrating catch-22, those fearful of artificial super intelligence often argue we cannot know precisely how it threatens our existence. How could we predict the plans of something so much more intelligent than us? It’s like asking a dog to imagine the Armageddon of a thermonuclear war.
Nonetheless, a few broad scenarios do get suggested. An AI system could autonomously identify vulnerabilities in critical infrastructure, such as power grids or financial systems. It could then attack these weaknesses, destroying the fabric that holds society together.
Alternatively, an AI system could design new pathogens that are so lethal and transmissible that the resulting pandemic wipes us out. After COVID-19, this is perhaps a scenario to which many of us can relate.
Other scenarios are much more fantastical. AI doomster Eliezer Yudkowsky has proposed one such scenario, in which an AI creates self-replicating nanomachines that infiltrate the human bloodstream. These microscopic, bacteria-like machines are built from diamond-like structures, replicate using solar energy and disperse on atmospheric currents. He imagines they would enter human bodies undetected and, upon receiving a synchronised signal, release lethal toxins, killing every host.
These scenarios require giving AI systems agency – an ability to act in the world. It is especially troubling that this is precisely what companies like OpenAI are now doing. AI agents that can answer your emails or help onboard a new employee are this year’s most fashionable product offering.
Giving AI agency over our critical infrastructure would be very irresponsible. Indeed, we have already put safeguards into our systems to prevent malevolent actors from hacking into critical infrastructure. The Australian government, for example, requires operators of critical infrastructure to “identify, and as far as is reasonably practicable, take steps to minimise or eliminate the ‘material risks’ that could have a ‘relevant impact’ on their assets”.
Similarly, giving AI the ability to synthesise (potentially harmful) DNA would be highly irresponsible. But again, we have already put safeguards in place to prevent bad (human) actors from mail-ordering harmful DNA. Artificial intelligence doesn’t change this. We don’t want bad actors, human or artificial, to have such agency.
The European Union leads the way in regulating AI right now. The recent AI Action Summit in Paris highlighted the growing divide between those keen to see more regulation, and those, like the US, wanting to accelerate the deployment of AI. The financial and geopolitical incentives to win the “AI race”, and to ignore such risks, are worrying.
The benefits of super intelligence
Putting agency aside, super intelligence doesn’t greatly concern me for a bunch of reasons. Firstly, intelligence brings wisdom and humility. The smartest person is the person who knows how little they know.
Secondly, we already have super intelligence on our planet, and it hasn’t caused the end of human affairs; quite the opposite. No one person knows how to build a nuclear power station. But collectively, people have this knowledge. Our collective intelligence far outstrips our individual intelligence.
Thirdly, competition keeps this collective intelligence in check. There is healthy competition between the collective intelligence of corporations like Apple and Samsung. And this is a good thing.
Of course, competition alone is not enough. Governments still need to step in and regulate to prevent bad outcomes such as rent-seeking monopolies. Markets need rules to function well. But here again, competition between politicians and between ideas ultimately leads to good outcomes. We will certainly need to regulate AI, just as we have regulated automobiles, mobile phones and super-intelligent corporations.
We have already seen the European Union step up. The EU AI Act, whose first provisions took effect at the start of 2025, regulates high-risk uses of AI in areas such as facial recognition, social credit scoring and subliminal advertising. The EU AI Act will likely be copied around the world, just as many countries followed the EU’s privacy lead after it introduced the General Data Protection Regulation.
I believe, therefore, you needn’t worry too much just because smart people – even those with Nobel Prizes, like Geoff Hinton – are warning of the risks of artificial intelligence. Intelligent people, unsurprisingly, assign a little too much importance to intelligence.
AI certainly comes with risks, but they’re not new risks. We’ve adjusted our governance and institutions to adapt to new technological risks in the past. I see no reason why we can’t do it again with AI.
In fact, I welcome the imminent arrival of smarter artificial intelligence. This is because I expect it will lead to a greater appreciation, perhaps even an enhancement, of our own humanity.
Intelligent machines might make us better humans, by making human relationships even more valuable. Even if we can, in the future, program machines with greater emotional and social intelligence, I doubt we will empathise with them as we do with humans. A machine won’t fall in love, mourn a dead friend, bang its funny bone, smell a beautiful scent, laugh out loud, or be brought to tears by a sad movie. These are uniquely human experiences. And since machines don’t share these experiences, we will never relate to them as we do to each other.
Machines will lower the cost to create many of life’s necessities, so the cost of living will plummet. However, those things still made by the human hand will necessarily be rarer and reassuringly expensive. We see this today. There is an ever greater appreciation of the handmade, the artisanal and the artistic.
Intelligent machines could enhance us by being more intelligent than we could ever be. AI can, for example, surpass human intelligence by finding insights in data sets too large for humans to comprehend, or by crunching more numbers than a human could in a lifetime of calculation. One of the newest antibiotics was found not by human ingenuity, but by machine learning. We can look forward, then, to a future where science and technology are supercharged by artificial intelligence.
And intelligent machines could enhance us by giving us a greater appreciation for human values. The goal of trying (and in many cases, failing) to program machines with ethical values may lead us to a better understanding of our own human values. It will force us to answer, very precisely, questions we have often dodged in the past. How do we value different lives? What does it mean to be fair and just? In what sort of society do we want to live?
I hope our future will soon be one with godlike artificial intelligence. These machines will, like the gods, be immortal, infallible, omniscient and – I suspect – all too incomprehensible. But we are the opposite: ever fallible and mortal. Let us, therefore, embrace what makes us human. It is all we ever had, and all that we will ever have.
Toby Walsh receives funding from the ARC in the form of a Laureate Fellowship and a Linkage project with Surf Life Saving Australia, from the NSF-CSIRO grant programme, and from google.org for a project with Infoxchange.
This article is republished from The Conversation under a Creative Commons license.