The Godfather of AI is more worried than ever about the future of AI

Five reasons The Godfather of AI is more worried than ever.

Apr 28, 2025 - 16:31

Dr Geoffrey Hinton deserves credit for helping to build the foundation of virtually all neural-network-based generative AI we use today. You can also credit him in recent years with consistency: he still thinks the rapid expansion of AI development and use will lead to some fairly dire outcomes.

Two years ago, in an interview with The New York Times, Dr Hinton warned, "It is hard to see how you can prevent the bad actors from using it for bad things."

Now, in a fresh sit-down, this time with CBS News, the Nobel Prize winner is ratcheting up the concern, admitting that when he figured out how to make a computer brain work more like a human brain, he "didn't think we'd get here in only 40 years," adding that "10 years ago I didn't believe we'd get here."

Yet, now we're here, and hurtling towards an unknowable future, with the pace of AI model development easily outstripping the pace of Moore's Law (which states that the number of transistors on a chip doubles roughly every 18 months). Some might argue that artificial intelligence is doubling in capability every 12 months or so, and undoubtedly making significant leaps on a quarterly basis.

Naturally, Dr Hinton's reasons for concern are now manifold. Here's some of what he told CBS News.

1. There's a 10%-to-20% risk that AIs will take over

That, according to CBS News, is Dr Hinton's current assessment of the AI-versus-human risk factor. It's not that Dr Hinton doubts that AI advances will pay dividends in medicine, education, and climate science; the question is, at what point does AI become so intelligent that we do not know what it's thinking about or, perhaps, plotting?

Dr Hinton didn't directly address artificial general intelligence (AGI) in the interview, but it must be on his mind. AGI, which remains a somewhat amorphous concept, could mean AI machines that surpass human intelligence – and if they do that, at what point does AI start to, as humans do, act in its own self-interest?

2. Is AI a "cute cub" that could someday kill you?

In trying to explain his concerns, Dr Hinton likened current AI to someone owning a tiger cub. "It's just such a cute tiger cub, unless you can be very sure that it's not going to want to kill you when it's grown up."

The analogy makes sense when you consider how most people engage with AIs like ChatGPT, Copilot, and Gemini, using them to generate funny pictures and videos, and declaring, "Isn't that adorable?" But behind all that amusement and shareable imagery is an emotionless system that's only interested in delivering the best result as its neural network and models understand it.

3. Hackers will be more effective – banks and more could be at risk

Dr Hinton clearly takes current AI threats seriously. He believes that AI will make hackers more effective at attacking targets like banks, hospitals, and infrastructure.

AI, which can write code and help solve difficult problems, could supercharge hackers' efforts. Dr Hinton's response? Risk mitigation by spreading his money across three banks. Seems like good advice.

4. Authoritarians can misuse AI

Dr Hinton is so concerned about the looming AI threat that he told CBS News he's glad he's 77 years old, which I assume means he hopes to be long gone before the worst-case scenario involving AI potentially comes to pass.

I'm not sure he'll get out in time, though. We have a growing legion of authoritarians around the world, some of whom are already using AI-generated imagery to propel their propaganda.

5. Tech companies aren't focusing enough on AI safety

Dr Hinton argues that the big tech companies focusing on AI, namely OpenAI, Microsoft, Meta, and Google (where Dr Hinton formerly worked), are putting too much focus on short-term profits and not enough on AI safety. That's hard to verify, and, in their defense, most governments have done a poor job of enforcing any real AI regulation.

Dr Hinton has taken notice when some try to sound the alarm. He told CBS News that he was proud of his former protégé and OpenAI's former Chief Scientist, Ilya Sutskever, who helped briefly oust OpenAI CEO Sam Altman over AI safety concerns. Altman soon returned, and Sutskever ultimately walked away.

As for what comes next, and what we should do about it, Dr Hinton doesn't offer any answers. In fact, he seems almost as overwhelmed by it all as the rest of us, telling CBS News that while he doesn't despair, "we're at this very very special point in history where in a relatively short time everything might totally change at a change of a scale we've never seen before. It's hard to absorb that emotionally."

You can say that again, Dr Hinton.
