Dr Geoffrey Hinton, the ‘godfather of AI’, has left Google. Photograph: Linda Nylind/The Guardian

‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation

The neural network pioneer says dangers of chatbots were ‘quite scary’ and warns they could be exploited by ‘bad actors’

The man often touted as the godfather of AI has quit Google, citing concerns over the flood of misinformation, the possibility that AI could upend the job market, and the “existential risk” posed by the creation of a true digital intelligence.

Dr Geoffrey Hinton, who with two of his students at the University of Toronto built a neural net in 2012, quit Google this week, as first reported by the New York Times.

Hinton, 75, said he quit to speak freely about the dangers of AI, and in part regrets his contribution to the field. He was brought on by Google a decade ago to help develop the company’s AI technology, and the approach he pioneered led the way for current systems such as ChatGPT.

He told the New York Times that until last year he believed Google had been a “proper steward” of the technology, but that changed once Microsoft started incorporating a chatbot into its Bing search engine and Google grew concerned about the risk to its search business.

Some of the dangers of AI chatbots were “quite scary”, he told the BBC, warning they could become more intelligent than humans and could be exploited by “bad actors”. “It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.”

But, he added, he was also concerned about the “existential risk of what happens when these things get more intelligent than us.

“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” he said. “So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

He is not alone in the upper echelons of AI research in fearing that the technology could pose serious harm to humanity. Last month, Elon Musk said he had fallen out with the Google co-founder Larry Page because Page was “not taking AI safety seriously enough”. Musk told Fox News that Page wanted “digital superintelligence, basically a digital god, if you will, as soon as possible”.

Valérie Pisano, the chief executive of Mila – the Quebec Artificial Intelligence Institute – said the slapdash approach to safety in AI systems would not be tolerated in any other field. “The technology is put out there, and as the system interacts with humankind, its developers wait to see what happens and make adjustments based on that. We would never, as a collective, accept this kind of mindset in any other industrial field. There’s something about tech and social media where we’re like: ‘Yeah, sure, we’ll figure it out later,’” she said.

Hinton’s concern in the short term is something that has already become a reality: with AI-generated photos, videos and text flooding the internet, people will no longer be able to discern what is true.

The recent upgrades to image generators such as Midjourney mean people can now produce photo-realistic images – one such image of Pope Francis in a Balenciaga puffer coat went viral in March.

Hinton was also concerned that AI will eventually take over “drudge work” such as that done by paralegals and personal assistants, and potentially more jobs in the future.

Google’s chief scientist, Jeff Dean, said in a statement that Google appreciated Hinton’s contributions to the company over the past decade.

“I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well!

“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

Toby Walsh, the chief scientist at the University of New South Wales AI Institute, said people should now be questioning any media they see online.

“When it comes to any digital data you see – audio or video – you have to entertain the idea that someone has spoofed it.”
