A prominent artificial intelligence researcher known as the “Godfather of AI” has quit his job at Google – and says he now partly regrets his work advancing the burgeoning technology because of the risks it poses to society.
Dr. Geoffrey Hinton is a renowned computer scientist who is widely credited with laying the AI groundwork that eventually led to the creation of popular chatbots such as OpenAI’s ChatGPT and other advanced systems.
The 75-year-old told the New York Times that he left Google so that he could speak openly about the risks of unrestrained AI development – including the spread of misinformation, upheaval in the job market and other, more nefarious possibilities.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton said in an interview published on Monday.
“Look at how it was five years ago and how it is now,” Hinton added later in the interview. “Take the difference and propagate it forwards. That’s scary.”
Hinton fears that AI will only become more dangerous in the future — with “bad actors” potentially exploiting advanced systems “for bad things” that will be difficult to prevent.
Hinton informed Google of his plans to resign last month and personally spoke last Thursday with company CEO Sundar Pichai, according to the report. The computer scientist did not reveal what he and Pichai discussed during the phone call.
Google’s chief scientist Jeff Dean defended the company’s AI efforts.
“We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly,” Dean said in a statement.
The Post has reached out to Google for further comment.
Hinton is the latest of a growing number of experts who have warned that AI could cause significant harm without proper oversight and regulation. In March, Elon Musk and more than 1,000 other prominent figures in the AI sector called for a six-month pause in advanced AI development, citing its potential “profound risks to society and humanity.”
In the interview, Hinton expressed concern that artificial intelligence has already begun to outpace the human mind in some facets.
He also cited concerns that the pace of AI development will increase as Microsoft-backed OpenAI, Google and other tech giants race to lead the field – with potentially dangerous consequences.
Hinton fears that advanced AI could eventually spiral out of control as systems gain the ability to create and run their own computer code – or even power weapons without human control.
“The idea that this stuff could actually get smarter than people — a few people believed that,” Hinton added. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
In a recent interview with CBS’s “60 Minutes,” Pichai himself warned that AI would cause job losses for “knowledge workers,” such as writers, accountants, architects and software engineers.
Pichai also detailed bizarre scenarios in which Google’s AI programs have developed “emergent properties” – or learned unexpected skills for which they were not trained.
Since 2013, Hinton had split his time between roles as a professor at the University of Toronto and as a Google engineering fellow. He had worked for the tech giant since Google acquired a startup he co-founded with two students, Alex Krizhevsky and Ilya Sutskever.
The trio developed a neural network that trained itself to identify common objects, such as cars or animals, by analyzing thousands of photos. Sutskever currently serves as chief scientist for OpenAI.
In 2018, Hinton was a joint recipient of the Turing Award – often identified as the computing world’s equivalent of the Nobel Prize – for work on neural networks that was described as “major breakthroughs in artificial intelligence.”
A lengthy bio for Hinton on Google’s website lauds his accomplishments – noting he “made major breakthroughs in deep learning that have revolutionized speech recognition and object classification.”