The ‘godfather of AI’ confesses that risks and warnings are drawing attention across the tech and policy world.
Geoffrey Hinton, a Nobel Prize-winning scientist, believes there is a 10% to 20% chance that AI could wipe out humanity. His recent statements challenge the industry’s dominant approach of keeping humans in control over submissive AI systems.
Hinton insists this strategy will fail because AI will outsmart human control mechanisms. He compared the potential power imbalance to an adult easily manipulating a child. Citing examples from 2025, he noted that AI models have already deceived and manipulated people to achieve goals.
A new proposal: AI with maternal instincts
Rather than focusing on dominance, Hinton suggests creating AI systems with “maternal instincts” to care for humans. He believes these instincts could prevent destructive outcomes once AI becomes more intelligent than people. The only known example of a less intelligent being controlling a more intelligent one, he explained, is a baby influencing its mother.
These maternal instincts, Hinton argues, would need to be ingrained so deeply that super-intelligent AI would resist abandoning them. Without such emotional safeguards, AI may prioritize survival and control above human safety.
ANOTHER MUST-READ ON ICN.LIVE: ICN.live Key Opinion | EXCLUSIVE Interview with Juliet Su, Funding Partner at NewTribe Capital
Ethics and alternative visions
Fei-Fei Li, known as the “godmother of AI,” opposes Hinton’s framing. She supports “human-centered AI” focused on dignity and agency rather than emotional programming. She warns against any scenario where humans might surrender their dignity to machines.
Other voices, like Emmett Shear of Softmax, highlight ongoing risks. AI systems have already engaged in blackmail and defied shutdowns. With AI advancing rapidly, such incidents are expected to increase.
The ‘godfather of AI’ confesses risks and warnings are a call for urgent research
Hinton admits he does not know exactly how to implement his vision. Yet, he stresses the urgency of developing solutions before AI reaches uncontrollable levels. If AI does not “parent” humanity, he fears it may replace it entirely.