
“Doing science is like reading the mind of God.” — Demis Hassabis, quoted in The Infinity Machine
This week’s uncomplimentary New Yorker profile of OpenAI’s CEO is titled “The Many Faces of Sam Altman.” But not all AI leaders are quite as many-faced as slippery Sam. Take, for example, Demis Hassabis, the North London–based co-founder and CEO of Google’s DeepMind. In his new biography, The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence, the British journalist Sebastian Mallaby argues that Hassabis is, in contrast, one-faced. And that face is not only decent, but informed by the enlightened ethics of Baruch Spinoza and Immanuel Kant.
Mallaby presents Hassabis as the anti-Altman. He has stayed at DeepMind for sixteen years, lived in the same London house, and driven the same decade-old car. Rather than power, Google’s AI supremo seeks scientific enlightenment. Like Spinoza, his God is the master watchmaker of the universe. And so doing science, Hassabis explained to Mallaby in one of their many conversations in the backroom of a North London pub, is like reading the mind of God. Decent Demis. Honest Hassabis. Let’s just hope this modest and thoughtful tech leviathan can bring Kantian ethics to Silicon Valley’s sprint for artificial general intelligence.
Five Takeaways
• Hassabis Is the Anti-Altman: Sam Altman has managed to annoy almost everyone he’s worked with by saying one thing and doing the opposite. Hassabis has run DeepMind continuously for sixteen years, lives in the same house in Highgate, drives a decade-old car, and spends his discretionary money on Liverpool season tickets. He doesn’t want power. He wants scientific enlightenment. Mallaby uses the word advisedly.
• Doing Science Is Like Reading the Mind of God: Hassabis is a Spinozan. The god he believes in is the god Einstein talked about — the fabric of reality understood through scientific inquiry. He reads Kant, he reads Spinoza, he reads widely enough to be a proper polymath. Mallaby sat with him in a Highgate pub for more than thirty hours. What he found was not a Silicon Valley sociopath but an enlightenment figure who thinks AI is the modern version of the telescope.
• The Szilard Pedestrian Crossing: Mallaby asked Hassabis what it felt like to set up DeepMind in 2010. Instead of the usual vague answer, Hassabis painted the scene: the attic office on Russell Square, the heat, the stairs, the greenery outside, the London Mathematical Society three doors down where Turing lectured, and the zebra crossing where the Hungarian physicist Leo Szilard conceived of the nuclear chain reaction in the 1930s. The perfect metaphor: DeepMind as the modern Manhattan Project.
• The Two Categories of Things That Go Wrong: There’s the idiot-in-charge category — an evil or stupid person making bad decisions, and you could swap them out. Then there’s the structural category: a good person trying their best, defeated by larger forces they cannot control. Hassabis is category two. He wants to make AI safe, but race dynamics between American and Chinese labs make safety nearly impossible to deliver. The failure of governments to intervene is the real story, not individuals.
• The Go Players Who Quit: When AlphaGo beat the best players in the world, some professional Go players retired — centuries of accumulated human understanding devalued overnight. Others kept playing, using the machine as a tutor to discover patterns they’d never seen. Two responses to superintelligence in one domain. One is mourning. The other is curiosity. Mallaby thinks the second response is the only one worth having. Hassabis agrees.