
“Let’s just say it out loud,” Keith Teare, publisher of the That Was the Week newsletter, says. “AI is not dangerous.”
Not all of you will agree. I’m certainly not so sure. But the gruff Yorkshireman is convinced that AI can only benefit humanity. For him, with his scientific faith in historical progress, today’s AI revolution is a glorious combination of the Enlightenment and the industrial revolution. The only danger, he warns, is the belief in danger itself. Thus his criticism of Anthropic’s Dario Amodei, who has been quite explicit about AI’s dangers — and for whom the doom narrative is, in Keith’s reading at least, designed as a business strategy to solicit governmental backing without government control.
AI Is Not Dangerous. Repeat it. Take your ideological medicine. As if you’re in a Silicon Valley seminary. Sing it out loud. As if you’re in a Methodist choir. Believe it now?
Five Takeaways
• The Economist’s “Lowlife” Moment: Keith’s editorial was triggered by The Economist’s forty-five-minute video on the five men running AI — the title alone, “How to Control the Men Who Control AI,” was enough. Why would The Economist think it could control them? And why focus on the personalities rather than the technology, the applications, or the actual human impact? Judging the AI industry by its CEOs is like judging a film by the leading actor’s personality rather than the script or the performances. It’s the wrong focus — and in Keith’s view, a low one for a publication that should know better. The cult of personality is a media creation, feeding on controversy because controversy sells subscriptions.
• AI Is Not Dangerous. Full Stop. Keith’s boldest claim: AI is not dangerous — not a little, not potentially, not in the wrong hands. The doom narrative is a media-driven frenzy, fed by CEOs who give it too much airtime and by a ready-made audience of Americans whose well-founded economic pessimism makes them receptive to negative messages. The Stanford AI Index Report shows that America is the country where AI is trusted least — paradoxically, also the country where media has the greatest influence. In China, people trust AI more, not because the government tells them to, but because economic progress gives them reasons for optimism. You get what you pay for.
• Amodei’s Pitch Disguised as Science: Keith’s reading of Dario Amodei’s doom narrative: it is a business strategy. The message — AI might kill us all, AI might make us all unemployed — is not a scientific assessment. It’s a pitch for Anthropic specifically: if AI is this dangerous, you can’t let anyone else control it, so trust us and give us government backing without government oversight. Contrast with Demis Hassabis, who acknowledges risk and then immediately explains what he’s doing about it — taking responsibility rather than pointing the finger. And contrast with Zuckerberg, whom Keith describes as sociopathic: “whatever serves my interest is gonna come out of my mouth at any given moment.”
• Consensus Capital and the Winner-Take-All Endgame: Keith’s post of the week: 75% of all venture capital raised goes to five funds, and 75% of all VC investment goes into five companies. Noah Smith’s piece on winner-take-all AI makes the same point from a different angle: linear extrapolation suggests two, maybe five, companies end up with all the money and power. This is what capitalism does — many car companies became a handful, many banks became a handful. AI will produce the same centralisation, but at unprecedented scale and across every domain simultaneously. The question — how does society benefit? — is the most important question of the era. Altman and Musk at least try to answer it.