
“The dangers are human, not AI. What’s dangerous is what a human does with AI, not what the AI does itself. In fact, even the idea that there is such a thing as the AI in itself is a mistake.” — Keith Teare
I’m in Korea this week. So rather than doing a traditional one-on-one That Was the Week tech summary, Keith Teare and I are trying something different. We invited Jonathan Rauch — Brookings Institution senior fellow, serial author and one of the most rigorous minds in Washington — onto the show to discuss AI.
Rauch had a simple mission. He wanted to find out why Keith Teare is just about the only person in the universe who believes that AI is benign. Jon had five buckets of doom to dump on Keith: labour market disruption, political upheaval, mental health and cognition, malicious actors, and the biggest daddy of all — AI developing consciousness, setting its own agenda, and killing everyone (even Keith).
But Keith maintained his Yorkshire stoicism under intense scrutiny from the analogue Rauch machine. AI is a word-counting machine, he explained. Large language models train on words, not experience. They split words into a probabilistic graph of correlations. When you ask a question, a large statistical engine fires, word by word. In that sense, he says, AI is no cleverer than a calculator. The idea that it has awareness, consciousness, or a plan is mythological. What’s dangerous is what a human does with AI, not what AI does itself. The dangers, he says, are human.
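To make that mechanics-level claim concrete, here is a toy sketch of the “word-counting machine” in Python: a bigram model. It is a drastic simplification (real LLMs are transformers over subword tokens, not word-pair counters, and the corpus below is invented for illustration), but it shows the move Keith describes: count correlations between words, then emit output one word at a time by sampling from those counts.

```python
import random
from collections import Counter, defaultdict

# A tiny invented corpus; real models train on trillions of words.
corpus = ("the dangers are human not AI "
          "the dangers are what humans do with AI").split()

# "Probabilistic graph": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to observed counts."""
    words, weights = zip(*follows[word].items())
    return random.choices(words, weights=weights)[0]

# Generation: a statistical engine firing, word by word.
word, output = "the", ["the"]
for _ in range(8):
    if not follows[word]:   # dead end: word never seen mid-sequence
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Nothing in that loop knows what the words mean; it only records which words tend to follow which. That, scaled up enormously, is the sense in which Keith calls AI a very big, very fast calculator.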
Jon wasn’t entirely reassured (his Brookings brand is scepticism, after all). What worries him most is that humans will handle these technologies irresponsibly. On that, he and Keith agree. The short-term labour disruption will be significant. White-collar service provision (legal, accounting, junior consulting) is already going. Jobs will go too. Work, Keith insists, will not. But nobody in politics is having the conversation about what comes next. Not JD Vance. Not AOC. Only Keith and Jon.
Five Takeaways
• AI Is a Word-Counting Machine: Keith’s core argument is that large language models train on words and only words. They split those words into a probabilistic graph: how close is word A to word B? When you ask a question, a large statistical engine fires, producing output word by word. There is no awareness. There is no consciousness. There is no plan. The idea that such a system could develop its own agenda is mythological. It’s no cleverer than a calculator; it is just a very big, very fast calculator. Rauch’s counter: the brain is also just dumb neurons, and we get emergence from dumb neurons. Keith’s reply: what the AI can do is constrained by what humans allow it to do. The agency is human.
• Doomerism as Business Model: Before engaging with any specific AI doom argument, Keith signals a prior: whenever there is ambiguity in a major technological change, a business model emerges to monetise doubt. It was true of nuclear power. It was true of climate change. It is true of AI. This doesn’t mean the fears are groundless; they wouldn’t sell if they weren’t plausible. But it does mean they should be approached with prior scepticism. The doom argument works precisely because AI genuinely contains possible negative outcomes. The business model packages and amplifies those possibilities beyond their actual probability.
• The Guardrails Are Human: Keith’s metaphor is that AI sits in a prison where humans decide what the doors are. If you give it access to email, it can email. If you don’t, it can’t. It cannot take actions it has not been permitted to take. The word “guardrails” is commonly used, and it’s apt: the constraints on what AI can do are human-made, and humans decide where they sit.
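In the same spirit, a minimal sketch of the prison-door idea: an agent that can only invoke tools a human has explicitly allowlisted. Everything here (ALLOWED_TOOLS, invoke, the toy tools) is hypothetical, not any real framework’s API; it just illustrates that the doors are human decisions.

```python
# The human decides which doors exist; the model cannot add to this set.
ALLOWED_TOOLS = {"search"}

def search(query: str) -> str:
    return f"results for {query!r}"

def send_email(to: str, body: str) -> str:
    return f"emailed {to}"

TOOLS = {"search": search, "send_email": send_email}

def invoke(tool_name: str, *args: str) -> str:
    """Run a tool only if a human has opened that door."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"{tool_name} is not permitted")
    return TOOLS[tool_name](*args)

print(invoke("search", "guardrails"))      # allowed: this door is open
# invoke("send_email", "a@b.com", "hi")    # raises PermissionError: no door
```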