The internet taught everyone to self-diagnose. AI made it faster, more persuasive, and significantly more dangerous.
Dr. Ajit Barron-Dhillon — ER physician, military veteran, and someone who has watched patients demand MRIs for minor complaints because "the internet said so" — joins Jason to talk about what AI-assisted health research actually does to people who think they're being smart about it.
The conversation covers confirmation bias in clinical settings, supplement stacks optimized by ChatGPT, the cheerleader problem in medical AI, and why above-average intelligence may make you more vulnerable with these tools, not less. If you use AI or Google to research your health, this conversation is specifically for you.
Topics Discussed
- Why AI self-diagnosis is dangerous specifically for informed, health-conscious people
- What ER physicians are actually seeing when patients arrive with internet-sourced diagnoses
- How confirmation bias turns AI research into an expensive form of being wrong
- When AI-assisted supplement optimization is useful — and when it's not
- Why peer-reviewed research and AI training data are not the same thing
- What a responsible approach to AI health research actually looks like
Chapters
- 0:00 — Jeremy's Intro: Sick and Googling While Hosting an AI Health Episode
- 1:17 — Kids Unplugging: Why In-Person Dating Is the New Counterculture
- 2:40 — The No-Wi-Fi Coffee Shop and What the Internet Can't Tell You
- 9:47 — I Let ChatGPT Optimize My Supplement Stack. Here's What Happened.
- 11:59 — The Telemedicine Loophole: AI + Social Engineering for Prescriptions
- 14:25 — Why Your Doctor Doesn't Know What You're Supplementing
- 20:16 — NIH PubMed Is Being Scrubbed — and Why That Matters
- 28:40 — She's Not Fighting Logic. She's Fighting Belief.
- 32:58 — Star Trek, Dr. McCoy, and the Tricorder We're Almost Building
- 37:11 — What a PubMed-Only AI Would Actually Look Like
- 44:58 — The Tool Gets You 80% There. The Human Closes the Gap.