![Joe Carlsmith on How We Change Our Minds About AI Risk image](https://media.zencastr.com/cdn-cgi/image/width=640,quality=85/image-files/5f32fb7e553efb0248cf8fba/2d66c965-c11f-4110-bf55-efc115e6a8a2.jpg)
Future of Life Institute Podcast
Joe Carlsmith on How We Change Our Minds About AI Risk
Joe Carlsmith joins the podcast to discuss how we change our minds about AI risk, gut feelings versus abstract models, and what to do if transformative AI is coming soon. You can read more about Joe's work at https://joecarlsmith.com.
Timestamps:
00:00 Predictable updating on AI risk
07:27 Abstract models versus gut feelings
22:06 How Joe began believing in AI risk
29:06 Is AI risk falsifiable?
35:39 Types of skepticism about AI risk
44:51 Are we fundamentally confused?
53:35 Becoming alienated from ourselves?
1:00:12 What will change people's minds?
1:12:34 Outline of different futures
1:20:43 Humanity losing touch with reality
1:27:14 Can we understand AI sentience?
1:36:31 Distinguishing real from fake sentience
1:39:54 AI doomer epistemology
1:45:23 AI benchmarks versus real-world AI
1:53:00 AI improving AI research and development
2:01:08 What if transformative AI comes soon?
2:07:21 AI safety if transformative AI comes soon
2:16:52 AI systems interpreting other AI systems
2:19:38 Philosophy and transformative AI
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/