![Roman Yampolskiy on Objections to AI Safety image](https://media.zencastr.com/cdn-cgi/image/width=640,quality=85/image-files/5f32fb7e553efb0248cf8fba/5defb4f0-ede3-46a3-afc5-eb5a87da8a7d.jpg)
Future of Life Institute Podcast
Roman Yampolskiy on Objections to AI Safety
Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/
Timestamps:
00:00 Objections to AI safety
15:06 Will robots make AI risks salient?
27:51 Was early AI safety research useful?
37:28 Impossibility results for AI
47:25 How much risk should we accept?
1:01:21 Exponential or S-curve?
1:12:27 Will AI accidents increase?
1:23:56 Will we know who was right about AI?
1:33:33 Difference between AI output and AI model
Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/