In this episode of Event Horizon, John Michael Godier is joined by Dan Hendrycks, director of the Center for AI Safety and one of the leading voices in artificial intelligence risk research. Together, they explore the growing concern that advanced AI systems may already possess the capacity for deception, manipulation, and even self-directed escape from containment.

Links:
https://safe.ai/act
newsletter.safe.ai
ai-frontiers.org
nationalsecurity.ai
https://x.com/DanHendrycks

0:00 Introduction: Dan Hendrycks and the Center for AI Safety
8:00 The Risks and Realities of Deceptive AI
16:00 Potential AI Escape Scenarios and Societal Consequences
23:30 AI's Developing Psychology and Coherence
31:00 Agent-Based AI: Goals, Autonomy, and Open-Ended Tasks
38:30 Risks of Competitive AI Development and Surveillance Challenges
46:00 Weaponized AI and International Security Concerns
54:00 Geopolitical Dynamics and AI Arms Races
1:01:30 AI Safety and Lessons from Nuclear Deterrence
1:09:00 Containment and Control: How Realistic Is It?
1:17:00 Employment, Economics, and AI's Broader Impact
1:25:00 Societal Instability: AI, Misinformation, and Public Trust
1:33:00 Aligning AI: Approaches, Challenges, and International Collaboration
1:39:30 The Future of Regulation and Responsible AI Governance

YouTube Membership: https://www.youtube.com/channel/UCz3qvETKooktNgCvvheuQDw/join
Podcast: https://creators.spotify.com/pod/show/john-michael-godier/subscribe
Apple: https://apple.co/3CS7rjT
More JMG: https://www.youtube.com/c/JohnMichaelGodier

Want to support the channel?
Patreon: https://www.patreon.com/EventHorizonShow

Follow us at other places!
@JMGEventHorizon

Music:
https://stellardrone.bandcamp.com/
https://migueljohnson.bandcamp.com/
https://leerosevere.bandcamp.com/
https://aeriumambient.bandcamp.com/

FOOTAGE:
NASA
ESA/Hubble
ESO - M. Kornmesser
ESO - L. Calcada
ESO - Jose Francisco Salgado (josefrancisco.org)
NAOJ
University of Warwick
Goddard Visualization Studio
Langley Research Center
Pixabay