
Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light

Future of Life Institute Podcast
4.7k Plays · 5 months ago

Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com  

Timestamps: 

00:00 US-China competition and risk  

18:01 The security dilemma  

30:21 Official and unofficial diplomacy 

39:53 Hotlines between countries  

01:01:54 Preventing escalation after war  

01:09:58 Catastrophic biological risks  

01:20:42 Ultraviolet germicidal light 

01:25:54 Ancient civilizational collapse

Transcript

Introduction to Catastrophic Risks

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Christian Ruhl. Christian is a senior researcher at Founders Pledge, where he focuses on global catastrophic risks. Christian, welcome to the podcast. Thanks so much for having me on, Gus. I should also mention for our audience that I have a slight stutter, so you might hear that on the call. You might hear some pauses. No

US-China Competition and AI Risks

00:00:25
Speaker
problem. Okay, you have argued that competition between the US and China is a risk factor for other risks, specifically risks from transformative technologies like AI. So how are these two categories of risks connected? How would increased competition between the US and China increase risks from transformative technologies?
00:00:49
Speaker
That's a great question. So there are a number of ways that more intense competition between great powers like US-China competition can increase the risk from AI and other transformative technologies. The first is that historically, some of the most dangerous research has come from military programs during times of tension, which were spurred by competition between major powers. The Soviet bioweapons program is a great example here.
00:01:21
Speaker
Second, and I think this one is underappreciated by some people in the field, so much of our planning for good governance of high-consequence tech like AI relies on at least some baseline level of international cooperation. If you think, as I know many listeners of this podcast do, that the coming years might be pivotal for our long-term future, and that international cooperation around these powerful technologies is a big part of that, agreeing, for example, on certain red lines that no country will cross, then you have to realize that more intense competition can totally derail all these efforts.

Historical Military Program Risks

00:02:05
Speaker
So, for example, DNA synthesis screening
00:02:09
Speaker
is great if you implement it in one country, but if terrorists can just go to another country to order their pathogen, then you've just shifted the risk rather than actually reducing it. Next, I think we've been fairly lucky so far in AI competition in that it's been mostly private companies, but major countries like the US and China just have vastly more resources. So if you saw a Manhattan Project or Apollo program scale AI project backed by a state like the US or China, it's just a completely different risk landscape. And it's another reason to really worry about this competition.
00:02:53
Speaker
Relatedly, there are certain dynamics that look like a race to be first, where safety and security take a backseat to capabilities. That's when we can really start talking about something like an AI arms race. Some people don't like the term arms race, but it has the same basic features. This race to the bottom has two implications, I think: on the one hand, accidents, and on the other hand, proliferation. On accidents, we know from the Cold War and World War II that this happens a lot: when countries compete in the military domain, they're willing to accept more accidents.
00:03:33
Speaker
So in nuclear weapons, the 1954 Castle Bravo test contaminated the Marshall Islands with nuclear fallout and gave radiation sickness to the crew of a Japanese fishing vessel. Then there's the Sverdlovsk anthrax leak in 1979, where a bioweapons plant leaked anthrax into the surrounding town. Again, the kind of safety features that might get implemented under less intense competition don't get implemented, because you're trying to compete, you're trying to be first.
00:04:11
Speaker
Similarly, on the other hand, you can have proliferation, less concern about the tech falling into the wrong hands. I think the spies in the Manhattan Project are a good example of this, because again, the main goal you're optimizing for is being first in the race. So when you're talking about high-consequence tech like synthetic bio and AI, accidents and proliferation can really cause global catastrophe. What are you worried about here? Are you worried that competition will push China and the US to be more aggressive and lead to conflict in some sense, or are you mostly worried about accidents and proliferation?
00:04:58
Speaker
Yeah, that's a really good question. So basically, so far, everything we've talked about is competition without war, right? None of these things would actually require active conflict to break out. All of them could, and in fact did, happen during the Cold War.

Major Power Wars and Technology Investment

00:05:13
Speaker
So Stephen Clare and I wrote a report, co-published by Founders Pledge and CG, called Great Power Competition and Transformative Tech, that goes into this in much more detail. Stephen also has other reports that explain much of what I said. But yeah, obviously, a full-on conflict could make all this even worse. So, on the one hand, a new war between, say, the US and China, or the US, China, and
00:05:39
Speaker
Russia, could be much, much worse than even World War II. It could see major nuclear war, bioweapons, autonomous weapons, all killing potentially hundreds of millions of people and more, maybe leading to civilizational collapse. So you can imagine that over the course of such a war, states suddenly start being more interested in developing powerful weapons. Those might be a new WMD that we haven't even thought of, or they might be, say, bioweapons, because those are potentially cheaper to make. And, you know, in
00:06:10
Speaker
the situation where leaders find themselves in a corner without much good advice, they might think that kind of weapon could have a game-changing effect on the war. Similarly, a country might see itself as losing and decide to invest its resources in a last-ditch effort to develop AGI to turn the tide of the war. At that point, safety just goes completely out the window, even more so than it would during a cold war. And then value-system change, and the long-term trajectory of our civilization, is, I think, another reason to really worry about the effects of this transformative tech. Yeah. What do you mean by value changes? Why would you worry about our values changing? Yeah. So I think broadly, when we've seen major wars happen in the past, one of the features is that
00:07:03
Speaker
the political systems going into the war often look very different from the political systems coming out of the war. And you might think, for example, that if a major war between the US and China occurs at this time when we're developing very powerful tech, that could potentially lock in certain values; certain value systems might win out. You might see the emergence of a global totalitarian hegemon, either because one country wins or because the governments that emerge after the war are more likely to survive if they have authoritarian features. And actually, if you look at Cold War planning about what would have happened to
00:07:45
Speaker
the U.S. government, the COOP and COG planning, continuity of operations and continuity of government, a lot of that would have basically involved, well, a good example here is Eisenhower, where he basically just had his friends take over major national industries. That's kind of how things would have worked after a nuclear war, because it's easier to administer. But I'm not confident that liberal democratic values, human rights, many of the things that we cherish and think might be valuable for the long-term flourishing of civilization, that those would actually survive a war.

Understanding Arms Races and US-China Dynamics

00:08:22
Speaker
We have the possibility of war between great powers on the one hand, and we have what we could call normal competition on the other hand.
00:08:29
Speaker
where normal competition has the risks from accidents and proliferation, and there are further risks from all-out war between great powers. In what sense do we have an arms race going on? It doesn't seem obvious to me that the US and China would have to be in any form of military conflict. They are far from each other geographically, and they don't have to compete militarily, even though they might be competing in a regular sense: competing to build the best technologies, competing to have the most valuable companies, and so on. So in what sense would it be true to say that an arms race is going on?
00:09:09
Speaker
Yeah, so I'll take this question in two parts. First, why might we talk about an arms race? And then second, why would these countries even compete or potentially go to war in the first place? So first, I think the main way we can distinguish an arms race from competition in general is to say an arms race focuses on military capabilities. A lot of international relations academics get upset when you talk about something like an AI arms race, because they're kind of pedantic about the term and think "arms race" should only be applied to weapons, and not to general-purpose tech like AI. But there are some kinds of arms-race-like dynamics
00:09:51
Speaker
between the US and China, mostly in that both seem to be worrying about a potential future conflict, for example over Taiwan, and they're investing in military capabilities. If you look at recent US strategic documents, they explicitly call China the, quote, "pacing threat," end quote, for its military capabilities, meaning this is what the US is trying to keep pace with. So I think we're seeing maybe the first rumblings of an arms race most clearly in the nuclear domain. The US is modernizing its arsenal, and China seems to be pretty aggressively expanding its arsenal. So they surpassed, I think, 400 warheads last year, or hit like 500 this year,
00:10:37
Speaker
They're expected to hit like 1,000 by 2030, potentially 1,500 by 2035. If you look at crowdsourced predictions of this, Metaculus, the crowdsourced forecasting platform, gives about a 54% chance of reaching 1,000 warheads by 2030, last time I checked. So recently, a lot of people in the US have argued that in response to this, the US in turn needs to expand its own arsenal. In that debate, I think, we're starting to see the sparks of a new arms race. What is the difference between modernizing your arsenal and expanding your arsenal? Could the US be modernizing in a way that expands the arsenal? Is there a straightforward difference between the two?
00:11:19
Speaker
Yeah, so in a sense, I think many people would insist that modernizing is different, and I think the US does. On the other hand, it can be kind of a euphemism for pursuing

Philanthropy and US-China Relations

00:11:28
Speaker
new capabilities that might actually seem threatening to the other side. And I think we'll get into this maybe later, but there are certain dynamics that to one side seem purely defensive but to the other side seem threatening, and that can spark this spiral. But to go back to the arms race question: I think we can also think more broadly about the pursuit of technological superiority as an activity with some arms-race-like dynamics that might also be worth talking about. You're seeing some of this in the discussion around national competitiveness in AI. But luckily, as I said earlier, we're not yet in a real AI arms race that's as intense as it could be. Although both countries are investing in AI capabilities, and although both refer to the other as the big AI power, I think so far this hasn't yet reached the intensity that it could.
00:12:15
Speaker
And we should try to make sure that it doesn't. But yeah, what drives all this is this climate of distrust, where US strategy calls China the pacing threat, and then China starts thinking the US wants to contain it. This starts being popular with domestic audiences; you have to be tough on China to get elected. And the important point is that it might be really tough for governments to get out of this on their own. So I think it's a good area where philanthropists can be a third voice for moderation. It's one of the things we're trying to do with the GCR fund at Founders Pledge. So that's the question about the arms race. What about the question of why the US and China would even be in potential conflict in the first place, given that they're so far apart?
00:13:01
Speaker
I mean, naively, you might think these countries are so far apart that there's no reason for them to engage in conflict. To what extent is that true? Also a great question. I think protracted war, war that stretches out over years, is especially scary and might make this a lot worse. But to your question: at one level, you know, this is true; they're far apart and could each flourish in their own spheres. And we know from the international relations literature that sharing borders and having territorial disputes does raise the risk of conflict, so you might be worried about India and China. But at the same time, leaders and analysts on both sides, in the US and China, continuously talk about the risk of war. Why is that? It's similar, I think, to the situation during the Cold War
00:13:52
Speaker
with the US and the Soviet Union. And China is a rising power. The US is the established power. In general, it seems to be the case that power transitions of this sort raise the risk of war. So the rise of China could be a worrying trend here. But again, great power states are states with global interests. So America's interests don't stop at the borders of the US. The American military has bases all over the world. US territories span the globe, Guam, the Northern Mariana Islands. I think there are like 11 territories in the Pacific alone. And then there are US allies and partners who rely on the US for their security. And of course, we know from World War I how
00:14:34
Speaker
alliance dynamics can pull countries into wars that nobody really wants. Maybe North Korea and South Korea are a good example of that, where a war could easily spill over China's borders. And just as we saw during the Korean War, something like that could easily pull in the US. The US also has an interest in upholding the status quo, which includes the idea of freedom of navigation; that basically requires patrolling international waters, conducting these freedom of navigation operations. These global interests sometimes clash, as they do over Taiwan or the South China Sea, and war is just one way of settling these disputes.
00:15:14
Speaker
So yeah, maybe for listeners of this podcast, we can think of the machinery of a great power, the military-industrial-bureaucratic complex, as a type of artificial intelligence. There's this fascinating 2022 paper from CSET by Richard Danzig called "Machines, Bureaucracies, and Markets as Artificial Intelligences" that makes a more general version of the argument I'm about to make. But basically, both sides have this massive apparatus of millions of minds and computers linked together, sensors around the world, satellites everywhere, missiles ready to launch, information aggregation mechanisms. And taken as a whole, this machine can process information
00:15:59
Speaker
and create new insights in a way that far surpasses human abilities. This is partly what has allowed modern states to become so powerful. But of course, superintelligences are really, really hard to control and hard to align with human values. And in this domain, we might give this bureaucratic superintelligence the goal of "keep us safe." Nobody really understands its inner workings; even the wonkiest of policy wonks don't understand the US national security complex fully. If you could understand it, we wouldn't need it. And sometimes giving it that goal leads the machine to take actions that seem to be in line with the goal but are actually very far from what we ultimately want, and it sometimes slips out of our control completely. So I think, again, nobody really wanted
00:16:42
Speaker
World War I. It just kind of happened because of this machine and all these arms races that everyone lost control of. And there's this book, The Guns of August, that JFK had actually read just before the Cuban Missile Crisis. At the end, there's a quote from one German leader to another, asking, you know, how did it all happen? And the other one says, ah, if we only knew. So these things just kind of sometimes happen, and great powers often go to war with each other even if they don't have direct border disputes.
00:17:18
Speaker
Yes, and so to your point, it

International Relations: Security Dilemma and Solutions

00:17:20
Speaker
might be. So what I'm asking is: why would they go to war with each other when they don't share a border, when they're so far apart, when they could flourish separately in their own respective regions of the world, and when it would be extremely costly for both parties to engage in war, with an uncertain outcome too? But the point you're making is that these decisions aren't always the result of rational deliberation and thinking about expected values and so on. It's a very complex event where no one is fully in control. And that's a scary thing to remind yourself of, I think.
00:18:02
Speaker
So we've touched upon the security dilemma in the context of the US modernizing, or perhaps expanding, its nuclear arsenal, and China responding by expanding its nuclear arsenal. So perhaps briefly, what is the security dilemma, and does it apply to AI also? Yeah, great question. This is a very fundamental idea in international relations that helps to explain a lot of the dynamics we might worry about, including on AI. So a security dilemma refers to a situation where
00:18:41
Speaker
one state's defensive actions to increase its security create feelings of insecurity in a rival state, which in turn reacts by preparing for the worst interpretation of the first state's actions, ultimately undermining everyone's security and sometimes sparking arms races. So again: a state feels insecure and takes security actions; the rival state views those actions with suspicion, feels less secure, and fears offensive intent and capabilities; that state then responds with its own actions, triggering this kind of vicious cycle. And I think it's this very tragic dynamic that we see throughout history. I think the Soviet bioweapons program is a good example of this. We have these quotes from people who fled the Soviet Union after the Cold War, where the Soviet Union
00:19:30
Speaker
not only continued but expanded its bioweapons program after the Biological Weapons Convention was signed. They were sure that US defensive actions were actually meant to be offensive. The US had actually shut down its program, but the Soviets felt insecure and saw the somewhat dual-use actions on the US side. And then, of course, the Soviets launched the biggest bioweapons program ever. So in trying to make themselves safer, fearing what the other side was doing, they entered into a dynamic that made the whole world worse. And again, I think we can pretty easily see how this might apply to AI, on the one hand for military capabilities: if one side feels that pursuing AI might provide some sort of strategic
00:20:16
Speaker
advantage on the battlefield, the other one might very likely do the same thing. Or even if you pursue something for purely defensive or even civilian purposes, that's not always easy to tell, especially if it's conducted in secret. And if you suspect the other side is launching a large program, you're likely to compete with the most conservative, in this case the most extreme, interpretation of what that uncertainty might mean. What could be done to alleviate the effects of the security dilemma? What options do we have on the table?
00:20:54
Speaker
Yeah, this is kind of the million-dollar question. It's something we think a lot about from the philanthropic side: what could we do from the Global Catastrophic Risks Fund to disrupt these forces that make everyone less safe? Again, I would point people back to that report that Stephen and I wrote. One thing you could potentially do, if you start thinking about this, is fund organizations that advocate for unilateral restraint at home. I'm actually a bit skeptical about this. I think it's already received a lot of funding from major philanthropists and traditional peace and security funders, so this might be less interesting to somebody who's looking to fund the highest-value things on the margin. And it's also not obviously without downside, because sometimes there are real threats that countries need to defend against. So what else can we do? A second one is kind of complicated. You could fund
00:21:45
Speaker
improved threat assessment to decrease your uncertainty about your adversary's capabilities. One concrete option for this might be implementing probabilistic forecasting in the intelligence community and advocating for the use of prediction markets and forecasting aggregation platforms throughout the national security complex. I'm happy to unpack a little bit why that might help. Yeah, I mean, my first thought there is just to worry that this might play into the security dilemma itself. It might be an advantage for one country to have uncertainty around its military capabilities,
00:22:27
Speaker
and if another country investigates and thereby decreases the uncertainty about which military capabilities the other country has, well, that might be seen as a kind of offensive action, in the sense of being part of a conflict. Do you worry about that? Do you worry about more knowledge being a bad thing in this case? Yeah, so I think one kind of structural reason to think that more knowledge might not be a bad thing is that when you're in this situation of high uncertainty and high suspicion and high mistrust, you're not competing against the best guess of what the adversary is doing. You're competing against the worst plausible interpretation of what the adversary is doing. So there's an asymmetry there, where improving
00:23:14
Speaker
your predictions and decreasing the uncertainty range might be the smarter thing. When you're suspicious of the adversary, wary of bad intent, and uncertain about their capabilities, the smart, conservative move is to compete against the higher end of your assessment of their capabilities. And with greater uncertainty, that range is going to be larger, and the higher end is going to be higher. This is especially true because the bureaucracy doesn't currently incentivize accuracy; it mostly doesn't even keep score of different analysts. So you might expect analysts to lean toward more alarmist interpretations, because there are some bureaucratic incentives that push them to do that. So if you improve this threat assessment process and you actually make accuracy part of the performance
00:24:00
Speaker
assessment, your uncertainty range decreases, the incentive for exaggeration decreases, and so hopefully the intensity of competition decreases. There's a paper from the University of Pennsylvania from a few years ago called "Keeping Score: A New Approach to Geopolitical Forecasting" that explains how exactly you would do this. Maybe we can try a historical example: the missile gap, the idea that the US was potentially behind the Soviet Union in missiles, really accelerated the nuclear arms race. You can imagine that if we'd just had more accurate ways of assessing capabilities, this idea might have found less traction; it might have been easier to argue against it. And you might have reason to think that this kind of work is actually really tractable, unlike advocating for
00:24:47
Speaker
restraint, because governments want to be more accurate. They don't want to waste money competing against threats that aren't there. One question is why this hasn't happened already. You mentioned the incentives that security analysts are working under, and that those incentives might push them toward more alarmist predictions because they aren't held to account for those predictions.
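The "keeping score" idea discussed here, tracking how accurate each analyst's probabilistic forecasts actually turn out to be, is typically implemented with a proper scoring rule such as the Brier score. A minimal sketch of how that scoring works; the analysts, events, and probabilities below are hypothetical illustrations, not data from the episode:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts (0..1) and
    binary outcomes (1 = event happened, 0 = it didn't).
    Lower is better; always guessing 0.5 scores exactly 0.25,
    so a useful analyst should beat that baseline."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Five hypothetical geopolitical events and whether they occurred.
outcomes = [1, 0, 0, 1, 0]

# An "alarmist" analyst calls almost everything a likely threat;
# a better-calibrated analyst spreads probabilities to match reality.
alarmist = [0.9, 0.8, 0.7, 0.9, 0.8]
calibrated = [0.7, 0.2, 0.3, 0.8, 0.1]

print(round(brier_score(alarmist, outcomes), 3))    # 0.358
print(round(brier_score(calibrated, outcomes), 3))  # 0.054
```

Once scores like these are part of performance assessment, the alarmist bias described above becomes visible and costly, which is the mechanism proposed in the "Keeping Score" paper mentioned below.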

Diplomacy and Communication in Crisis

00:25:09
Speaker
Why aren't they held to account? Why aren't we keeping track of who's best at predicting what is going to happen? That seems to be the core of the job of being a security analyst. Yeah. First of all, in recent years some countries have gotten better at this. The UK specifically had this program called Cosmic Bazaar, where people across the government were able to make
00:25:32
Speaker
probabilistic forecasts. The US has done something similar with INFER and other platforms. But yeah, why did this take so long? We've known about this for a long time. I think the bureaucratic incentives are the issue. You can imagine, if you're higher up in the bureaucracy and you're seen as this expert on China, and you studied in China and you say many things that are smart, but then once you start keeping score of how accurate your predictions actually are, it might turn out that the intern makes better predictions than you. So the entrenched interests in the bureaucracy, yeah, it might upset the status quo. I think that's one of the big reasons it hasn't been implemented. Okay, so we talked about ways to alleviate the security dilemma, and I think one of those ways might be to engage in what is called track two diplomacy. So first of all, what is track two diplomacy, and how might it help?
00:26:33
Speaker
Yeah, so track two diplomacy is this kind of unofficial back-channel diplomacy, contrasted with track one, which is the standard official state-to-state talks that we're all familiar with. In track two, you get experts, former government officials, people like that into a room, and you get them to talk about tough issues from different sides. And I think there are a bunch of reasons to think that this might work really well, which is part of the reason why Founders Pledge has provided funding to several track two dialogues between the US and China, including some on AI and recently one on the AI-bio intersection. And yeah, I'm personally quite excited about this as an intervention, and I think our funding of these kinds of dialogues makes us one of the larger funders in this space. Traditionally, people who fund track two dialogues can be a bit
00:27:27
Speaker
hand-wavy and fuzzy about the theory of change: you know, we'll talk and we'll all be friends and we'll stop making war. I don't think that's really a useful way of thinking about it, so maybe we can provide a bit more clarity here. One way is similar to forecasting: I think transparency and information exchange are a big part of track two dialogues. Often states like the US and China talk surprisingly little about these high-consequence issues, like AI, like bio, like nukes, but nobody wants to be caught in a security dilemma. So one thing you might do is use these back-channel dialogues to share information that might decrease suspicion. You might say, for example, hey,
00:28:12
Speaker
we don't have a secret Manhattan Project to build AGI, and here's how you can check and verify, because if we did have this project, you could observe certain trends in compute supply chains; you'd observe maybe some major AI scientists suddenly stop publishing. And in track two dialogues, experts can have very frank conversations at the sidelines, sometimes over drinks at the bar and so on. And very often these unofficial dialogues are conducted with the understanding that both sides feed information back to their host governments. So this transparency and information exchange can help disrupt some of the negative arms-race-like dynamics that we talked about earlier. The next reason to think these might be good is trust and confidence building. I think this one makes a lot of sense: people just start trusting each other more. This is the standard reason given for supporting these dialogues. There might also be some object-level problem solving. Especially if dialogues involve scientists and people with technical expertise, you might actually see people working together on big problems. AI safety dialogues, for example, might discuss key ideas around safety evaluations and red teaming, and the US and China might exchange best practices on these. You might expect
00:29:31
Speaker
that the US and China both care about preventing bioterrorism, so maybe they'll share information on access controls for powerful biotech, because you don't want that technology falling into the wrong hands. Going a little bit deeper, track two dialogues can also help build foundations for track one diplomacy. Sometimes track two dialogues move out of the shadows and mature into official government-to-government dialogues. An article just came out recently saying that the official AI talks between the US and China would not have happened had it not been for previous
00:30:08
Speaker
track two engagements. This is how it ideally happens, right? First you have track two diplomacy, which then turns into track one diplomacy. So it starts off unofficially and then becomes an official position for both countries afterwards. Maybe you could say a bit more about how this happens at a practical level. Is it actually people sitting in a bar over drinks, or how would something like this happen in the real world? Yeah, I'll explain that and then I'll get back to that earlier point, because sometimes track two dialogues keep going while track one is going on, and that can actually have really beneficial features. But yeah, on a practical level, how does this happen? Often you have think tanks hosting these dialogues. For example, we funded the Carnegie Endowment to host US-China dialogues, and
00:31:00
Speaker
those think tanks will then often partner with organizations on the other side, you know, in China. The word think tank means something slightly different there; there's less of a civil society, so it might actually be PLA-affiliated, so-called think tanks. It could also be universities. Often the dialogue organizers are former government officials who still have friends in the government, who have a lot of high-ranking contacts and know a lot of experts, and then they invite a small handful of experts on each side, and philanthropists might pay for their flights. They might meet up in a third country if people don't feel safe in either country.
00:31:43
Speaker
You might rent out a hotel in Singapore, get a conference room, and get people together in a format that looks pretty similar to what you might see in official track one diplomacy. And this might take several days, and that's when you have meals together. And I've often heard people say that the real work gets done at the sidelines, over drinks at the bar. And you know, that might happen. Those multi-day dialogues would happen like twice a year or so, and over that time period, people start building trust and building understanding and building connections. And then there's the question of timing. So as we talked about, track two diplomacy can precede track one diplomacy. Does track one diplomacy then feed back into track two diplomacy? So if you have some official statement of friendliness between two countries, for example,
00:32:35
Speaker
or of conflict between two countries, how would that affect track two diplomacy? Yeah, so one thing that I learned when researching this and interviewing a lot of people who convene these kinds of talks is that a really useful function for track two is that it can serve as a kind of sandbox for track one. So in track one, you know, if you float a bad idea, it could ruin the whole discussion. But in track two, these people have sometimes grown to be friends. You can speak more openly and say, you know, hey, how would this idea land with you? So you can test the waters, check the Overton window, see what's within the bounds of possibility for the actual track one diplomacy. Similarly, track twos can be used to provide the technical and scientific expertise that track ones lack. So I think AI safety is an example
00:33:28
Speaker
of where we might see this happening. And in fact, we've previously supported Brookings on some of its AI dialogues, and they recently released a statement saying, you know, we'll continue even as track one starts, because we've helped to identify what we can talk about and specific things that might be useful in track one. And then similarly, because the political environment changes so often, and especially because the US has elections so often, it can be kind of a tough negotiating partner. So track two can also preserve some of the momentum that you see in track one if there's a political change,
00:34:06
Speaker
and keep some of the talks going. So say a new president is elected, and they decide they're no longer interested in engaging China on AI safety; some of what used to be official talks can continue at the track two level until the political climate is more favorable. So do we have historical examples where track two diplomacy has succeeded? Yeah, there are a ton of examples here. One good example is the track two dialogues that helped lead to the Limited Test Ban Treaty of 1963, which banned nuclear weapons tests in the atmosphere, outer space, and underwater. And there you saw some object-level problem solving where,
00:34:52
Speaker
according to the accounts I've seen, track two discussions between scientists helped to play an important role in agreeing on the technical solutions for monitoring this treaty. There have also been off-the-record track two meetings throughout the 20th and 21st centuries between Israeli and Palestinian leaders; those allegedly helped lead to the Oslo Accords. US-Iran track two dialogues helped lead to the JCPOA, or the Iran deal. One famous example is the Pugwash dialogues and the Anti-Ballistic Missile Treaty.
00:35:27
Speaker
So in the 60s, there's this MIT scientist who presents at these Pugwash dialogues on some issues related to ABM systems. And then they talk about this with the Soviet scientists, and the Soviet scientists randomly meet up with Soviet leadership at a New Year's Eve party; separately, one of them is married to a senior Soviet leader. And through all of these interactions driven by the Pugwash dialogues, this eventually leads to the 1972 ABM Treaty. There's a book called Unarmed Forces, actually right behind me there, that traces this in detail, and I recommend it to anybody who's interested. More recently, and I don't have a lot of visibility into this, but there was this war scare during the summer and fall of 2020 where the Chinese were apparently
00:36:17
Speaker
quite worried that Trump was going to do something in the South China Sea, and a lot of Chinese experts and people close to the policymaking process, as I understand it, raised this fear through track two dialogues, which then fed their way up, and ultimately Mark Milley called his Chinese counterparts after learning about these fears and was like, you know, we're not trying to attack you; if we were, we would let you know. So yeah, there are tons of examples here that we can point to. You know, the counterfactual is always difficult to tease out here, but it does seem, at least from some of the more detailed case studies, that there is something real here that has worked in the past.
00:37:01
Speaker
Yeah. Do you think track two diplomacy is getting easier with modern communications technology? Is it the case, for example, that because you can reach people across borders so easily now, this would make this kind of informal communication easier, and therefore track two diplomacy easier? Yeah, so I looked into this a lot, because you can imagine from a philanthropic side, funding these kinds of exchanges over video conference is much, much cheaper than flying people out to Singapore and paying for their hotel and their food and whatever. And I think on one level, exchanges are getting easier. But on another level, you have to worry about
00:37:44
Speaker
people listening in, and people feeling comfortable enough to talk. For example, my understanding is that Chinese experts are sent with talking points to read out at the official sessions, and those aren't that useful. But then afterwards, at the sidelines, you can talk more openly, and that's just missing when you do it over Zoom or video conference. But I do think that one big benefit of these dialogues is that they can help establish this transnational community of experts, of people who care about these issues. They might exchange phone numbers and emails and be able to contact each other during a crisis, serving as kind of pseudo-hotlines. So there are definitely benefits to it. But I think, from having interviewed a lot of these people who convene track two diplomacy, people who
00:38:39
Speaker
take part in it as policymakers, there is something that gets lost when you're not in person, and that something seems to be among the most valuable things about these exchanges.

Hotlines: Crisis Management Tools

00:38:51
Speaker
Yeah. So it might also be about human interaction, and feeling like, yes, because we've met in person, we are now connected in a deeper way and we are more friendly, and I know this is a real human and not just something on the screen. So this might also get lost if we are doing track two over video conferencing. How big of an effect do you think that has? Like this nonverbal communication and meeting in physical space?
00:39:20
Speaker
Yeah, I think it's actually pretty huge. There was one person I interviewed who said that the Chinese side, I guess, had sent somebody to just sit in and listen. This was a pretty high-ranking person, and they didn't say anything, but people watched their facial expressions in response to what was being said. So those kinds of cues are, again, fed back and give valuable information about what risk mitigation measures are actually tractable. You mentioned hotlines. Let's talk more about that. How do we define a hotline? Is it a more technically advanced version of video conferencing, or how do you think about hotlines?
00:40:04
Speaker
The really basic way of thinking about hotlines is that they're just direct communications links between leaders or high-ranking officials of a state that have been institutionalized in specific ways. So the first hotline was established in 1963, right after the Cuban Missile Crisis, because people realized that traditional diplomatic communication was way too slow and unreliable during the nuclear age. There's one example where Soviet diplomats were trying to send a telegram and had to give it to a messenger boy to get it sent, and there's this great quote that I have here from one of the diplomats saying, quote, we at the embassy could only pray that he would take it to the telegraph office without delay and not stop to chat on the way with some girl. And then at the end of the crisis, Khrushchev
00:40:56
Speaker
wanted to broadcast this agreement over the radio to make sure it got to the US as quickly as possible. But the person who was meant to physically carry the message into the radio office got stuck in the elevator and had to pass it through the slits in the elevator doors. All of this insane stuff then led to the creation of this direct communications link. It was originally just text-based, so you could just send text messages to the other side. Yeah, so today there are a lot more hotlines between different states, and they're more sophisticated. They run via satellite rather than cable, and they sometimes include video conferencing capabilities, or something that looks closer to email rather than teletype, which is what it was at the beginning.
00:41:48
Speaker
What do we know about how expensive hotlines are to set up? I mean, you can imagine the encryption technology, and the technology in general, must be pretty advanced for each side to trust that this hotline is secure and accurate and so on. Is it expensive to set up a hotline? I think compared to a lot of things in the defense world, it's surprisingly cheap to set up hotlines. Again, they're just communications links that run via satellite. I actually talked to somebody recently who had direct experience with the US-China defense telephone link, and I guess there's just like a
00:42:28
Speaker
black box in the Pentagon with Chinese encryption tech inside of it that the hotline terminal runs through, and there's the same thing on the Chinese side. But basically, this is just encrypting communications. It's pretty standard stuff. It's really a fairly cheap intervention, which might explain why we see a decent amount of these. So what are the advantages of having hotlines? You have a report on this where you mention a number of advantages, where the first one is to do crisis management between two countries. How would that work in practice? Maybe we could talk about historical examples, but then also how this could work between the US and China today.
00:43:14
Speaker
Yeah, so crisis management is the obvious one of the three that I talk about; the other two are war limitation and war termination. And crisis management is just allowing leaders to communicate quickly with each other during a crisis, and ideally de-escalate that crisis. So you can imagine, you know, an accidental missile launch. If you're one country, you're trying to call the other and say, like, sorry, we messed up, this is just one missile, we'll give you exactly its flight path, we'll help you shoot it down. You've seen the movie Dr. Strangelove, and that's kind of a big plot line there. But similarly, you might imagine that between the US and China,
00:43:55
Speaker
in the South China Sea or the Taiwan Strait, there might be certain misunderstandings. And you might try to call the other side and say, hey, our autonomous ships or our drone swarm acted in a way that we didn't expect, this is not us. And you can try to de-escalate a crisis in that way. We saw this a bit during the Cold War. Obviously, a lot of hotline use is not yet public, but there are some transcripts you can go and read through between the US and the Soviets, where, say, during the Arab-Israeli War,
00:44:37
Speaker
they constantly talk about, hey, we're here, you're there, let's do our best to not let any of this escalate and draw either of us into this conflict. How could you trust such communication? Wouldn't you immediately assume that, say, if you get a message saying, oh, we accidentally shot a nuclear weapon at your country, that this is a fake and an attempt at lulling you into a false sense of security? Yeah, you very well might, but what hotlines do is they give you, at the very least, the option of trying to de-escalate. They can definitely be misused; they're not a silver bullet, and they have to be seen in the broader context of the bilateral relationship they're used in. But again, even at the height of the Cold War, we see these countries trusting each other's messages and being honest about what's going on and trying to say,
00:45:30
Speaker
when nuclear weapons are potentially in play, we should not mess around. And again, I know you recently had Annie Jacobsen on the podcast; I really enjoyed that episode. But it's very possible in some situations to tell an accident from an intentional aggressive move. What do we know about the state of hotlines between the US and China? What do we know in general about how they could communicate in a crisis? Basically, the state of US-China crisis communications currently is very bad. There are technically two channels, or at least two channels that we know about publicly. The first is a political hotline, the Beijing-Washington hotline, established in
00:46:17
Speaker
1998. Actually, it was first proposed in 1971, but the US never heard back from China, so it was finally established in 1998 under Clinton. Then the second link is the Defense Telephone Link, or DTL, which was established in 2008. So those are the two channels. But the big problem is that China often doesn't answer when there is a crisis, and there are many quotes from US officials talking about this. One example that you might be familiar with was the Hainan Island incident in 2001. Basically, a Chinese fighter jet got too close to a US spy plane, and the spy plane had to make an emergency landing on Hainan Island, and obviously it
00:47:11
Speaker
contained all of this secret tech and information. The crew is frantically trying to destroy as much of that as possible; if you read reports of the incident, apparently they even poured coffee on some of the equipment. But anyway, they get captured and interrogated, and the whole time, the US is frantically trying to reach China, trying to prevent an international incident from escalating into something much worse. But they don't get an answer, and they don't get an answer until 12 hours later. And obviously, this was 2001, when the relationship was much, much less tense than it is now. And 12 hours is just way too slow when we're talking about the speed of war
00:47:55
Speaker
in the age of nuclear weapons and AI-enabled warfare. And we saw the same thing happening again last year with the balloon incident, where they tried calling and calling and nobody answered. And so is this a deliberate tactic from the Chinese side, or what explains this behavior? Yeah, so the short answer is I don't know, and I'm not a China expert. My core expertise is more philanthropic allocation and philanthropic strategy, and I'd be excited to see and potentially fund more research projects diving into this issue. But I've interviewed a lot of people and have a couple of possible
00:48:34
Speaker
explanations we can discuss. And I think those explanations help guide, again, philanthropic strategy here. So the first one is the one that you've given: that this is a deliberate tactic. Some officials in China, and I think we shouldn't talk as if there's one view in China, but some officials may view these channels as tools for having crises resolved on their own terms rather than de-escalating them. I interviewed one former policymaker who used to work on the team that used the US-China hotline, and that was his view. There's basically this big misunderstanding between the two countries
00:49:10
Speaker
on how these channels ought to be used. To get a bit more clear on this, there's this perception that sometimes comes up when the Chinese talk about this in track two dialogues: they feel that the US is the more powerful country, and that for that reason crisis management is more helpful to the US, because it provides a safety net and enables more aggressive tactics in their region. So that's one explanation, and I think there's something to it. Another explanation people give is just that the US and China haven't had a Cuban Missile Crisis, so China doesn't have this institutionalized understanding of how important this channel is. I personally don't find that explanation that convincing; there are senior officials in the PLA and Chinese strategic documents
00:49:57
Speaker
that all emphasize the importance of having crisis communications channels. Plus, obviously, they study the Cuban Missile Crisis just like everybody else. And again, we should remember that Chinese experts aren't one blob. There are different views; there are experts in China who really do see the value of the hotlines. So the more I've looked into this, the more I actually think the explanations that are most convincing here are less about Chinese attitudes and more about the organization and structure of these links themselves. On the one side, the Chinese party-state system keeps these mechanisms from being fast. As I understand it, the PLA officials who man these hotlines are not allowed to communicate with foreigners without first clearing all the messages with more senior political CCP leadership. So
00:50:50
Speaker
you have to run everything up and down the chain of command, and that's slow and it takes time. Some of the experts I've spoken to basically summarize this as: the people who know what's going on aren't authorized to talk, and the people who are authorized to talk don't know what's going on. In addition to that, the channels are called hotlines, but unlike the US-Russia hotline, they're not made to be fast. I learned you actually have to give 48 hours' notice that you want to schedule a call
00:51:22
Speaker
to talk about the thing that you want to talk about. So that's, to me, just insane, right? It's not at all like what we have with Russia. It's more this game of phone tag where you have to call them and be like, hey, we want to talk about something, and then they say, okay, we can talk about something, and then you have to schedule a time to talk. That's not a real hotline. And I think we could fix it pretty easily, actually. You could fund think tanks, you could fund research institutes, you could fund track two dialogues to look into specifically how you can work with the Chinese to make the system more efficient on their end, to make it work with their political system. You might look into maybe a text-based-only system rather than having video conferencing; you might look into something that's
00:52:07
Speaker
made only for nuclear issues rather than military issues in general, set aside as this very high-level thing that's only used during the most extreme crises, because many people I talked to said the Chinese probably would answer quickly if they were certain that this was for a high-level crisis and not just the US trying to do whatever they want to do. And you might, I think, given the times, also want to add certain other global catastrophic risk-relevant tech into that channel. But again, the US could do it with the Soviet Union, and the Soviet Union's political organization had some structural features that were actually
00:52:48
Speaker
similar to the Chinese system. And I think if you work with the Chinese on this, it might take a while, but you could come up with something that actually deserves the title of hotline. So that's the US-China communications piece. What about countries that currently do not have hotlines? Which two countries would benefit the most from establishing such hotlines? Yeah, so actually most of the major rivalries that you could think about already have hotlines: India and Pakistan, South Korea and North Korea. So my annoying answer here is that the US and China are the ones that could benefit most from establishing a real hotline. Say you have some form of conflict. I think perhaps the first thing that would be destroyed is communications infrastructure,
00:53:38
Speaker
and so you might worry that exactly when you need it the most, the hotlines won't work. So is this simply a question of having multiple hotlines, of having redundancy in the system, or is there another workaround to that problem? Yeah, so this is kind of the really big problem that I think hasn't gotten nearly enough attention. We talked earlier about the three functions of hotlines: crisis management, war limitation, and war termination. And you can't do the last two if your hotline is not designed to be survivable. That means being able to withstand at least the early phases of a war. You mentioned hotline redundancy. Yeah, so one thing we want is for there to be many different options for contacting leaders during a crisis, in case one of those options fails.
00:54:33
Speaker
So early on with the US-Soviet hotline, there were some pretty wild failures that we know about. At one point, I think it was a farmer in Finland who accidentally severed a hotline cable while plowing his fields. And another time, a freighter in Denmark ran aground and accidentally cut an undersea cable when it did that. And of course, those are just the ones that we know about publicly; it's kind of the tip of the iceberg, and I can only imagine the shitshow that's still classified. Now the hotlines run mostly via satellite, but you still want all kinds of different options in case satellites are down, and that's why redundancy matters. But this hasn't really received the attention that it should. Nuclear weapon states take a lot of care to make sure that their NC3 systems, the systems that let them control their nuclear weapons, can survive a war,
00:55:29
Speaker
and so have many different redundant channels built in. But from the people I've talked to, this just doesn't quite seem to be the case for the hotlines. We've talked already about the farmer plowing his fields and so on. But yeah, as you said, it's even more likely that our communication systems would be completely down once a major war breaks out, because both sides are very clear publicly that they would target communication systems in the early phases of a war. It just makes sense. They might also go down because of the effects of nuclear weapons, of EMP effects. So unless we take specific steps
00:56:06
Speaker
to make the hotlines survivable, they're not going to survive. And, you know, it's hard to figure this out from a public perspective. I mean, I've interviewed a bunch of people trying to figure it out, but basically, NC3 survivability is a really sensitive topic and nobody wants to talk about it. But when it comes to interstate communications, pretty much everyone I've talked to has basically said, yeah, once a war starts, the way we have these set up, all bets are off. So I think there's another great opportunity for philanthropy here: to advocate to policymakers for better hotline survivability. Again, fund think tanks, fund track two dialogues, fund policy advocacy. I think this is a technical problem that we can work on,
00:56:52
Speaker
and that it's extremely important to work on, because if we don't do it, there might not be any way to communicate during a nuclear war, which might make it that much harder to limit escalation. Do satellites make this any easier? In a world where you don't have to run cables anymore, is it easier to make your hotlines survivable? Yeah, this is a good question. And the hotlines do now, as I understand it, mostly run via satellite systems. But again, cyber operations, operations in space, and anti-satellite operations would be a major part of the early phases of a war:
00:57:33
Speaker
you would want to disable the other side's space systems, and you very well might take down the hotline-relevant satellites with that. I think there's a chance that with bigger constellations and more redundancies that might change, but that's a technical question that I'm not fully qualified to talk about. Do you think it would be better to have a lot of hotlines? So we can imagine hotlines, for example, between the Chinese Navy and the US Navy, and the Chinese Army and the US Army, and so on, having a bunch of different communications channels. Or would this make it more confusing, and you wouldn't know which message to rely on? Yeah, so I think generally it's most important to connect high-level leaders, and you want to preserve a single point of contact and make messages as unconfusing as possible. So in terms of prioritizing
00:58:29
Speaker
what kinds of issues we might want to work on, I would heavily prioritize keeping fewer lines with high-level leadership, in part, again, because in some countries, military officials just aren't authorized to talk on these kinds of channels without first running it through higher-level political leadership. Think about how you can craft a message that is unambiguous. This can be extremely difficult, and it can be difficult to not be misunderstood, to transform your intention into some text that can be understood precisely by the other side. Is this something that might be worth looking into? I think it is, and I think it'd be important to study past examples of misunderstanding via these systems. And one complicating factor is that in the event where a war has already broken out, and you're trying to limit that war and potentially bring it to an end,
00:59:26
Speaker
the bandwidth of communications that you're able to use might not allow for very detailed exchanges, right? It really might just be text-based, and a lot of nuance might be lost. But I think it's worth comparing that to the counterfactual where you don't have a hotline. So think about how hotlines can limit and terminate wars, right? At some point, you need to actually stop fighting, and ideally the point at which you stop fighting is prior to turning both of your countries to rubble and possibly killing a large fraction of the world's population. And you could do that with tacit bargaining. The famous Cold War game theorist Thomas Schelling has a lot of good work from the 50s and 60s on tacit bargaining, and the idea that you can sometimes
01:00:11
Speaker
reach agreement without being able to communicate directly. The classic example is: you're supposed to meet your friend in New York City on a certain date, you don't have a phone, you don't know where and when, and your friend doesn't either. That sounds really hard, but Schelling argues you can think about what your friend might think, and know that your friend is thinking about what you might think, and you might agree on something obvious; I think Schelling says, let's meet at 12 noon at Grand Central Station. But tacit bargaining is really, really hard, and it's really easy to miscommunicate. Maybe you think it makes sense to meet at noon at Grand Central Station; meanwhile, I'm waiting for you in Times Square. And it might be much easier
01:01:01
Speaker
to use tacit bargaining during a war to communicate aggressive intent than it is to communicate peaceful intent, because your actions communicate aggression. If you just bomb my country again, I won't go, I wonder if he's ready for peace talks. So hotlines let you communicate explicitly during wars, let you decide on limits. You might say, look, we're in a war, but I'm not going to target Moscow if you don't target Washington. And they let you communicate peaceful intent more easily; it's much easier when you have these explicit bargaining options. You might even say, okay, you've won, I give up.
01:01:38
Speaker
Let's end this before we kill everyone. So it's true that communicating unambiguously with hotlines is difficult.

Post-War Risk Reduction Strategies

01:01:46
Speaker
Communicating unambiguously and communicating non-aggressive intent without hotlines is much, much harder. You have researched how we can decrease risks of further escalation after a war has broken out. So perhaps, I mean, we've just talked about hotlines; perhaps hotlines have a role to play there. But what options do we have for decreasing risks after a war has broken out?
01:02:12
Speaker
Yeah, so that's a really good question. We have two reports that go into this in more detail for anyone who wants to read up on it. One is Global Catastrophic Nuclear Risk; the other one is Philanthropy to the Right of Boom, which talks about why we should care about interventions that seek to decrease risk after a war has already broken out, and we can talk about that. But yeah, options might include hotlines; they might also include targeting policy, which currently is not optimized to minimize damage to the world. We recently made a large grant to a project at the Carnegie Endowment for International Peace.
01:02:52
Speaker
The project is called Averting Armageddon, and one of the things they think about is nuclear winter. I know you've had guests on the podcast previously who have talked about nuclear winter, but you might think about what policies you can come up with before a war that might decrease the probability of nuclear winter, and that could save potentially hundreds of millions of lives. Food stockpiling is another option. You might think about whether there are other mechanisms, like hotlines, that help to limit wars once they've broken out but prior to escalating to an all-out thermonuclear exchange. You might think about continuity of government planning, which I talked about earlier in the podcast. Yeah, there's this whole range of interventions that currently receives very little attention from other
01:03:43
Speaker
philanthropic funders and is kind of ideologically tainted as being this Cold War idea. But I think it's a really high-impact area, and because it's so neglected, we may expect there to be low-hanging fruit. I think one worry with this so-called right-of-boom philanthropy, so after the bombs have dropped, is that it might inadvertently increase the risk of nuclear war, or war in general, breaking out in the first place. And the reasoning would be something like: if a leader expects that he might personally survive, or that his population might survive, because we have these interventions for de-escalating after a war has broken out, he might be more likely to initiate a war in the first place.
01:04:26
Speaker
This is kind of an indirect effect, and it relies on the psychology of world leaders and so on. Do you think that's plausible, and do you think this might be what is stopping philanthropists from funding right-of-boom philanthropy? Yeah, so this is a really important issue and one we should think about really closely, right? Because if it were true, then this would potentially be a bad intervention, and potentially the neglect of these kinds of interventions would be rational. So first of all, I interviewed a bunch of people to figure out, you know, why aren't you funding this? This is not usually the explanation that comes up. But again, it's an explanation worth talking about, right? So the idea is that risk is a function of probability and consequence, and the worry is that by decreasing the consequences of nuclear war, you might be increasing its probability.
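To make that probability-times-consequence framing concrete, here's a toy sketch in Python. All the numbers are made up for illustration; they are not estimates from the conversation.

```python
# Toy illustration of risk as probability x consequence.
# If right-of-boom interventions halve the consequences of a nuclear war
# while leaving the probability of war unchanged (as argued above, these
# consequences may not enter leaders' decision calculus at all), then
# expected harm falls proportionally.

def expected_deaths(p_war: float, deaths_if_war: float) -> float:
    """Expected deaths = probability of war times deaths given war."""
    return p_war * deaths_if_war

# Made-up inputs: a 1% chance of war, with and without mitigation.
baseline = expected_deaths(0.01, 200_000_000)   # about 2,000,000 expected deaths
mitigated = expected_deaths(0.01, 100_000_000)  # about 1,000,000 expected deaths

print(mitigated / baseline)  # 0.5: expected harm is halved
```

The point of the sketch is only that, if the probability term is genuinely unaffected by mitigation, reducing consequences strictly reduces risk.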
01:05:19
Speaker
A strong theoretical reason not to worry about this, though, is scope insensitivity. So the very thing that makes it hard to get people to care about catastrophic risks and to care about effective giving is actually helpful here. At the numbers we're talking about, the difference between 100 million dead and 200 million dead is huge when it comes to the cost-effectiveness of philanthropic spending. I should just interrupt myself here: we throw around these numbers so quickly, it's easy to forget just how horrible that would be and what an unprecedented event it would be. So the difference between 100 million dead and 200 million dead is huge, but even 100 million dead is so bad that we should do our best to prevent this from happening in the first place. But when we're thinking about cost-effectiveness, we want to think about
01:06:09
Speaker
where we might put the marginal dollars. But again, we're talking about considerations that might not even come up in the decision-making calculus in the minds of leaders, right? So maybe we can tackle this with a specific example: nuclear winter. And here, I think, is a rare case where we do have some empirical evidence about the priorities of our countries' leadership. So let's say we fund all these potential interventions for mitigating nuclear winter after nuclear war breaks out: maybe we're investing in food security, maybe we're trying to affect targeting policy. Now, what are the ways that this might make nuclear war more likely? That would be if decision-makers previously felt that nuclear winter was a bad enough reason to not start a
01:07:01
Speaker
nuclear war. But we know actually that nuclear winter doesn't even enter the decision-making calculus at all. Many leaders don't believe in this effect, and you know, we can talk about how extreme it might be, but I think the fundamental science behind it is actually sound. But it just doesn't even enter their calculus in the first place. Another reason is that nuclear winter is likely to affect people already close to starvation, so likely to affect poor people in the Global South. And you know, if there's one thing that leaders of rich countries don't care enough about, it's poor people in the Global South. If this mattered to them, we would expect much more action in the first place. So
01:07:50
Speaker
that sort of dynamic, the scope insensitivity, the inputs into the psychology of leaders in the first place: if these considerations don't even enter their minds, then taking away that risk shouldn't increase the probability of war. So nuclear philanthropy is facing a funding shortfall. Why do you think that is, and what can be done about it? Yeah, so the immediate reason that nuclear philanthropy is facing this funding shortfall is that the single largest funder in the field, the MacArthur Foundation, has decided to withdraw from the field. So last year, 2023, they made their final grants.
01:08:30
Speaker
Before this withdrawal, the field was already relying on a fairly meager $47 million a year or so. MacArthur accounted for about $15 million of that. And I'm talking in terms of point estimates, but obviously these numbers change a bit year to year. So roughly, expected funding for the field is about $32 million now. In the past, I've compared this to the budget of the movie Oppenheimer, which was about $100 million; let's say we're spending, as a society, three times as much on a single movie about nuclear war as we're spending on preventing nuclear war. Maybe a better comparison: the three CEOs of Lockheed Martin, Boeing, and Raytheon alone took home more than $60 million in 2021. So that's twice as much, again, as we're spending on preventing nuclear war. I think it says a lot about our priorities as a society. And yeah, this is a problem that we're trying to work on at Founders Pledge, trying to get people more excited about reducing the risks of nuclear war. It's a big problem. It's also potentially an opportunity. It means that relatively modest amounts of money can actually make a big difference in shaping the field, and you can really
01:09:43
Speaker
make a big splash in terms of mitigating these risks. Potentially the broader background of why this isn't getting more funding might just be that since the Cold War, until very recently, nuclear weapons just haven't been at the top of people's minds the way that they were during the Cold War. You have a report on catastrophic biological risks, and in that report you talk about how we are lucky that states chose nuclear weapons, and not biological weapons, as their weapon of choice. You're saying this tongue-in-cheek, of course, but why is it that our situation would be even worse had states chosen biological weapons as their weapons of mass destruction?
01:10:28
Speaker
Yeah, so it is a bit tongue-in-cheek, and it's intended to illustrate structural reasons why, from a catastrophic risk perspective, we might worry especially about certain kinds of biological weapons. So, you know, say what you will about nuclear weapons: they're not contagious. Some biological weapons are contagious. As listeners of this podcast might know, initially some of the Manhattan Project scientists working on the atomic bomb worried that the nuclear chain reaction of that bomb wouldn't stop at the bomb itself, but that nitrogen in our atmosphere would fuse at these high temperatures, creating a kind of self-sustaining chain reaction
01:11:09
Speaker
that would ignite the Earth's atmosphere into a giant fireball. Turns out that fear was misplaced, and there is no chain reaction like that for nuclear weapons. So nuclear weapons don't reproduce, they don't self-replicate, they don't turn their victims into weapons, they don't mutate. For the report, I interviewed one policymaker who used to work at the White House on these issues. And he said, for bioweapons, especially human-to-human transmissible bioweapons, quote, "you're making fissile material out of every person on the planet." From an existential risk perspective, biological weapons that are human-to-human transmissible have structural features that make them much scarier. It creates a kind of exponential threat. And this tells us, without knowing much
01:11:54
Speaker
else about the threats, that there are really strong a priori reasons for worrying a lot about biological weapons. And on top of that, making nuclear weapons is a fairly resource-intensive process; it's hard and expensive. Depending on the trends that we see in the next couple of years, bioweapons could be much cheaper. So those are the reasons why I said that in the report. You mentioned to me that a US policymaker you interviewed told you that we are completely f'ed in terms of our situation with the biosecurity landscape. What did he mean by that? Do you agree with his assessment of the biosecurity landscape?

Biosecurity and Pandemic Preparedness

01:12:35
Speaker
Yeah, I was kind of shocked when this person said this, but one big reason for the comment, I think, was that societal attention, even in the wake of the COVID-19 pandemic, just isn't focused nearly enough on the risk of the next big pandemic. A lot of people I spoke to pointed to this idea of pandemic fatigue, where people just don't want to talk about the pandemic anymore. They don't want to think about it. They don't want to pay for anything related to pandemic prevention. There are very few major funders who strategically prioritize catastrophic risks.
01:13:10
Speaker
That includes both private and public funders. Most of the funding is about naturally emerging pandemics. Most of it is list-based, meaning it targets specific lists of agents. But we know that COVID was not nearly the worst pandemic possible. Not only is nature constantly sparking new outbreaks, but humans are intentionally engineering pathogens to make them more dangerous, with both good intentions and bad intentions, like terrorist groups and doomsday cults trying to engineer the worst possible pandemic to bring down modern civilization. And unfortunately, going back to this comment about us being completely f'ed, it turns out the very worst
01:13:50
Speaker
threats, extinction-level pandemics, are also among the most neglected. We sometimes think there's a secret team working on this somewhere, hidden away in the government, and that they have it under control. They just don't. A lot of this is pretty scary. If you think about the offense-defense balance of this, there's a quote from the Irish Republican Army after a failed attack in 1984 that I think encapsulates some of these dynamics: "Remember, we only have to be lucky once. You will have to be lucky always." So to some extent, it is a matter of time until something much, much worse than COVID happens. But I think I'm more optimistic than this person, because I think philanthropy can actually do a lot to help mitigate these risks.
01:14:42
Speaker
Yeah. So your organization, Founders Pledge, has recently made some grants in this area. Maybe you could talk about these grants and why you made them. Yes, we've made several grants and advised on several grants, as well as recommending different organizations. As an example, we've moved, I think, $3 million to an organization called IBBIS, the International Biosecurity and Biosafety Initiative for Science. I think Founders Pledge members have given about $300,000 to Blueprint. We recommend other organizations like the Johns Hopkins Center for Health Security, NTI bio, and SecureBio. And basically, the reason for these grants is easier to explain when I go back and talk about the threat landscape of biosecurity. So if you don't mind, I'll talk about that a bit, and then we can, I think, explain why we made these grants.
01:15:32
Speaker
The way I like to describe the landscape here, in addition to the lack of appropriate spending, is that the threat is large, growing, complex, and adaptive. This is at a really high level because, again, our work is geared toward philanthropic strategy. So first, the threat is large. We know from history that pandemics have been some of the worst things that have happened to humanity, and we know that they can spread really rapidly around the world. We know that somebody who was intentionally trying to do harm could do much, much more harm than we might expect from nature. And actually, I want to be more specific: the threat is superlinear. The worst kinds of pandemics are probably disproportionately worse than smaller outbreaks. Second, the threat is growing. Certain developments in the life sciences are making
01:16:27
Speaker
progress and finding cures to disease more effective, easier, more accessible, and simply cheaper. But those same trends might make it easier for bad actors to develop pandemic agents. For example, AI-powered biological design tools might be useful for accelerating science, but they could also be misused for creating pandemics that evade our early warning systems. And then next, the threat is complex. There are so many different actors, terrorists, states, individuals, and so many different sources, bats, pangolins, right? So many different threats, and it's only getting more complicated as the technology is advancing. So we can draw some kind of
01:17:12
Speaker
stylized conclusions about the threat landscape, but actually the whole thing is really amorphous, and predicting technological progress is hard. And finally, the extreme threats are adaptive. I think this is really important. Basically, when we think about engineered pandemics, whether by humans or AI, we have to build in the idea that we're facing intelligent adversaries who can respond to our defensive actions by shifting their own strategies. So we don't want to build up targeted defenses that might protect us against specific threats but just shift the risk; that's potentially a waste of money. And that kind of analysis helps us derive what Founders Pledge has been calling impact multipliers, these features of the world
01:18:00
Speaker
that help us rank different interventions relative to each other. I think you may have talked about this a bit in your conversation with my colleague Johannes about climate, but it's impact multipliers, derived from this overview of the landscape, that help us figure out what we should fund. So, back to these different organizations that we funded. Rather than trying to explain individual grants, which often rely on things like the strength of the specific organization or specific funding needs, one big impact multiplier that I really want to hammer home is that we should generally pursue pathogen-agnostic and threat-agnostic approaches to defend against the worst pandemics. So rather than trying to guess where the next pandemic will come from, philanthropists should support approaches that are robust to a range of different threats. This kind of follows directly from what we just said about the landscape of biosecurity. If you have a threat that's
01:18:51
Speaker
really complex, with many different possible pathogens, and on top of that you're facing an intelligent adversary who can shift the threat landscape under your feet, you want something that's robust to this uncertainty and to the adaptive nature of the threat. So that's why we fund organizations that work on things like germicidal ultraviolet light or pandemic-proof PPE: different defenses that might be useful against a whole wide range of different pathogens and different threats, without having to try to predict what the next pandemic will be. Instead, we can think strategically about how we can make our society more robust to this really complicated and uncertain threat.
01:19:35
Speaker
Yeah. So do we actually have these threat-agnostic technologies available? You might worry that if an adversary is too good at adapting, they might be able to undermine even those defensive technologies that we thought were threat-agnostic. What's your assessment here? One answer to this is that, yeah, no technology is perfect, but certain technologies might be able to restrict the range of options open to bad actors to such an extent that they just stop pursuing bioweapons in the first place. This idea is sometimes called deterrence by denial: denying the other side the option to even have the effect that they want.
01:20:16
Speaker
They might not even pursue these weapons in the first place. And evading really broad defenses is hard. Fortunately, weaponizing pathogens is currently still fairly hard; we worry a lot that that might get easier and more accessible. But if you have a defense with broad range, evading it might require skills that even the top scientists in the world just don't have. Defenses don't have to be perfect in order to make us much, much safer. Yeah. So you have a report where you specifically evaluate germicidal ultraviolet light as one of these potentially threat-agnostic, or in other words general, approaches to preventing pandemics. What are your conclusions from that report? How hopeful are you about germicidal ultraviolet light?
01:21:04
Speaker
Yeah, so my colleague Rosie and I wrote this report, published early this year: Germicidal UV and Disease Transmission Reduction. We really tried to get into the details of how certain parts of the UV spectrum can help to inactivate pathogens and reduce the transmission of disease. We've known for over 100 years that certain kinds of UV light can inactivate pathogens, and can do so really effectively. So I'm really excited about this. It's very possible that different wavelength ranges have different effects, but as a whole, germicidal UV has attributes that make it really attractive. It's not pathogen-specific, unlike, say, most
01:21:55
Speaker
vaccines or drugs. It has no development time once it's in place, so there's no 100 days or whatever to wait for a vaccine to roll out. We don't need to research dangerous pathogens to develop germicidal light, so we're not accidentally increasing the risk in that way. It's a kind of passive defense that doesn't require people to change their behavior; you kind of set it and forget it. And for some of these reasons, it can be helpful against pathogens that were explicitly designed to get around other defenses. Again, we should think of it as one tool in our toolbox, and we need multiple layers to our defense, but it's one that's been
01:22:36
Speaker
neglected relative to how promising it is, and it's one I'd be excited to see a lot more work on as a potential layer of our defense against pathogens. So how widespread would germicidal light have to be in order to be effective? Would it have to be in my car and my house and my workplace and public transport? Or where would it have to be? Would it be enough that it's available in private homes, for example, or in public spaces? Yeah, so obviously the answer to this question depends on what you're trying to do. I think very often people will talk mainly about public spaces, and it can be very effective there. That's where people mingle; it's where a lot of the big spread happens. And one reason
01:23:25
Speaker
why germicidal UV might actually be confined mostly to public spaces is that the conventional wavelength for some of this is about 254 nanometers. When that gets into contact with human skin and eyes, it can actually damage them, so the lamps usually have to be set up at the top of a room, above the inhabited zone, shining in such a way that the light doesn't come into contact with humans. That limits the spaces it can be installed in. But again, you might think of offices, hospitals, train stations, airports, all of these big spaces where people
01:24:09
Speaker
mingle and spread disease. Yeah. So we have other potentially threat-agnostic or general approaches, and here I'm thinking about early detection of new pandemics and rapid vaccine development, or the development of vaccines that are universal. How would you compare germicidal light to those two interventions? Yeah, so there are, again, some reasons to be especially excited about germicidal UV; they're mostly the ones I talked about earlier. But again, I want to be clear that we actually need many different layers of defense. This is in no way meant to be a silver bullet. We should have all of these things, and we should just cough up some money to pay for all of this. Obviously, the difficult question that we might face as grantmakers and as people advising philanthropists
01:24:57
Speaker
is, on the margin, where do extra resources make the most sense? And there, I am more excited about interventions that are pathogen-agnostic, like germicidal UV, than interventions that have to target specific pathogens or groups of pathogens. Because, you know, even if you develop something like a universal flu vaccine, that's great. But if you're facing an intelligent adversary, going back to what we talked about with the threat landscape, then they just won't use flu as their weapon of choice. And then you've just shifted the risk rather than decreased it.
01:25:32
Speaker
And maybe more generally, early detection and platforms for better and faster vaccine development are great, but they also receive more funding than some of these stranger, niche things like germicidal UV. So again, you might expect the marginal value of philanthropic funds to be higher in those kinds of spaces; you might expect there to be low-hanging fruit there. Yeah. So your day job is to think about how we might fund organizations that decrease the risk of civilization collapsing.

Historical Collapses and Modern Lessons

01:26:03
Speaker
There's also the historical aspect of thinking about how civilizations have collapsed in the past.
01:26:10
Speaker
You actually have some interest in this area, and you've done some relevant archaeology, if I'm not mistaken. So perhaps you could tell us about how you became interested in historical civilizational collapse and what you did with that interest. Yeah, absolutely. So I first really became interested in this, I guess, 10 years ago now, after working for a few months in college at an archaeological excavation in Guatemala. It was a site called Chachaclum, a kind of secondary city to the center of Motul de San Jose, a late classic Maya site, I mean, like, the 8th century and a bit beyond. And something about seeing the
01:26:58
Speaker
remains of this great civilization from so long ago has just kept me worried that this could happen to us, right? And that concern is part of what drives me in my current job. You know, people think the Maya collapse is a complicated example, and we might want to be careful about the word "collapse." But when I think about all of these civilizations in the past that thought they were going to stay forever, none of them have. And I think that's a reason for us to worry about the resilience of our current civilization, and to try to do as much as possible to prevent ours from collapsing.
01:27:41
Speaker
Yeah. How much can we actually learn from studying the history of previous civilizational collapses? You might worry that these civilizations are so far apart from ours in time that we can't really draw any relevant lessons from the collapse of the Roman Empire or the Aztecs, for example. What do you think? Yeah, I think it can still be useful, at least to instill humility that this could happen to us. There's no strong reason to think that it can't happen to us. But in addition to what you said, I think one big challenge of studying historical collapses is that motivated reasoning is very easy when evidence is scant, and you often see people's personal politics entering
01:28:28
Speaker
into their interpretations of what happened. So some archaeologists, some scholars of collapse, might say: oh, you know, what happened here is that the landlords got too greedy, inequality got too extreme, farming practices got too exploitative, and we stopped being in harmony with our Earth. Others might say: actually, what happened here is the state got too large, there were too many elites, taxes got too burdensome, and invasions by foreign forces weakened the empire. And it gets interesting because different groups have an interest in using archaeology to confirm their worldview. And of course, all that either side has are a couple of shards of ceramics, so it's very malleable to different interpretations. So maybe we can draw some patterns from collapse studies. There's a good recent book
01:29:18
Speaker
called How Worlds Collapse, and catastrophes, I think, often feature as the event that pushes a fragile society over the edge. So, you know, plagues, earthquakes, volcanic eruptions, external conflict and invasion, war, abrupt climate change from volcanic eruptions. And that last one might be analogous to the kinds of climate shocks that you might see from nuclear winter. And then there are also these theories
01:29:50
Speaker
that some people put forth about endogenous problems within a society. For example, there's this theory that there are diminishing returns to complexity: civilizations are these kind of problem-solving mechanisms, where some amount of added complexity helps them solve their problems, but that comes at a cost, and there are diminishing marginal returns to it. So sometimes civilizations reach a point where the costs outweigh the returns, and they can no longer sustain that structure. But maybe another thing we can learn is about this idea that somehow we'll recover to the place we are now, and that we'll recover with the
01:30:31
Speaker
values we have now. I just think we should be very careful about that. We talked about that earlier, the post-war problems, I think. Again, in this book that I mentioned, there's a great chapter by Haydn Belfield that talks about these issues. I think, for example, a lot of collapses have led to a time of warlords competing for political power in the remains of that civilization. That's maybe a common feature of some of them. And that situation could be a lot worse today, where those warlords could capture, for example, state biological weapons programs or nuclear weapons. So for example, if presidential authority is
01:31:12
Speaker
gone, and it's unclear who's in charge, you might expect different people to make the claim that they're next in line to succeed the president. There are really interesting constitutional questions about this, but some of these people might have access to extremely powerful and dangerous technologies. That should make us especially worried about collapse today. That all seems true to me, but on the other hand, you have something like the Second World War, which in some sense gave birth to the idea of human rights and a general aversion to war. So perhaps you could also see war and these kinds of massive global conflicts changing values in a positive direction.
01:31:55
Speaker
I have no idea how to evaluate what would happen in a potential next war, and of course, we should avoid it at almost all costs. But is there something definitive we can say about war changing values for the worse? Or do we have effects in both directions? Yeah, really good question. I personally would rather not risk it. Of course. But as you said, it's not a given that values would be worse. We can maybe think about the distribution of different value systems geographically, and also across history. Again, this is partly from the chapter that I talked about earlier.
01:32:34
Speaker
Basically, our current value system of liberal democracy with respect for human rights, with many, many flaws, is pretty good compared to everything we've had in the past, and it's actually fairly rare when we look at what happened in the past. So that might be one reason to think that, unless there's some sort of natural progression towards better moral systems, which, you know, we can talk about, but unless you think that's necessarily true, those kinds of values might not reemerge. You might also think that certain ways of organizing political systems might be more resilient in the wake of a collapse. So again, the continuity-of-government plans that we talked about earlier: we might expect some of those to look pretty
01:33:20
Speaker
authoritarian. And combining that with what we know about what can happen after a collapse, you might well expect, by default, the societies that emerge after a collapse to have hierarchical and potentially authoritarian systems. But yeah, as I said, we don't know; maybe we'll get lucky. Is civilization as a whole more resilient now than it was in the past? It's a hard question to answer. We can maybe think of some reasons why it might be more resilient. We have a scientific understanding of things like how disease spreads that we just didn't have in the past. We have
01:34:03
Speaker
instant communication around the world, we have vaccines and other technologies, we're more connected. We have international organizations that might help us coordinate and cooperate, and we have people who specialize in risk assessment and in trying to make systems more resilient to collapse. On the other hand, we keep coming up with dangerous new technologies, and the more science and technology advances, the more the surface area for catastrophic risks potentially increases. And partly because we're so interconnected, some people think that maybe we're more vulnerable to a truly global collapse. You know, in the past, if there was a collapse in Greece, that might not necessarily have affected China. But today, with
01:34:48
Speaker
global air travel, the spread of, say, pandemic pathogens could be much faster. Another reason might be that our supply chains are hyper-specialized and global, and everything is just-in-time. We don't really have redundancies and slack built into these systems; the incentive structures for that just aren't there. Our civilization is so complex, and it's a system of systems, and some of those systems might fail. And another thing I worry about is that in the process of a collapse, even if it's fairly localized, even if it's just one country, say North Korea, we might expect weapons of mass destruction to fall into the wrong hands. So
01:35:29
Speaker
state biodefense programs and offensive biological weapons programs: if the state itself collapses, who's there to guard those pathogens, and what might we expect competing factions to do with those kinds of powerful weapons? I'd say, yeah, on the whole, it's hard to say, but there are many reasons to worry that catastrophes could be more widespread and potentially more extreme today. Christian, it's been fantastic having you on the podcast. I'm so glad you joined us.