Introduction and AI Governance Risks
00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Robert Trager. Robert, maybe you want to start by introducing yourself to our audience.
00:00:10
Speaker
Oh, sure. Well, thanks so much, Gus. I'm so glad to be with you. I'm Robert Trager. I am a social scientist. I have for many years been at UCLA, where I was a professor, and I'm now moving from there to Oxford, where I will co-direct the Oxford Martin AI Governance Initiative. And I guess I'll have a position at the Blavatnik School
00:00:40
Speaker
And I'm also the International Governance Lead at the Center for the Governance of AI, which is also based in Oxford. So that's me.
AI Misuse and Democratization
00:00:50
Speaker
That's perfect. So it sounds like you know a lot about AI governance, which is exactly what we're going to talk about today. So what is it that we are trying to achieve with AI governance? What risks are we most afraid of?
00:01:04
Speaker
What risks are we most afraid of? What an interesting first question. There are so many things to be worried about, and there are different ways of categorizing risks. So one way that people sometimes do it is into the three buckets of misuse risks, accident risks, and structural risks. And I think that is a pretty sensible way to think about it.
00:01:33
Speaker
And there are all kinds of misuse risks, right? This technology is democratizing the ability to do things in general. So some of those things are misuses. And it's also potentially supercharging
00:01:51
Speaker
the other domains of science, and that presents some risks in itself. Because for instance, if biology is more effective at doing things, then it can be more effective at doing some things that are harmful. And so there are some risks associated with that. Of course there are. To go along with, of course, all of the positive aspects of
00:02:19
Speaker
the technology and the extraordinary things that it can do for the world. So two examples that I've heard thrown around are the possibility of creating an engineered virus to potentially create a pandemic that's worse than COVID-19, and the possibility of cyber attacks helped by these advanced AI capabilities. Do you think those would be the two kind of top priorities to prevent?
00:02:47
Speaker
Oh gosh, top priority is actually a really tough question. I think that those are important priorities. I wouldn't want to say they were the only priorities or even the top priorities. But I do think that those are some examples of things that progress, even near-term progress in AI,
00:03:11
Speaker
can really democratize the ability to do things in those areas. And so there are certainly things that we ought to be worried about. So when we talk about democratizing these abilities, one criticism that I've heard here is that, well, what's the difference between just searching with a search engine and finding this information, as opposed to getting it through perhaps a language model or gaining knowledge about biological weapons through one of these AIs?
00:03:41
Speaker
The threats we're worried about, how are they different from whatever information people can find online already now?
00:03:48
Speaker
I don't think we want to exaggerate the threats of the systems that exist today also. I think people are right. I remember talking to some of the people who were doing some red teaming on some recent systems, and they were coming from the national security establishment, and their view was basically, these things are making things up.
00:04:11
Speaker
And they're making things up from a knowledge base that is all unclassified information. And so how dangerous can they really be? And I think it's important to recognize those bounds on what
AI System Limitations and Incentives
00:04:26
Speaker
current systems can do. So I think there's a question about what the next generations of systems will be able to do, certainly. But even when it comes to current systems, I mean, you use this cyber example, being able to write code
00:04:44
Speaker
is very powerful. And so when you have people who now don't actually have to take a course in coding or many courses in coding in order to do something, but presumably can convince a language model to do it for them, that can be much easier and mean that a much wider section of the public can think about doing those things.
00:05:07
Speaker
So that's what we mean by democratizing. I think there are some risks, very significant ones, along those lines in bio and cyber that you mentioned and a whole host of other things. And while we're talking about these things, probably we shouldn't forget about social justice also and access to the technology and having a voice. That is, these technologies are affecting everyone. So everyone deserves a voice.
00:05:34
Speaker
A voice in what happens, how they're governed, how they evolve. And I don't think we're there right now. So really, there's a lot of work to be done on the governance front. And also, we should keep in mind how quickly these technologies are improving. So we might find ourselves continually surprised by new capabilities that we hadn't predicted beforehand. And so just because something isn't possible right now, maybe it's possible in six months or in two years or whatever.
00:06:03
Speaker
That's a great point. People, I think, when they're thinking about, for instance, what should be released to the public, what technology should be classified and not, often they think, well, is it dangerous today? But if there are 10 steps that are involved in making a technology that's dangerous and you release information about the first nine of them,
00:06:28
Speaker
Well, then you've really proliferated the technology. And we've made mistakes about that in the past. You know, in the past, with some of the techniques, for instance, for diffusing
00:06:42
Speaker
uranium, we thought, well, these techniques aren't going to work, but those may work. So we classified one set and not the other set, but it turned out to be the reverse. And as a result, some of those techniques are much more widely dispersed in the world than they otherwise would be.
00:06:59
Speaker
So I think these downstream effects are extremely important. I totally agree with you. And exactly, when it comes to AI, what can be built on top of current day systems, even just that, beyond what's the next iteration of the large language models or something like that, but what can be built and done with current systems with different forms of access? We don't know the answer to that question. So yeah, there's a whole range of risks, as you point out.
00:07:28
Speaker
Yeah, even if we stopped now and didn't train any larger language models, we would still have a lot to explore about the capabilities of a system like GPT-4, for example. I think we haven't exhausted what this system can do, even if we were just talking about implementing it in different ways or trying to get it to do new things.
00:07:50
Speaker
Yeah, I think that's right. I mean, I like the idea that others have talked about that we could think of a language model as kind of the system one, in the sense of Kahneman's system one and system two in the brain. I like that idea. It seems sort of right to me. And obviously in the brain, there are all these systems that are kind of built on top of other systems. And I think it's
International AI Governance Challenges
00:08:13
Speaker
the same. We don't really understand how those systems work in the brain. And we don't understand how they work in large language models.
00:08:20
Speaker
The sorts of interpretability that we have are several generations behind in terms of trying to figure out how these models are working. That's absolutely right. Even the current systems are a frontier to be investigated in terms of understanding them.
00:08:39
Speaker
If we return to the question of governance, AI governance for a moment, I think one important place to start is to talk about the incentives involved, just because institutions respond to incentives. So when you look over the landscape of governments and companies, what do you see as the incentives for the various governments involved and for the various companies involved?
00:09:02
Speaker
Well, motives are mixed. Of course they are. And there are lots of individuals who want to do the right thing out there. And it's often fascinating to watch them be a part of an organization that has an organizational imperative. So an individual might have all sorts of ideas, be they in a government or in a company, and they want to do one thing, but then there's
00:09:26
Speaker
an organization that has an incentive to, let's say, maintain its reputation. Or there's just a range of institutional incentives that make it hard, I think, for individuals to always do the things that they want to do. But the organizations, of course, they have mixed motives. That's what makes it such an interesting and complicated strategic landscape. And mixed motives, by the way, are exactly the sort of thing that is hard for
00:09:55
Speaker
AI strategic systems to deal with. We have superhuman systems that deal with the two-person zero-sum case. But when we're talking about a complex environment which involves both incentives to cooperate and incentives to compete, we don't really have superhuman systems, at least when there are multiple actors,
00:10:22
Speaker
in that case. So those are the complicated cases, and that's the world that we're in when it comes to the governments and all of the labs in the area.
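To make the zero-sum versus mixed-motive distinction concrete, here is a minimal toy illustration; the payoffs and the framing in terms of "restraint" versus "racing" are invented for the example and are not taken from the conversation.

```python
# Toy illustration with invented payoffs: a two-actor "mixed-motive" game.
# Each actor either cooperates (restrains risky deployment) or defects (races ahead).
# Unlike chess or Go, the payoffs do not sum to a constant, so the game is not
# zero-sum: both actors prefer mutual restraint to mutual racing, yet each is
# individually tempted to defect.

payoffs = {
    # (row action, column action): (row payoff, column payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual restraint
    ("cooperate", "defect"):    (0, 4),  # restrained actor falls behind
    ("defect",    "cooperate"): (4, 0),  # racing actor gains an edge
    ("defect",    "defect"):    (1, 1),  # both race, both bear the risk
}

def is_constant_sum(game):
    """True only if every outcome's payoffs sum to the same total (the zero-sum case)."""
    return len({sum(p) for p in game.values()}) == 1

print("Constant-sum?", is_constant_sum(payoffs))  # False: incentives both to cooperate and to compete
```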
00:10:38
Speaker
One account I've heard is just that both governments and companies are incentivized to just rush ahead. For companies, it's about gaining market share. For governments, it's about gaining geopolitical power. But isn't there another incentive for companies to create products that they can actually sell? These products must be at least somewhat safe for the consumer.
00:11:03
Speaker
No company is interested in selling a self-driving car that kills the driver. And for governments, there's a question of international reputation. And if you have accidents with your AI systems that might cause embarrassment and might weaken your alliances with your allies and so on, I think you're right about the question of mixed motives here. But which incentives do you think are strongest?
00:11:32
Speaker
Yeah, I think you're absolutely right that there are these incentives against allowing regulatory backlash, for instance. We really have seen that in other industries, most famously the nuclear industry: after Fukushima and Chernobyl, the industry experienced a huge contraction.
00:11:56
Speaker
So industries have an incentive to avoid that. Some of the self-regulation on the part of companies is them being preemptive and worried a bit about the regulation that's coming and therefore regulating themselves. So I think on the one hand that that's right that companies are worried about those things. On the other hand, they have these other incentives also, as you point out, to be first to market and to race quickly.
00:12:23
Speaker
And which of those wins out in a particular case is, I think, in a way unknown.
00:12:31
Speaker
But I think it's also fair to say that industry players and countries, when they're competing with other countries, they have some incentives which are different from the broad societal incentives. I don't think we can get away from that, even though they have some incentives that push them in the direction that maybe we as society would
00:12:54
Speaker
want to push them, in the sense of: well, they're worried about a regulatory backlash. That's great. They're worried about regulatory backlash; I want them to be worried about it. They're worried about bad regulation from governments and therefore they're self-regulating. I love that. The potential of bad regulation from governments is good in this case. Those things are great. On the other hand, I don't think we can rely
00:13:21
Speaker
on incentives being totally aligned because they won't be. There are still private incentives that these countries or institutions or even in some cases individuals
00:13:32
Speaker
have that are different from the general interest, even though they have some things that push them in the way that we would want them to be pushed. You've written about regulation and the need for international AI governance. So perhaps we could talk about why AI governance, why does this regulation have to be international in order to work? I think there are some questions that we don't fully know the answer to in order to figure out
00:14:01
Speaker
whether we really need lots of international governance or just some international governance. So I think there's a set of questions that we can start to ask ourselves to figure out what sorts of international governance are really important. But I guess what I would say is if we're not thinking about international governance, at the very least, we are not addressing some buckets of risk.
00:14:30
Speaker
So just to give you an example, and maybe we'll get to this later, but one of the questions that we can ask ourselves is, can we just regulate by controlling some, let's say, compute supply chains among a small set of allies? And maybe that can mitigate quite a lot of risks. I think that's a possibility.
00:14:59
Speaker
I don't think it's a certainty because, for instance, it might be the case that existing compute that's out there can produce quite a lot of risk already. And these things are still debated. So even if you don't, for instance, have access to the latest AI data center chips, you might be able to have some technical workarounds
00:15:26
Speaker
that allow you to use non-state-of-the-art chips, many more of them, maybe somewhat more slowly, but nevertheless use that compute that you as an actor have access to to do all sorts of things. So we don't really know the answer to that kind of question where
00:15:46
Speaker
lots of risks are going to come from. And then there are all kinds of related questions we don't know the answer to. We don't know the answer to whether, for instance, AI systems are going to be
00:15:57
Speaker
able to protect against other AI systems. So if the answer is yes, then that's great. Then you need less international governance, and that's one problem or set of problems that we don't have to worry about. But I would say that I don't think I would bet in that direction. At the very least, there are going to be some real trade-offs if that's the direction that we're going in. And so I think when we think about international governance,
00:16:26
Speaker
It's a question of addressing some of the risks that we can address through some feasible strategies that I think we have rather than relying on other strategies that are somewhat tenuous and might work, but we really don't have good reason to think that we're living in those worlds. Yeah, so are there types or categories of risks that international regulation is particularly well-suited to solve or to alleviate for us?
00:16:56
Speaker
Risks that international governance in particular is well suited to alleviate? Yeah, that's an interesting question. So I think that civilian governance is one area where we can really make progress. I think one of the reasons why we throw up hurdles in the way of international governance is because we think about the difficulty of trying to have arms control of some variety. And it's not controversial to say that that is very difficult.
00:17:25
Speaker
And we don't have to look far. We can look, for instance, at the attempt to regulate lethal autonomous weapons, more than a decade of attempts to do that in the context of the CCW, the Convention on Certain Conventional Weapons, through the United Nations in Geneva. And I think that has been effective in creating some norms. And I think that's important. But on the other hand, the advocates for regulation, they've really wanted positive law,
00:17:54
Speaker
and they haven't gotten that. And it probably is not a coincidence that they haven't gotten that. That is, 10 years ago that outcome probably could have been predicted.
Civilian vs Military AI Governance
00:18:07
Speaker
So arms control, we can talk about all sorts of analogies, it's very difficult. Civilian governance is I think the kind of thing that we have some viable models of, and I personally,
00:18:24
Speaker
along with some others, think that the models of things like the International Civil Aviation Organization, the International Maritime Organization, the Financial Action Task Force, these models are good ones. I don't have to go into that more, but that gets into some detail. Yeah, so when you say civilian AI, you're simply talking about non-military AI, is that correct?
00:18:46
Speaker
Yeah, I think we have a white paper that's coming out on this topic and we use a slightly different definition, but I think that definition is fine for our purposes. If we begin regulating on a global scale and have one set of regulations that apply to all of the world,
00:19:02
Speaker
Does this prevent us from discovering the best possible regulations? So my worry here would be that we can't experiment with different types of regulations, say that you have 200 countries each with their own regulation and then you monitor the outcomes and then see what works best. That seems to be something we are shut out from doing if we have one set of regulations for the entire world.
00:19:29
Speaker
If we set aside the issue of whether that's even feasible and whether we would have time to make such investigations and so on, but do you see a problem with applying one set of rules to the entire globe? Oh, we should not even attempt, absolutely should not attempt to apply one set of rules across all domains of AI governance.
00:19:54
Speaker
to the whole world. For instance, Europe and China and the US have different preferences over privacy regulation, and that's good. They should be allowed to have different approaches. Societal values should be reflected in national regulation, and we shouldn't attempt to quash that. On the other hand, there probably are some areas of, I would characterize them as
00:20:23
Speaker
minimal standards where we can consent and we can agree. And those are probably the things that should be internationalized.
00:20:33
Speaker
I agree with you that experimentation is important. I'm all for empirical experimentation. At other times and places, I've thought of myself as a theorist, but being a theorist, I know the limits of theory, and I really appreciate the opportunity for experimentation. I think it would be good here. On the other hand, it also depends on the stakes.
00:20:54
Speaker
We don't want to be experimenting. You can do inefficient experimentation also. So no, we don't want one-size-fits-all regulation. We want to agree on some minimal standards. We want to investigate whether those standards are the right standards. But in some cases, probably we don't want to have too much experimentation also.
00:21:13
Speaker
And so that set of minimal standards would apply globally and would be one size fits all in a sense, but with room for significant national variability within those kind of common standards.
00:21:27
Speaker
I think that's right. I think there are certain things you wouldn't want: no country in the world should release a system that made it very easy to do harms that other systems and actors couldn't defend against. That would be a pretty minimal standard. Or people worry about agentic AI. And if you have an AI which is power-seeking or something like that,
00:21:57
Speaker
then that would also be the sort of thing that you would want to prevent any actor from doing. But, you know, within those kind of basic limits, I think it's good for different societies to make different choices. Am I understanding you correctly that you're pessimistic about getting a common set of rules for military AI?
00:22:19
Speaker
That's a really interesting question. I think there are some things that I don't think we can give up on thinking about arms control, not at all. I think there are lots of things to think about. I just think that that is maybe the second thing to think about and civilian regulation because so much of the development is already going on within the private sector and it's an easier problem. I think it's a problem we should do first and
00:22:45
Speaker
maybe at the same time, we can be worried about what we can do in terms of the military side of things. Yeah, because it seems to me that if we leave out the military side of things, we might not have solved the problem at all. If we have dangerous AI systems in the US and Chinese militaries, then the problem is almost fully still there. Why would you start with the civilian side and then try to move to the military side?
00:23:12
Speaker
Well, the main reason is that it's what I think can be done in the near term. I think there are so many challenges on the military side. One thing that will at some point maybe be available is a nonproliferation regime with norms of use.
00:23:32
Speaker
And that can deal with a whole class of risks when it comes to, maybe not the countries that are absolutely at the forefront, but other countries. And any agreement like that almost certainly has to have a development component too in order to get any traction, as well as just from the point of view of societal justice. So there are things we can do on the military side, or there may be:
00:23:56
Speaker
a non-proliferation regime with some norms of use, although there are specific challenges when it comes to norms of use with respect to AI. Because, for instance, you know when a bomb goes off. So you can say, well, we should have a norm against that happening, because you know when it happens. On the other hand, when a lethal autonomous weapon is used, it's hard to say if the autonomous capabilities were actually engaged. And more broadly,
00:24:22
Speaker
it is hard to know if AI has actually been engaged and used. So I think as a result, that means that it's more difficult potentially to develop norms of use. And when we think about, for instance, deterrence or mutually assured destruction, mutually assured destruction is a contested term.
00:24:44
Speaker
But I think we have an idea about what we mean, and there's a popular idea about what we mean. And so we can just say that it's harder to have that sort of deterrence equilibrium when it is unclear whether a system was actually used. And so it's possible that we could really be in the world of, let's say, cyber technology, where the actors, the state actors,
00:25:09
Speaker
are doing all sorts of things to each other within some limits. But in terms of developing the capabilities to do things, they're doing everything that they can. And they don't even try, really, to have agreements to prevent each other from doing it because they don't see how they could be sure that their adversaries were actually complying with such an agreement. So it's possible that with advanced AI will be in a similar sort of a world.
00:25:37
Speaker
And that would mean it would be very hard to have regulations among major powers. But even then, I think there's the possibility for non-proliferation regimes, as I say, with a development component also. And what exactly do you mean by that in this context, in the context of AI? Non-proliferation of the AI systems themselves, or how exactly does it work?
00:26:03
Speaker
So AI is often talked about in terms of three parts, the data, the algorithms, and the compute. And I think we can think about nonproliferation of all those parts, actually. So there are some ideas.
00:26:16
Speaker
that may be, in fact, dangerous ideas. Just like we have classification and the so-called born secret doctrine in nuclear policy, we might need to say that, well, some ideas are not the sort of things that we're going to allow out into the general public
00:26:34
Speaker
or allow to some governments around the world. So when we're talking about computing hardware, it's easier for me to see how non-proliferation would work. You could talk about export controls. Yeah, it's a physical thing. When you're talking about data and algorithms, it becomes more difficult for me to see how that would work. As soon as something is online, uploaded somewhere, isn't it just out there? Wouldn't we need
00:27:01
Speaker
kind of military-grade information security in order to ensure nonproliferation of the algorithms and the data? Well, I think it's a huge topic, sort of cybersecurity in labs. I don't think we should at all give up on that. I think it's incredibly important. And we may need military-grade security. There's a justifiable focus on compute, which you also reflect.
00:27:25
Speaker
And I agree with that. But on the other hand, we don't really know if we're in a world where doing dangerous things requires lots of compute or not. We actually don't know that yet, for sure. And so I don't think we should rule out all the other things that we might have to do if we're in the other world. And no matter what world we're in, I think there are certain
00:27:50
Speaker
ideas and capabilities that probably we don't want being generally available. And so, you know, if you're talking about securing systems against a determined state adversary, that's one thing. If on the other hand you're talking about securing information against general knowledge or something like that, that's something else. And we should probably be investigating both of those avenues.
00:28:14
Speaker
Yeah, and also these three components can be traded off against each other. So if we have a strict limit on compute, companies might be able to invest heavily in algorithms or in data collection and management and thereby still make a lot of progress. And in that situation, you might even have what's called a hardware overhang so that your
00:28:34
Speaker
you're building up or improving your algorithms and you're improving the way you construct your training data, such that when the hardware becomes cheap enough, or when the hardware improves, you can make significant gains quite quickly.
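As a rough sketch of that overhang idea: effective training capability can be thought of as raw hardware times accumulated algorithmic efficiency, so progress stored up on the algorithm and data side can cash out suddenly when better hardware arrives. The FLOP budgets and the assumed efficiency-doubling rate below are made-up numbers for illustration only.

```python
# Toy numbers only: how algorithmic progress accumulated under a hardware cap can
# translate into a sudden capability jump (an "overhang") once better or cheaper
# hardware becomes available. The budgets and growth rate are assumptions for the
# example, not empirical estimates.

capped_hardware_flops   = 1e22   # raw FLOP budget available under a strict compute limit
uncapped_hardware_flops = 1e24   # budget once faster or cheaper hardware is accessible

def effective_compute(raw_flops, years_of_algorithmic_progress, efficiency_gain_per_year=2.0):
    """'Effective' compute: raw FLOPs scaled by accumulated algorithmic efficiency gains."""
    return raw_flops * efficiency_gain_per_year ** years_of_algorithmic_progress

today     = effective_compute(capped_hardware_flops, 0)
after_cap = effective_compute(capped_hardware_flops, 3)    # 3 years of algorithm and data work under the cap
overhang  = effective_compute(uncapped_hardware_flops, 3)  # same algorithms, better hardware

print(f"today:                {today:.1e}")
print(f"after 3 capped years: {after_cap:.1e}")  # 8x from algorithms alone
print(f"cap lifted:           {overhang:.1e}")   # a further 100x arrives essentially all at once
```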
00:28:50
Speaker
What we're trying to do here is to think about in advance how we might govern AI. And just if you look at the historical track record, you were talking about autonomous weapons. This is something that's pretty difficult to do. What we attempted to do with autonomous weapons was to think in advance, okay, this is a technology that might become dangerous in the future. Let's think about how to regulate it now before it's as dangerous as we fear it might become.
00:29:18
Speaker
How would you rate our track record here of trying to think something through in advance and regulate it? Oh, what's our track record across technologies? That's an interesting question. Well, there's something called the Collingridge dilemma. I always forget if it's a "ridge" or a "wood", but it's a "ridge", the Collingridge dilemma, which says that the time when you are able
00:29:42
Speaker
to effectively regulate is early on in a technology's life cycle. But the time when you actually know what regulation would be good, that comes only later. And actually, he thought that there was just simply no way to know what the right regulations would be early on. So he thought we had to deal with it some other way.
00:30:03
Speaker
But I think that what we're realizing now is that given the extraordinarily increasing capabilities of technology, we have no choice but to do anticipatory regulation because it could end up being the case that the next version of whatever technology has such an impact that if we didn't prepare for it, we'd be derelict. So I think you're absolutely right that
00:30:31
Speaker
that we need to do that. Now, do we have a good track record? No, no, not really such a great track record of thinking about technological regulation in advance.
Anticipatory Governance and Historical Lessons
00:30:46
Speaker
Yeah, so I think in the case of civilian technology governance, we have lots of examples of success stories, including for powerful technologies. Airplanes and boats and lots of other things are very powerful technologies and we manage to regulate them. On the other hand, on the
00:31:08
Speaker
arms control side, when we think about nuclear and biotechnology, I would say we haven't really been that successful. The nuclear analogy is an interesting one. There was an attempt, of course, the Acheson-Lilienthal plan and then the Baruch plan. There was an attempt to really internationalize the technology and, in fact, to create a sort of international monopoly
00:31:35
Speaker
over the nuclear industry, the entire nuclear industry. And of course, that didn't succeed. And there's a question about whether it would have been a good thing if it had succeeded. But that sort of anticipatory governance wasn't particularly successful in that case. Or we might think about the Biological Weapons Convention. I mean, that's an interesting one.
00:32:03
Speaker
which I think really shows us how difficult arms control is in some ways, paradoxically, because the Biological Weapons Convention is a case where many countries in the world came together and signed a treaty in which they agreed not to develop, not to stockpile, not to use this technology. But they didn't have any
00:32:32
Speaker
verification means. They didn't worry about that, really. And it turns out that there were violations, and not just little violations, but huge violations of this convention. And the Soviet Union had tens of thousands of people working on biological weapons. So I think that these sorts of agreements in many cases
00:33:02
Speaker
They have done some important things. The Biological Weapons Convention probably made it harder for actors around the world to develop biological weapons. It probably had, therefore, a non-proliferation effect. But it didn't achieve the letter of what it set out to do, which was to prevent any actor in the world
00:33:30
Speaker
from developing these sorts of weapons. So this question about the track record that we have, I think is a fascinating one. I would say we have some success stories. We have fewer success stories about anticipatory governance.
00:33:47
Speaker
In the case of regulating military technology, one of the interesting things is that in a way we were successful at being anticipatory, in a limited way. You know, if you think about the Outer Space Treaty or the ABM Treaty, for instance, these were things that were regulating technologies before they really existed. But the very fact that they were anticipatory in this sense
00:34:11
Speaker
is perhaps what allowed the treaties to happen in the first place. For instance, in the case of ABM, you might say that as it looked more and more like the technology was becoming real,
00:34:25
Speaker
the treaty, of course, went away. So I'm not sure we can quite count that as a success of anticipatory regulation. So maybe we could have gotten countries to sign on to AI regulation in 1960, for example, because there would have been no perceived cost to doing so.
00:34:45
Speaker
Exactly. But then would it have done any good? Because then once the technology evolved such that it was meaningful, they may have dropped out. Yeah, I think there's some other examples of that. And I think it's an important caveat when we're counting sort of successes and failures of attempts to regulate technology.
00:35:03
Speaker
We talked about incentives before, and I think we should touch upon some of the potential negative incentives of international regulation.
Transparency Challenges in AI Development
00:35:11
Speaker
So one thing I thought about was whether just rumors of international regulation could incentivize countries or companies within countries to be less transparent about what they're developing. So would it be the case that they publish less and they don't announce their breakthroughs because
00:35:33
Speaker
Well, if they don't announce anything that seems powerful, then they might be able to delay regulation. Do you think that's plausible and that's this potential downside?
00:35:45
Speaker
Yeah, so I think those are exactly the right worries to have on the one hand. And I think we have examples of that from other industries. So two that I really like are, I don't know that I like them, but I think they're very interesting and telling from the oil industry and the tobacco industry.
00:36:03
Speaker
Because in these cases, you had the potential for liability, and firms knew that maybe they could be liable for some of the things that they were doing. And so in some cases, that meant that their view was, well, better not to know if there are negative consequences to what we're doing. And they actually fired people as a result. They tried to make sure that they didn't
00:36:27
Speaker
have the ability to know things. And I think that is a real danger. There may even have been some cases of that already in the AI industry. I mean, it's speculative and I don't want to name names, but I think that there is at least one case where it sort of really looked that way. So I think regulation can do harm.
00:36:51
Speaker
That is a danger. Some of the other specific dangers that you mentioned like driving activity somewhere else are exactly the sort of thing that I think countries will be wary of and have been wary of. On the other hand, this is also an argument for international regulation, right? It's precisely a case where you're worried about regulatory standards having a so-called race to the bottom or maybe a race, maybe it doesn't get all the way to the bottom, but it goes down anyway and there's kind of corner cutting all around.
00:37:20
Speaker
And that is the kind of thing that you want to prevent with an international standard. And that's exactly what international standards can do sometimes, right? Because none of the actors necessarily want to regulate themselves to the degree that is optimal, but they're willing to regulate themselves because what they get out of the bargain is that everybody else gets regulated too, and they really are happy with that.
00:37:47
Speaker
So I do think that these things are a worry and what we have to be thinking about, but they're also the reason why we need to do it. Let's talk a bit about the security dilemma and AI. Perhaps let's start by explaining what the security dilemma is. I'm particularly interested in how it applies to the situation between China and the US.
00:38:11
Speaker
Yeah, so the security dilemma is a case where actions taken by one actor to make itself more safe are making other actors less safe. And absolutely, I think we see that in the case of AI. One of the interesting things with AI is that it's maybe harder to see the actions that other actors are or are not taking. That is to say, development is maybe even harder to see than it is in other areas. And so in some ways,
00:38:40
Speaker
You might not have the kind of spirals where one actor says, oh, I see what you're doing. The other actor says, oh, yeah, I see what you're doing too. And they kind of ratchet up. But on the other hand, they can have a kind of spiral in the mind, if you will. They have an idea about what others are doing. There's actually some attempts to model this formally.
00:39:03
Speaker
Stuart Armstrong and some others thought about this with their "Racing to the Precipice" paper. And then there have been some recent papers that I've been involved with, including one by Nicholas Emery-Xu, that think about exactly these kinds of issues. Is it better for the actors to know what each other are doing, or to not know and be able to guess?
00:39:27
Speaker
Well, the somewhat less dramatic answer is that it depends. It's not always good to know. It's not always good not to know in expectation. Just to take a non-AI example of the security dilemma, we could talk about the situation surrounding Taiwan where
00:39:47
Speaker
One account is that the U.S. is trying to encircle and arm various islands surrounding Taiwan, and China perceives this as a threat to their security. Then they respond by beefing up security, which the U.S. perceives as an aggression and so on. And that is kind of the spiraling down dynamics of the security dilemma. And the question is then whether AI as a technology changes this dilemma.
00:40:16
Speaker
As you mentioned, AI is less obviously visible. You can't necessarily see it with satellite technology. It seems to me that it would be better if China wasn't so worried about what the US was doing with AI, and the reverse would also be the case: that the US wasn't so worried about how China was doing with AI progress.
00:40:39
Speaker
I think it would be much better if they weren't worried and there's a question about how to make them not worried. And sometimes having information makes you more worried and sometimes it makes you less worried. One thing that information pretty much always does is allow for agreements because information is something that you can condition on. And so, for instance, if you know what the other side is doing, you can say, okay, well, we won't do it either if you don't do it.
00:41:05
Speaker
And that can lead to an agreement, as long as neither side has too strong an incentive to do that thing in the short term. Then you can have these kinds of punishment strategies through agreements. And so in that sense, information is good. But in other senses, if they know exactly what each other are doing, then maybe they realize that, okay, we're really
00:41:30
Speaker
close in some dimension, and then being just a little bit ahead is important. So those are the kind of cases where very often, although not always, having more information can be bad, because that then allows them to engage in these really, really sort of negative-outcome races where they're competing over very fine leads that they each might have or seek to have.
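To see how that intuition can be turned into a model, here is a toy Monte Carlo in the spirit of the racing-to-the-precipice literature mentioned above. It is not the model from those papers: the decision rule, the performance drag from safety spending, and all the numbers are invented purely for illustration.

```python
import random

# Toy race model: each team picks a safety level in [0, 1]; buying safety slows it
# down slightly. Teams that believe rivals are close behind skimp on safety. The
# winner is the team with the highest net performance, and a catastrophe occurs
# with probability equal to (1 - the winner's safety level).

def chosen_safety(perceived_lead, rng):
    """Invented decision rule: the larger the lead a team thinks it has, the more safety it buys."""
    return max(0.0, min(1.0, perceived_lead + rng.uniform(-0.1, 0.1)))

def p_catastrophe(n_teams=3, perceived_lead=0.5, trials=100_000, seed=0):
    rng = random.Random(seed)
    disasters = 0
    for _ in range(trials):
        safety = [chosen_safety(perceived_lead, rng) for _ in range(n_teams)]
        scores = [rng.random() - 0.3 * s for s in safety]   # safety spending imposes a small performance drag
        winner = max(range(n_teams), key=scores.__getitem__)
        if rng.random() > safety[winner]:                   # risk comes from the winner's corner-cutting
            disasters += 1
    return disasters / trials

for lead in (0.9, 0.5, 0.1):  # comfortable lead, moderate lead, neck-and-neck
    print(f"perceived lead {lead:.1f}: P(catastrophe) ~ {p_catastrophe(perceived_lead=lead):.2f}")
```

The point the toy makes is just the qualitative one from the conversation: the closer the actors believe the race is, the less safety they buy, and the winner tends to be whoever cut the most corners.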
00:41:58
Speaker
Yeah, so the US and Taiwan is a good example of a potential security dilemma, because, for instance, the US could support Taiwan, and the clearer its support is, the more threatening that might be to Chinese interests. And in fact, that might even precipitate a conflict. That is, if the US were to come and say that it was giving absolutely its full support behind Taiwanese independence,
00:42:28
Speaker
that might actually precipitate a conflict, which, you know, there are funny dynamics here where doing that by the US could be credible precisely because it's precipitating a conflict potentially. But on the other hand, you might not want to precipitate a conflict. Yeah, so the sides have this kind of dilemma, which is why we call it a security dilemma in terms of doing things that seem to make them more secure, but might ratchet up
Geopolitical Tensions in AI Governance
00:42:58
Speaker
the conflict. And I think Taiwan is a place where we see that. And maybe you've brought it up because it's also a place that is so critical to AI supply chains. And why is it that it's so critical to AI supply chains?
00:43:11
Speaker
Well, in terms of the high-end chips, the fabricators are mostly based in Taiwan. So I think upwards of 90% of the cutting-edge AI chips are actually coming from Taiwan, this nationalist target of the Chinese government, maybe the most important and certainly one of the most important goals of the
00:43:34
Speaker
Chinese Communist Party is reunification with Taiwan. So, throwing all of this into the AI mix. Yeah, if you wanted to set up the world for conflict, you couldn't have done much better than to put a lot of very valuable chip production on Taiwan, I think. Yes, I think that's right. As if there wasn't enough reason to have conflict over Taiwan already, and we in fact have had conflicts over Taiwan already, it seems like in the future,
00:44:04
Speaker
part of the balance of power in the world could be related to activities on Taiwan. And so yet another reason potentially for conflict there.
00:44:14
Speaker
So if we talk about the security dilemma from the perspective of AI, and if you allow me to speculate a little bit here, is it at all plausible that countries might begin hiding their AI capabilities potentially so that they can use their advanced AI to develop even more advanced AI and gain a decisive strategic advantage over another country?
00:44:40
Speaker
Is it plausible that they will hide what could amount to military capabilities? Yes, of course. Of course, they've always tried to hide those things. And when we think back to projects like the Manhattan Project, they have tried to hide and we should expect them to continue to try to hide those things. And some really large scale projects are hard to hide.
00:45:07
Speaker
But actually in this area, there's probably quite a lot that can be done without it being at least easy to detect, although you can be sure that intelligence services are working on detecting it. And so it wouldn't be a situation like the space race in the Cold War, where you had superpowers trying to showcase their abilities and their power. I could see it going both ways in a sense.
00:45:32
Speaker
Yeah, absolutely. They'll want to showcase certain things, absolutely, and sometimes they will feel like they have a kind of social status interest in showcasing. As in Sputnik, you know, look at this, this thing, we can launch it right over your head. So there's no doubt that sometimes they will contemplate doing that.
00:45:56
Speaker
But there are also the other incentives. And I think it's hard to know which of those incentives will win out in particular cases. That's a fascinating social science question, Gus. I think it's very interesting. Any graduate students out there who would like to investigate that, I think it would be a great question; we should chat.
00:46:15
Speaker
Yeah, okay. I'll put your email in the description and people can contact you if they're interested. Okay, you have a chapter in a forthcoming textbook where you sketch out four kind of crucial considerations surrounding AI governance.
00:46:31
Speaker
And I think it will be fruitful for us to kind of run through them and get your thoughts. The first of your questions that you actually mentioned previously is whether AI can defend against other AI. So is AI able to protect you from the enemy's AI in a sense? Can one country use their AI to protect themselves from another country's AI?
00:46:54
Speaker
This is related to kind of a defense-offense balance and so on. So maybe you could explain that concept also. Well, so I haven't seen the Oppenheimer movie yet. Oddly, I keep being prevented. I'm sure I'll see it soon. But I understand there's a scene, which is also one that I think
00:47:15
Speaker
people are familiar with from the historical record, where Oppenheimer is testifying before a committee in the early days of the nuclear period, and he's asked, what is the technological solution to prevent somebody from smuggling a bomb into New York Harbor in a crate? And his answer is a screwdriver,
00:47:39
Speaker
which is to say there isn't a technical solution to that problem. We need a social solution. Yeah, so can technological developments defend us from technology? Well, sometimes they can.
00:47:56
Speaker
But sometimes it seems like they probably can't. And this gets to lots of interesting questions about, for instance, open sourcing. So we can ask ourselves, would open sourcing some of AI technology be helpful or harmful?
00:48:14
Speaker
And the answer is probably both to some degree, but how does it net out? And there are areas where we have open source. And in fact, in the cyber area, there are some things that have been open source that have been worked on by whole communities of people who have made those systems more robust, and that's been effective. But if we were to open source nuclear weapons ideas,
00:48:41
Speaker
or the science behind it, would that actually then lead to defenses against nuclear weapons? I think we can be skeptical there. So sometimes it does, and sometimes it doesn't. You mentioned, I guess I should say, the offense-defense balance is this idea. Well, actually, it's quite fraught, exactly how you define it. But it may be that when countries are fighting, sort of being on the offense is advantaged.
00:49:10
Speaker
And so sort of being the first mover is advantaged, or it may be that the reverse is true, that it's good to sort of wait for the adversary to come to you or something like that. And so that's a strategic parameter, which is going to affect the incentives that countries have to get into conflicts.
00:49:29
Speaker
in the age of AI potentially as well. Do you think it makes sense to talk about weapons having specific balances between their defensive and offensive capabilities?
AI's Strategic Offensive and Defensive Nature
00:49:43
Speaker
So perhaps we could claim that nuclear weapons are better at offense than they are at defense, for example.
00:49:51
Speaker
It's funny, because people often say the opposite about nuclear weapons: that actually, even though it seems like, as you say, they're better at offense, in fact the result of them is that you have mutually assured destruction, and so they lead to this defense-dominant sort of world. So I do think that points out that these things are in some sense hard to be rigorous about, or at least people haven't come up with a way of sort of
00:50:17
Speaker
ex ante thinking about a weapon system and really effectively coding whether it is offense or defense dominant. On the other hand, I think it's kind of a useful idea that helps us to think about what sort of world we're likely to be living in. So for instance, you might think,
00:50:37
Speaker
that a world that's more offense dominant in the past has led to kind of consolidation and sort of larger state spaces, internal spaces, precisely because there's been conflict. Those conflicts have happened. They've been resolved in favor of one side or the other. That's led to a larger state. And then there's kind of a peace within that state.
00:51:00
Speaker
which in some cases might be more beneficial. On the other hand, people are of course also pointing out that having offense dominant technologies may lead to more conflict and that's on the negative side.
00:51:16
Speaker
So, yes, I think it's sort of a contested and fraught area. It's one that within the social sciences was much more active some decades ago, not that active these days. But I think it's still sort of interesting to think about, I think, productive and fruitful.
00:51:37
Speaker
Do you dare to make any best guesses about what we really want to know, which is, is AI as a whole more, does it lend itself better to offense than to defense, for example? The first thing to say is that we don't know. We don't know exactly how this is going to go and how the technology is going to develop. But if I had to guess, I would say, so the question is, can AI defend us against AI?
00:52:04
Speaker
I would say, very speculatively: not without really interfering with privacy. So probably, unless we're willing to change some societal parameters, like how invasive the state is in our lives, at a guess,
00:52:33
Speaker
we won't be able to use AI to defend against the sorts of offense dominant things that AI will enable.
00:52:43
Speaker
But as I say, it's really hard to say for sure, or with any certainty at all. And here you might be thinking of, for example, continually scanning what software is running on people's computers to prevent them from running dangerous AI. Exactly. Scanning what everybody's doing, maybe at some point in the future, what everybody's thinking, all these things.
00:53:04
Speaker
So without becoming very invasive, verging on totalitarian, we can't really... AI is offense dominant, you would say.
00:53:14
Speaker
No, I wouldn't put it that strongly. I think this is very, very speculative. And I think that that's sort of a guess, but I wouldn't put a lot of credence in that particular guess. Maybe we could move on to your next crucial question. And this one is a bit complex. It's about
00:53:34
Speaker
thinking about the failure rate of a technology, in this case AI, relative to the risk of that technology. And here we're thinking about international agreements. So perhaps a starting point is to talk about the downsides of having an international treaty that does not allow countries to have any failures with the technology.
00:53:59
Speaker
So just imagine that: what are the downsides of presenting a document like that and trying to get people to sign it? Yeah, I think one way to get a handle on this is maybe to think, again, about the nuclear analogy and to think about two different cases. So, you know, one of the things that we have or have had are agreements on the number of deployed nuclear weapons that the US and Russia can
00:54:27
Speaker
can have, and at some point it was 4,000, and then it went to 1,550. So that's an agreement. And then a question is, well, what would the world look like? How different would it be if one of the sides were to cheat on this agreement? You know, suppose one side built an extra 10 nuclear weapons.
00:54:47
Speaker
Would that change the balance of power? No, not very much, right? Not very much. Both sides have supposedly secure second strike. And so building a few more nuclear weapons isn't really fundamentally changing strategic parameters, probably. Now contrast that case to the case where we have a worldwide agreement to ban nuclear weapons.
00:55:16
Speaker
So if there are zero nuclear weapons, and people have taken that idea seriously, there's been a lot of thinking about exactly what a regime would look like to get to zero nuclear weapons, and they concentrate on preventing any individual state from breaking out, so-called, and developing a weapon. What could you do in order to prevent it, to do something before they're actually able to do it? Obviously, if you're in the world
00:55:43
Speaker
of zero nuclear weapons, one state managing to build a few probably has a huge effect on the balance of power. So the sort of agreement that you would need to design
00:55:57
Speaker
in that kind of a world is very different from the sort of agreement that you need to design in the world that we have, where it's an agreement just to limit to 1,550 deployed nuclear weapons. And so that's the point, that in some cases, we just can't really accept even a single failure.
00:56:21
Speaker
And in other cases, if the agreement failed to some degree, it wouldn't be such a big deal. And how we would design institutions for the one case or the other case is really very different. Do you think we can allow failures with AI?
00:56:36
Speaker
I think it depends. I mean, it depends on the use case and the sort of risk. I think in a lot of cases, absolutely. We can allow a certain degree of failure because they're trade-offs, right? So we shouldn't give up privacy rights.
00:56:52
Speaker
to prevent misuse of privacy rights, for instance. That would be particularly silly. So I think there are areas where we can accept a certain failure rate, but I think there are some areas too where we get into the catastrophic risk areas where probably we want the failure rate to be very, very, very, very low. And I think that involves a different sort of agreement.
00:57:17
Speaker
Yeah, we can talk about different stakes for, say, a recommendation algorithm on the one hand, and then potentially an agentic superintelligent system way on the other hand, where
00:57:31
Speaker
some people will claim that with a smarter-than-human agentic system, we might not be able to allow any failure rate. Do you think we can succeed in building agreements and regulations that involve no possibility of failure for each of the states?
00:57:51
Speaker
It just depends. I mean, I don't think we know all the strategic parameters. So if we're in a world where an answer to the first question that we talked about is that AI just can't defend against other dangerous AI, right? Just can't do it. That's one of the things that could suggest you really need to have kind of a zero failure rate, for instance. And then can you actually do that?
00:58:17
Speaker
It's extremely difficult. I think, you know, again, it depends on other, you know, the sort of plausibility of it depends on the answer to some other questions. So if it turns out, for instance, that the things that give states, let's say military advantage are the same things or involve taking on more risk of misaligned AI, for instance,
00:58:47
Speaker
So in other words, if risk taking and augmenting power, throughout the course of technological development, are always kind of wedded together,
00:59:01
Speaker
then that is not a very safe world and will make it very hard to get an agreement of the sort that we would need in that context.
Verification and Compliance in AI Agreements
00:59:11
Speaker
And we haven't really faced that before, right? Because with nuclear weapons, you know, again, closely related to the point we were making a moment ago, with nuclear weapons, there's decreasing marginal returns probably of building another nuclear weapon. So at some point, you know, you can blow up the world.
00:59:27
Speaker
People who study nuclear politics would be upset at me for putting it in kind of such a cavalier way, but let's just put it in the way that makes intuitive sense to all of us. If you can blow up the world three times, then being able to blow it up again once or twice, it doesn't really change things that much. So you have less incentive once you have secure second strike at any rate to
00:59:48
Speaker
to continue to invest in the technology. But it may not be like that with AI, because it might be the case that at each point, you can have very significant new capabilities if you continue to invest in the technology. And so you have a continual incentive to do that. And if, in addition to that continual incentive to invest in the technology, there's a continual incentive to take extreme risks, of course, that's a really
01:00:18
Speaker
bad world. And then if it turns out, in addition, that we really need to have an agreement with a zero failure rate in that world, because the technology can't defend against itself, well, that's not good. Maybe if we can settle this question, we can move on to the next one, which is about verification.
01:00:39
Speaker
So we're talking about which agreements governments could make. And a crucial factor there is what one government can verify about the actions or the behaviors of the other government. What options do we have available for verifying whether states are compliant with an agreement? Perhaps we could talk about the nuclear case and then maybe the AI case.
01:01:06
Speaker
Yeah, sure. In the nuclear case, as we've already said, when it comes to weapons actually being used or even tested, now we have technology that is really quite reliable in detecting that. Again, this is a case where because we have that information,
01:01:23
Speaker
we can have an agreement in which we condition on that information. And that can be very helpful. And that is what facilitates things like mutually assured destruction, which supposedly is stabilizing and at least seems to have helped us get through and survive. Though, of course, we should remember that at the beginning of the nuclear period, many, many people thought that humanity would not survive. Bertrand Russell wrote a book. The title of the book was Will Man Survive?
01:01:50
Speaker
And I think his probable answer was no at that time. The ability to verify, to have the information that an adversary is actually complying with an agreement can be extremely important. And this is also a technical challenge. It was a technical challenge when it comes to detecting nuclear tests.
01:02:13
Speaker
And it's a technical challenge today when it comes to detecting what other actors are doing with AI technologies. But it's not exclusively a technical matter because it may be that you need to put some things in place in an agreement in order to facilitate verification. So, for instance, in the Open Skies Treaty, that's a treaty where the sides are allowing overflights in order to see, in fact, what each other are doing.
01:02:43
Speaker
Or sides rely on what are called national technical means, and there are agreements about what can and can't be covered up in order to facilitate the use of those national technical means. So I think we need a unified technical and social strategy here. And we don't actually know
01:03:06
Speaker
what sorts of verification techniques we're going to be able to develop in this area, what they will require, and how invasive the technical procedures that go along with them will be. That question matters because countries don't like revealing what they're doing in their national security establishments, so they're very unwilling to do so. If it requires really invasive techniques to verify what another side is doing, that's a problem.
01:03:35
Speaker
I'll just say one thing, because to me it's a hopeful point. It's a point that others have made: what we need, in a way, is something like a dog. Because when a dog is sniffing a bag at an airport,
01:03:52
Speaker
it's able to detect, okay, is this a dangerous thing? But the nice thing about the dog is that it doesn't give you other information. It tells you just one thing: dangerous or not dangerous. It doesn't tell you whether the person is going to Florida or is cheating on their spouse or anything like that.
01:04:12
Speaker
And that's exactly what we need. We need to build that kind of detector: sophisticated, but not so sophisticated that it gives us more information than we actually want, so that it's acceptable to these national security establishments.
01:04:27
Speaker
Yeah, okay, I get that. So do you have any idea how such a dog might work? How can we make a technical dog that tells us exactly what AI work is happening in an adversary's companies, but no more?
01:04:45
Speaker
Is there anything promising on the technical side there? Yeah, that's a great question, and a difficult one. The first thing I'll say is that there are probably others, I hope there are others, who can give a better answer than I can. But probably we can have some hardware mechanisms on chips that allow us to monitor how those chips are being used.
01:05:10
Speaker
At a guess, those sorts of techniques will have to be combined with data governance of some sort, because when training is started on chips, you might be asking yourself, well, how much compute is embodied in this training run?
01:05:27
Speaker
But that's actually hard to say unless you know something about the starting point of the training run, which is in part related to the data that's being used. So most likely, I think we're going to need both some form of data governance and some hardware mechanisms in order to make these ideas really work.
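To make that compute-accounting point concrete, here is a minimal sketch in Python of how a hardware-plus-data-governance scheme might compare two estimates of the compute embodied in a training run. The telemetry fields, the declared model figures, and the rough rule of about 6 FLOPs per parameter per training token are all illustrative assumptions, not a description of any real on-chip mechanism.

```python
# Illustrative sketch only: no real chip exposes exactly this telemetry,
# and the 6 * N * D rule is only a rough approximation for dense transformers.

def flops_from_telemetry(num_chips, peak_flops_per_chip, utilization, seconds):
    """Hardware-side estimate: what the monitored chips report they did."""
    return num_chips * peak_flops_per_chip * utilization * seconds

def flops_from_declaration(parameters, training_tokens):
    """Data-governance-side estimate: compute implied by the declared
    model size and dataset (roughly 6 FLOPs per parameter per token)."""
    return 6 * parameters * training_tokens

# Hypothetical declared training run (all numbers assumed for illustration).
hardware_estimate = flops_from_telemetry(
    num_chips=25_000,
    peak_flops_per_chip=1e15,   # ~1 petaFLOP/s per accelerator (assumed)
    utilization=0.4,            # assumed average utilization
    seconds=120 * 24 * 3600,    # a 120-day run
)
declared_estimate = flops_from_declaration(
    parameters=1e12,            # declared 1T-parameter model
    training_tokens=2e13,       # declared 20T training tokens
)

# A verifier might flag a run where the two estimates diverge sharply,
# for example because it secretly started from an undeclared checkpoint.
ratio = hardware_estimate / declared_estimate
print(f"hardware: {hardware_estimate:.2e} FLOP, "
      f"declared: {declared_estimate:.2e} FLOP, ratio: {ratio:.2f}")
```

The point of the sketch is only that the hardware side and the data-governance side check each other: neither number is very meaningful on its own.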
01:05:54
Speaker
People have started to flesh out what a general scheme could look like, but I think many technical challenges remain. I don't think we have, or are really close to having, the technical solutions we might need for this.
01:06:09
Speaker
One thing that seems to make it inherently difficult is that you can't really prove a negative. It's difficult to prove that you don't have additional computing hardware or additional nuclear warheads. And so that's always the challenge that's looming, as I see it. Yeah, exactly. I mean, in the nuclear area, and also in the chemical weapons area, there are often agreements to search declared sites,
01:06:39
Speaker
but what about the rest of the territory in the country? Obviously other things could be happening there, and yet countries have not been willing to make everything in their country available for search and examination. So there's a tension there: whether we can have detection techniques that are sufficient for countries to really trust that their adversaries aren't violating
01:07:06
Speaker
conventions, but on the other hand, don't give adversaries so much information that they are simply not willing to enter into the agreement in the first place. I think that's the very severe technical challenge that we face on the military side of governing AI between the major powers.
01:07:28
Speaker
So, as I say, that's really hard. But we have lots of opportunity to govern when it comes to relations between other powers, which are largely part of a security environment created by the major powers, and also in the whole civilian space, where I think there are also a lot of risks we need to deal with.
01:07:50
Speaker
Yeah, maybe this flows nicely into your fourth crucial question, which is whether it's the case that a small club of aligned states can control the inputs into the technology, in this case, AI. So we're talking about whether, say, the US and its allies can control the supply chain of computing hardware and potentially also algorithms and data. What do you think is the answer to this question as it stands right now?
01:08:20
Speaker
Well, I listed this as a question that we didn't know the answer to, so it's hard to answer. But I would just say that I think we do know that the supply chain is narrow and that a small club of aligned states can do quite a bit
01:08:42
Speaker
when it comes to controlling access to the latest computing technologies. That much I think we have a pretty good idea of. What we don't know is how long that will be the case. We don't really know yet, because we haven't seen specific regulation to deal with cloud computing, so we don't know how well those things are going to work. And we don't know what risks
01:09:08
Speaker
are associated with existing compute and non-frontier compute or perhaps one generation old compute, something like that. So that's what we really don't know the answer to.
01:09:24
Speaker
I think many people are really hopeful that the scaling hypothesis will hold and that it will turn out that really advanced forms of AI need absolutely enormous amounts of compute, so enormous that controlling just the cutting edge is sufficient. People have different intuitions about this. Some people have more finely calibrated intuitions than I do, but that's just not my intuition. I don't believe it.
01:09:54
Speaker
With older generations of compute, if you're really willing to spend a little bit more and take a little bit more time in order to keep up, then yes, it will require some technical solutions, and yes, interconnect speeds are a big deal. But nevertheless, if you're talking about really motivated actors: China likes to talk about "two bombs, one satellite." That is, in spite of all the restrictions from the West
01:10:20
Speaker
on developing a nuclear weapon, a thermonuclear weapon, and a satellite, it developed the two bombs and the satellite. So I'm not sure. You know, many people are skeptical about recreating a chip supply chain, another chip supply chain, in the medium term.
01:10:41
Speaker
And they may well be right about that. But on the other hand, using existing compute and doing some technical modification to make that existing compute work, again, I don't know. But intuitively, it seems to me like that's a source of risk. There's a question here of how quickly you can get from, say, the cutting-edge system of 2022:
01:11:11
Speaker
can you train such a system on much cheaper hardware in 2032, for example? And if that's the case, and if we believe that today's systems are on the verge of becoming dangerous, then it seems that we can't really control the necessary ingredients for AI for very long. Because at a certain point, with enough advances in computing hardware and so on,
01:11:38
Speaker
training these systems will potentially be available even to individuals. You can get different estimates of this, but I don't think it's a bad estimate that within ten years you would be able to train a system like GPT-4 on something that's available to a rich individual.
01:11:58
Speaker
Exactly. And yet it also comes back to the first question we had: can we actually use advanced technology to defend us against these other advances in technology? If it turns out that whatever dangerous thing you can do on your laptop can be well defended against by what the latest supercomputer can do in ten years, well, that's a much safer world. But we just don't know whether that's going to be the case.
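As a purely back-of-the-envelope illustration of that ten-year estimate, here is a small Python sketch of how the cost of training a fixed-capability system might fall as hardware and algorithms improve. The starting cost and both improvement rates are illustrative assumptions, not measured values; the point is only how quickly the two factors compound.

```python
# Back-of-the-envelope sketch: cost of training a fixed-capability system
# over time. All rates below are illustrative assumptions, not data.

def cost_after(years, initial_cost_usd,
               hardware_doubling_years=2.5,   # assumed price-performance doubling time
               algo_doubling_years=1.0):      # assumed doubling time of algorithmic efficiency
    hardware_gain = 2 ** (years / hardware_doubling_years)
    algo_gain = 2 ** (years / algo_doubling_years)
    return initial_cost_usd / (hardware_gain * algo_gain)

# Suppose a GPT-4-class training run costs on the order of $100M today (assumed).
for years in (0, 5, 10):
    print(years, f"${cost_after(years, 100e6):,.0f}")
```

Under these assumed rates the cost falls by a factor of roughly 16,000 over ten years, from $100 million to a few thousand dollars. Different assumptions move the endpoint a lot, but they show the kind of compounding behind the "rich individual" estimate.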
01:12:22
Speaker
I know you mentioned this as an open question, but do you have any leanings? Do we have any indications from the world about how this might turn out? Which world do you think we're in? I mean, it's so much speculation. I go back to thinking that technologies are going to become so capable that, if we just project forward the world of today, with all the freedoms that we have today
01:12:52
Speaker
and all the access to information that we have today, but with ten years of algorithmic progress, something like that, then I think individuals in that world will probably be able to do some things that we
01:13:13
Speaker
will find hard to defend against in that world. And so at a guess, I think we're going to have to make some choices about what individuals have access to and the ways that they can be monitored. But I'm still hopeful that we can do things like create the dog that we were talking about before. So we monitor for certain things, but a whole range of other things we don't monitor for.
01:13:41
Speaker
at all, but I think that this is going to occasion all sorts of broad societal conversations. One thing we talked about earlier was this question of hacking and cybersecurity at the top companies.
01:13:55
Speaker
And this is interesting in this context too, because even if we have the treaties and the verification system, and we know that, for example, no training runs larger than X are occurring outside of these companies, you could still imagine rogue groups or rogue states hacking into top companies and simply stealing the model.
01:14:17
Speaker
Once a model is trained, it doesn't take up that much space, and current labs and companies do not have military-grade security yet. So this seems like a live possibility, that we might see leaks or hacks getting these models out into the world. Does this undermine all the talk of treaties and agreements and verification that we've had?
01:14:44
Speaker
It's a great question. I guess there are a few points to make. The first is that you hear this view in DC, where some tech companies have said that they don't want to work on military applications, or they don't want to work with militaries, things like that.
International Governance Structures for AI
01:15:01
Speaker
And the view of some in the national security establishment is: well, actually, what those companies are doing is working with China and everybody else but not with the US government, because their systems are compromised and hacked by these other governments, though maybe not by the US government. Or at least that's the perception people have.
01:15:21
Speaker
And so is that really the right approach? Without a doubt, there's a need for military-grade cybersecurity, as you've said. And even if you have it, it's just not clear that it really
01:15:36
Speaker
works in the end, that these systems can't be stolen. I think there are two points. One, maybe a tiny bit of a tangent, but you've been asking about intuitions, so here's another intuition for you: I think we're going to be coming back to a world where some of the advanced systems are really going to be using more inference compute.
01:16:02
Speaker
It'll still be a tiny fraction of overall training compute, but there will be another bottleneck when it comes to inference compute. And I think chain-of-thought reasoning is probably an example of that. We were talking earlier about how we don't really know how these systems work, but probably we can build things on top of current systems that make them, if you will, think harder. With Mathematica, there used to be a command where you could say Simplify, and then there was another command,
01:16:31
Speaker
Simplify Harder. I always loved that: you were just telling it, think harder, work a little more. Yeah, and you can do this with current language models too. You might say, think step by step through this problem. And your conclusion from this would be that these systems will begin using more compute when they're running, when they're actually solving our problems, because of that?
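As a rough illustration of spending extra inference compute in this way, here is a short Python sketch of prompting a model to think step by step, sampling several reasoning chains, and taking a majority vote over the final answers (a pattern often called self-consistency). The query_model function is a hypothetical stand-in for whatever model API or local model you happen to use; nothing here describes a specific product.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model (API or local)."""
    raise NotImplementedError("wire this up to your model of choice")

def answer_directly(question: str) -> str:
    # One forward pass: minimal inference compute.
    return query_model(question)

def answer_with_more_thinking(question: str, samples: int = 5) -> str:
    # Spend more inference compute: ask for step-by-step reasoning several
    # times and return the most common final answer (majority vote).
    prompt = f"{question}\nThink step by step, then give a final answer on the last line."
    finals = [query_model(prompt).strip().splitlines()[-1] for _ in range(samples)]
    return Counter(finals).most_common(1)[0][0]
```

The trade being sketched is exactly the one under discussion: the same trained weights give better answers when you let them spend more compute at run time.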
01:16:52
Speaker
Yeah, I think that's right. And really, the reason I draw that conclusion is less these speculative things, although I think that's likely, and more what we're seeing in the strategic space. Because the branching factor in any significantly complicated social situation is such that just doing all the training in advance for the strategic situation isn't very effective. And we see that in Go.
01:17:22
Speaker
The most advanced Go-playing systems don't do that well unless they're thinking about, okay, what is the strategic situation on the board right now? What does that look like a few moves ahead? Now, that alone wasn't enough. It was enough for chess, because chess had a really simple heuristic for understanding how good the state of the board is for you at any point in time. So you need
01:17:45
Speaker
these other, more diffuse approaches to AI that have been so incredibly successful at figuring out, for something as complex as Go, whether a given state is reasonably good for you or not. On the other hand, you also need these compute-intensive things, that is, inference-compute-intensive things, to think about, okay, what is the strategic situation in this particular context,
01:18:13
Speaker
which is what some of these folks refer to as search. So it seems to me, based both on loose theory, if you will, in my case, and on some empirical evidence from the strategic systems being developed for more complex multiplayer, non-zero-sum environments, that in order to really be more effective in those environments, we're going to need more inference compute.
01:18:42
Speaker
So I think, again, this gets back to the question of what you can do just by stealing a system and having it on your PC or something like that. I think we're going to be in a world where, in order to be most effective in strategic contexts, we need more inference compute. Again, it's not really my direct area of expertise. I mean, game theory has been a long-time interest of mine, but on the compute side, I think
01:19:09
Speaker
it's more speculative. But that's the way it seems to me. Okay, so that was a bit of a digression, but now I want to get back to your specific question, which I would actually probably answer in the opposite way from what you suggested. So do we need international governance,
01:19:29
Speaker
given that these systems can be stolen? I would say yes, we need it precisely because the systems can be stolen, in some sense, because we also need to worry about the development stage. If the systems can be stolen and then they can be easily used, if that's true,
01:19:49
Speaker
well, then we also need things like licensing back at the development stage. And if we're going to do that internationally, that means some sort of international regime to make it happen. So I guess the point about copying leads me in the opposite direction: we need more governance, and at earlier stages of the technology lifecycle.
01:20:14
Speaker
Might this be easier to implement as an agreement between states? It seems that each state would be interested in its own companies having excellent cybersecurity, because then their very important and valuable, perhaps even militarily valuable, secrets wouldn't be stolen. So intuitively, it seems like something that would be easier for states to sign up for.
01:20:40
Speaker
Absolutely. You would think cybersecurity would be something they could sign up for. I think it's possible that there are some trade-offs in terms of effectiveness within the company. In some cases, really significant cybersecurity has costs for the people working in the company. They suddenly have to start doing things in secure rooms and all sorts of things.
01:21:03
Speaker
You know, a friend of mine who works at the Defense Department has a really hard time picking up his kids after school, because he tends to work late, and if he picks up his kids, that means he has to leave the secure environment and then come back to it, which takes a lot of time. So there are very practical costs that go along with these sorts of security measures. But I agree with you that, from the point of view of a state, these things might be desirable.
01:21:31
Speaker
Okay. We've been talking about various problems and various opportunities for how we might regulate AI, and I think we should get to your more specific proposal for how we might do this, which you've sketched out in three parts: an international body that sets standards for AI, then jurisdictional certification, which you might explain,
01:21:57
Speaker
and then implementation through national or domestic laws. Perhaps you could talk about how the system might work and what the advantages of setting up AI governance like this would be. Great. This relates to a white paper that I've been working on with a whole host of other folks. I think 12 people are on the paper, so credit certainly goes to many others.
01:22:22
Speaker
Yeah, so what we do in this paper is sketch out an approach to international civilian AI governance. And as I think we've now covered, we think there is just more potential in the near term for doing things on the civilian side.
01:22:38
Speaker
But even though we're talking about civilian governance, there's still a security aspect of things. So one thing you might ask yourself is, is the security apparatus in the United States going to allow an international organization to enter the offices of OpenAI and see everything they're doing and how they're training things? And even if they did,
01:22:59
Speaker
would you actually want that, given the proliferation implications? I mean, the IAEA is conventionally believed to have all sorts of spies in it, for instance. Not that it gets to go into the national security establishments of the club of nuclear powers, but it does get to see what everybody else is doing in the nuclear area. So that was one of the
01:23:21
Speaker
motivating ideas for the framework that we developed. We thought the answer was probably no, it wouldn't be allowed, and you wouldn't actually want it. So what else could we do in order to regulate civilian AI? We're interested in something that we think is similar to some of the other regimes that are out there. The International Civil Aviation Organization, the International Maritime Organization, and the Financial Action Task Force are the three we look at primarily, but I think there are some other
01:23:51
Speaker
areas too. What all three of these have in common is that they are not auditing firms within countries. They are auditing jurisdictions to see if those jurisdictions have the appropriate regulation and in some cases to see if they're actually enforcing the right regulation. That's the approach that we've also taken here.
01:24:16
Speaker
And we call it jurisdictional certification. Our suggestion is that you have an international organization that has talked to everybody, all of the other standards organizations, domestic standards bodies, and experts around the world, and has reached consensus on a kind of minimal set of international standards that we think should be applied everywhere globally.
01:24:45
Speaker
And again, that's not every standard. Jurisdictions should be able to have different laws on lots and lots of things when it comes to regulating AI. But once that minimal set of standards is agreed, the organization can audit jurisdictions to see whether they're actually putting those standards into law and are effective at making them a reality in terms of outcomes.
01:25:12
Speaker
Any international regime needs some teeth, you might say some reasons for compliance. And our idea is, well, we could tie this certification to the trade regime in two ways, both in terms of imports and in terms of exports. This relates a little bit to what's done in some of these other areas. So in the case of AI,
01:25:40
Speaker
countries around the world might say: we're not going to import AI technology, or products that use AI, from a jurisdiction that doesn't have certification from this international standard-setting organization.
01:25:59
Speaker
I think that would provide quite a strong incentive for firms within those jurisdictions to pressure their own governments to adopt the international standards. And that's exactly what we see with, for instance, the Financial Action Task Force: it has what's called a black list and a grey list, and when it puts a
01:26:26
Speaker
jurisdiction on one of those lists, that tends to raise costs for financial institutions within the jurisdiction, and they in turn put pressure on their own governments to get off those lists. So you can imagine something similar happening here. Or similarly, in the case of ICAO, the International Civil Aviation Organization, the FAA in the United States has the ability to prevent
01:26:53
Speaker
flights from entering US airspace from any jurisdiction that is in violation of ICAO rules and standards. So again, there are similar things already going on in these organizations
01:27:12
Speaker
that we think we can draw from when it comes to AI. And similarly, on the export side, countries could say, in a kind of multilateral export regime, that they're not going to export the inputs to AI technologies to jurisdictions that don't have certification from the international standard-setting organization. So that's roughly how we think it could work.
01:27:37
Speaker
And as I say, I don't think it solves all problems, but it's, to us, an interesting potential institutional model to consider.
01:27:48
Speaker
Yeah, and you could see a situation where one country, say Denmark, refuses to sign up for these standards. Then, potentially, AI companies within Denmark would pressure the government to sign up for the standards so they could export their products and earn money on the international market.
01:28:08
Speaker
So you could see the incentives turning the other way. As opposed to the Danish government and the Danish AI companies being aligned in wanting to push ahead as quickly as possible without being regulated in any way, there would now be some incentive to sign up for these standards. So it's an interesting proposal, I think.
01:28:32
Speaker
That's great, Gus. We're trying to build support for it one person at a time, so we're really glad to have you on board, absolutely. I think we should run through a list of objections to the whole project of AI governance. The common theme here is just skepticism about the motives of the institutions or the actors involved.
01:28:56
Speaker
If we start with the governments involved, maybe we set up an international organization and we use this organization to monitor different AI companies in different countries.
01:29:08
Speaker
Couldn't one government simply gain information about what companies in other countries are doing and use that information to advance its own AI capabilities? You could see the monitoring process involving gaining information that would be useful for accelerating your own
01:29:29
Speaker
AI progress. Absolutely. I think that's really the right worry to have. And I think that the frontier AI states will have exactly that worry. And so we really need to consider it if we're going to get buy-in from some of those key actors. On the other hand, I guess I would say that if we feel like we want some form of international AI governance, the version where you have an international organization that's actually looking at firms, that seems to
01:29:58
Speaker
have that proliferation aspect in a much, much stronger sense. I think the main proliferation concern with the ICAO-style, jurisdictional certification model we've proposed comes from the knowledge that the regulator would need in order to set standards.
01:30:24
Speaker
And yes, I think that it's entirely possible that the regulator would need to know things that some countries around the world would consider harmful forms of proliferation. But I think we shouldn't sort of assume that that's always the case. I think it's likely to be the case sometimes for some standards, but isn't sort of
01:30:49
Speaker
broadly across the board likely to be the case. And I think there are some models also for sort of dealing with kinds of sensitive information that, for instance, a government or a firm might communicate to the international organization. So one model, for instance, is the IAEA after it decided that it was going to take information from state intelligence services.
01:31:16
Speaker
A few decades ago, it had thought, no, that's dangerous to do, because things could be revealed selectively for political gain, and that would be a bad thing. But then it realized, well, there are a lot of things we don't know that intelligence services do know, so we want to be able to take on some of that information. And it adopted a set of rules and approaches for taking on that information and trying to really
01:31:42
Speaker
keep it secret. So, for instance, states have the ability to go and talk to the Director General directly, with maybe one other person in the room, if they don't want to reveal information to others. The specifics here will be very consequential and important. But I wouldn't rule out the approach in advance, because I think we have models of it in other contexts, and there are ways of mitigating, hopefully,
01:32:12
Speaker
if not fully, some of the issues. There's even more skepticism about the motives of the governments involved here. You might say different governments have their own motives and they might pretend to care about AI safety, but really they care about, say, national power and specifically the power balance between them and another country. The US might think, let's try to slow down China by implementing some safety measures.
01:32:41
Speaker
And China might think, let's try to catch up to the US by implementing safety measures. In some sense, this is a fully general problem that we can't really solve. But how can we assess whether governments are being sincere in their efforts to regulate AI?
01:33:01
Speaker
Yeah, I mean, I think that's a great question. I think you're absolutely right that we shouldn't expect, you know, we talked about zero failure rates, and I don't think this is necessarily a model with a zero failure rate. I don't. I think it ameliorates some issues.
01:33:17
Speaker
But if we think the civilian actors have incentives to do things where even one failure is catastrophic, then probably we need to expand, accordion-like, what the international regime is doing. I just don't see the potential for expanding it that much right now. In the future, it's possible there would be more appetite
01:33:43
Speaker
for that sort of thing. Yeah, so maybe it's also worth mentioning that allowing enforcement to happen at the domestic level, at least enforcement on firms, as opposed to
01:33:58
Speaker
the incentives on jurisdictions, is, I think, important here, because less trust is required going in. These concerns about whether some other jurisdiction will be doing X or Y are, first of all, more alive in the security space than in the civilian space, but they exist in the civilian space too.
01:34:26
Speaker
On the other hand, if you're allowing the enforcement to happen at the domestic level, then that gives more flexibility to domestic regulators and to the domestic government as a whole. If they actually do feel like regulations are specifically targeting them and they're having a sort of negative effect on their international security,
01:34:49
Speaker
they would have the ability simply not to enforce them. Now, that could have implications in terms of international markets, but maybe that's a way to strike the balance: the regulations are not so scary, from the point of view of regulators and countries, that they're unwilling to enter into the agreement in the first place,
01:35:15
Speaker
but they are also significant enough that there are incentives for compliance. But this is all still hopeful. I think we're still
01:35:24
Speaker
trying to figure out what the best approaches to international governance are. And I mean, there's also the question of how countries that have been looking at their own AI companies come to the international governance organization and say, we have this data, we found out these things.
01:35:45
Speaker
How do you know whether you can trust them? How do the other countries that are members of the standard-setting body know whether they can trust the information coming from domestic regulators? This maybe ties back to the question of verification that we talked about.
01:36:01
Speaker
Maybe there's some way to present something objective that can't be tampered with, some technical mechanism by which you can present information that can be proven to be true. So that's a great question. We can look to some of the models we have from these other regimes and industries. The Financial Action Task Force has the ability to request all sorts of information from
01:36:31
Speaker
jurisdictions, and jurisdictions under the regime are obligated to comply. So there again is this balancing question of what sorts of information they would be able to request and whether it would be proliferating; that would all have to be worked out. Those would be the details of the monitoring. But probably there would be a set of variables they could ask about that countries would be required to disclose.
01:36:59
Speaker
And that would provide some level of monitoring. And then, just as in the case of the IAEA, there might be all sorts of monitoring that, let's say, national intelligence services are also doing, and they could provide further information, just as they do to the IAEA. They could provide further information to the
01:37:19
Speaker
proposed international organization in this context. So there are other sorts of information that you can make credible by providing the backing for it. It's similar to presenting intelligence information: you might disclose your sources or you might not. And similarly here, maybe it's information you have about an algorithmic advancement, and you know
01:37:45
Speaker
that the rate of algorithmic progress has actually shot up, and that implies a different sort of regulation, or maybe a different bar for what sorts of systems require licensing, something like that. And you could say, well, we think we actually need scrutiny of smaller systems than we did before, because of this change in what we now understand
Industry Motives and Regulation Concerns
01:38:15
Speaker
about algorithmic progress. So that's a possibility. Maybe that would be credible, and maybe it wouldn't, because it might seem like putting up some sort of barrier against some other actor.
01:38:29
Speaker
Maybe in order to make it credible, a country could say, well, here's the evidence, here's how we know it. And of course, there would be a trade-off there: they wouldn't necessarily want to do that, because perhaps that could be proliferating. So there would be some trade-offs to be weighed, and exactly how they're weighed would be case by case, but it's certainly a set of difficult problems.
01:38:52
Speaker
There's also skepticism about the motives of the companies involved. So right now we have the top AI companies in the US basically calling for regulation and being kind of open to regulation. We discussed earlier whether this is an attempt to front run the process, to make sure that they don't get regulations that they consider too strict, basically.
01:39:18
Speaker
But another interpretation that I've heard is that this is an attempt to capture the whole regulatory process and make sure that the rules are favorable to them and potentially unfavorable to their competitors.
01:39:37
Speaker
You could see how it might be in their interest to set up a system of rules that are difficult and expensive to comply with, such that new AI startups can't afford the legal burden of compliance, and they thereby cement their market dominance.
01:39:54
Speaker
I think these are exactly the right questions to ask. I think we should have these suspicions of industry all the time, and I think we should be wary of centralizing economic power in a small set of companies. I think there are real costs to doing that.
01:40:13
Speaker
And so I think it's the right set of questions. But I want to say that even paranoids have enemies. That is, even if there might be some downsides here, there are also potentially some
01:40:31
Speaker
reasons to favor some of these regulations. People have been really concerned about licensing in this way, that the regulatory burden of licensing would be onerous, and I think that's the right question to ask. On the other hand, ultimately,
01:40:47
Speaker
for some systems we probably want go/no-go decisions, just like we have for buildings. Can you build a building? Well, it's got to stand up, right? And even before you get to build it, you have to go through all sorts of engineering sign-offs and other sign-offs. Are there financial costs to that? Yes, there are. But we think it's a good idea, and we still think we should have those things.
01:41:15
Speaker
Somewhat more speculatively, I'm a little skeptical that the regulatory burden would be the kind of major hurdle that makes it hard for smaller players to get into things. I mean, certainly if we're talking about training systems from scratch, the cost of training a model is, I think, heading toward
01:41:41
Speaker
a billion dollars to train a cutting-edge LLM. So that's a pretty significant hurdle right there. I sure hope the regulatory costs are less than a billion dollars. But again, I think these issues around centralizing power are really important ones, and I'm glad people are raising them.
01:41:59
Speaker
Yeah, and maybe just a final worry here. This is, again, skepticism about the whole project of AI governance, but the worry is that we will over-regulate, basically, that we will slow down AI progress in a way that, when we look back on it, we will wish we hadn't. We've all heard the projections: we could see a revolution in medicine, we could see a
01:42:27
Speaker
revolution in how many industries function, and we could see incredible productivity gains that could help all of us. How do we avoid AI turning into what happened with nuclear power, where it is now so regulated that it's very difficult to build a new power plant,
01:42:48
Speaker
even though nuclear power is pretty safe compared to other forms of energy generation? Basically, the question is, how do we regulate in a smart way that avoids overregulation?
01:43:02
Speaker
I think we have to be aware that this is a danger. We have to be careful of it all around. If we adopt regulation and it ends up looking the way you describe the nuclear power industry, then that's a real worry. But I don't think we're there, or near there, right now. And I think it's unlikely that we will
01:43:27
Speaker
get all the way there, given the sensitivity to exactly this issue in governments around the world. And I think just a broader point about technological progress, I don't think it's controversial to say that it seems to be speeding up. Not that it necessarily is, but I think it seems that way, and it seems like there are some drivers of it continuing to speed up.
01:43:51
Speaker
People have noticed this. I mean, Rachel Carson said the problem in the modern world is that there is no time, that is, no time to develop appropriate regulations. So if we gave ourselves a little more time to figure things out, I think that would also have some benefits, in addition to the real costs that you point out. Fantastic. Robert, thanks for coming on. It's been super interesting for me. Oh, thank you, Gus. Thanks for chatting. I really appreciate it.