Introduction and Purpose of AI Governance
00:00:00
Speaker
Welcome to the Future of Life Institute podcast. I'm here with Lennart Heim. Lennart, could you introduce yourself?
00:00:07
Speaker
Sure, thanks for having me. My name is Lennart Heim. I'm a researcher at the Centre for the Governance of AI, GovAI for short, and I'm working on a research stream which I call compute governance. I'm asking myself two questions: where and why is compute a particularly promising node for AI governance? And can we do something with computational infrastructure which might allow us to achieve more beneficial outcomes? For example, looking at hardware-enabled mechanisms which could support these regimes, but also just, can we use compute monitoring to have more responsible actors out there in the AI world?
Understanding AI Risks
00:00:37
Speaker
To start with, it's important to specify exactly what you're worried about here. So why in general should we worry about AI? What's your preferred framing for thinking about AI risk? Yeah, I think my preferred way is grouping it into three categories: misuse risk, accident risk, and structural risk. Misuse is if somebody is, say, using GPT-4 to send a bunch of phishing mails.
00:01:02
Speaker
Accident risk is, for example, if a self-driving car crashes somewhere, or scenarios where AI systems try to do something on their own, some malicious tasks. And structural risks are the things underneath the surface, or just, I don't know, we all slowly fall in love with a chatbot. Maybe this might be good, I don't know, if we're having a good time, but there are many reasons to believe things like these might eventually be bad, right? These slowly emerging things we should definitely look out for.
00:01:28
Speaker
Perhaps we could say a little bit more about the structural risk. So, what are you actually worried about here?
Structural Risks and the AI Triad
00:01:34
Speaker
So, yeah, structural risk. I think when we talk about AI systems, we're to some degree talking about a general-purpose technology, right? So it's going to be plugged into many ways how our economy works, how nation states compete with each other, even from a military perspective, right? And these kinds of structures can change over time, and they change the dynamics.
00:01:54
Speaker
You could say nuclear weapons have introduced some kind of structural risk: just the fact that we have them changed the dynamics of warfare. And I think AI might be similar in a lot of these cases. It might be a structural risk regarding the military, how nations compete with each other, and how we go about war. For other structural risks you could, for example, think about recommender systems like Facebook's.
00:02:17
Speaker
This might be a structural risk if it turns out, for all of us, that it's really addictive but we end up really miserable because of it. So there's a wide variety of these slow-moving things which are not immediately, clearly bad, but over time we actually see them playing out as bad.
00:02:35
Speaker
What is the AI triad? Yeah, the AI triad, or as I also like to call it, the AI production function, basically describes a function which has certain inputs, and these inputs are those three things, this triad: compute, data, and algorithms.
00:02:50
Speaker
With compute, I mean something like computational infrastructure. We need computational infrastructure, data centers, your smartphone, your computer, to train and execute these systems. With data, this is the data we eventually train these systems on, be it a bunch of images or a bunch of text. And algorithms generally describe machine learning systems, deep learning within machine learning, and within there, for example, transformer architectures or other algorithms which eventually power these systems which are trained on the data using the compute.
00:03:16
Speaker
We throw these into the AI production function and out we get AI systems which have certain capabilities. And this AI production function framing tries to think about how much these individual inputs matter and what they can tell us about the capabilities, which are essentially the output of this AI production function.
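As a loose shorthand (the functional form here is purely illustrative, not a precise model from the conversation), you could write this as:

```latex
\text{capabilities} \approx f(\text{compute},\; \text{data},\; \text{algorithms})
```

The governance question is then how much each input matters at the margin, and how observable and controllable each one is.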
The Governance of Compute
00:03:34
Speaker
Why is it you've decided to focus on the compute aspect here in compute governance? What is it that makes compute specifically interesting from a governance perspective?
00:03:45
Speaker
Yeah, I think one interesting trend is just the role of compute over time. I think Richard Sutton described it as the bitter lesson, where he talked about how we keep trying to come up with really clever algorithms, trying to model the brain, but actually the bitter lesson is that we just take fairly general search and learning algorithms, throw more compute at them, and it looks like that just works. So better-performing systems turned out to be
00:04:07
Speaker
bigger systems that use more compute. And we did an investigation where we looked at the training compute usage of cutting-edge systems, those with the best capabilities, those with a lot of citations. And what we found there was that the compute usage for training is doubling every six months. And something doubling every six months means, wow, it seems like this is a really important input.
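To make that growth rate concrete, here is a tiny back-of-the-envelope sketch; only the six-month doubling time comes from the finding mentioned above, while the starting budget and the five-year horizon are made-up illustrative numbers.

```python
# Rough illustration of a six-month doubling time in training compute.
# The starting budget is a made-up illustrative number.
doubling_time_years = 0.5
start_flop = 1e23            # hypothetical cutting-edge training run today, in FLOP

for years in range(6):
    growth = 2 ** (years / doubling_time_years)
    print(f"after {years} years: ~{start_flop * growth:.1e} FLOP  ({growth:,.0f}x)")

# A six-month doubling time compounds to roughly 4x per year,
# i.e. about 1,000x more training compute after five years.
```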
00:04:27
Speaker
That's something like the empirical motivation for these kinds of things. Beyond that, I usually say that compute has some unique properties and a unique state of affairs, which make it a particularly interesting governance node, and which I'm happy to elaborate on.
00:04:39
Speaker
Yeah, yeah, please tell us. So on the unique properties, I'm usually saying, well, compute is rivalrous and it's basically excludable. If you're crunching numbers on your computer, nobody else can crunch numbers on your computer. If a data center is fully utilized, nobody else can use it. This is in contrast to data and algorithms.
00:04:58
Speaker
If I have data and I have an algorithm, I press Command-C, Command-V, and there we go, I've got it twice. This does not work with compute. I wish it worked like this, then we wouldn't need this complex supply chain. It's just excludable, right? If I'm not giving you access to computational resources, you can't have them. Even if I hack into a data center, I can only use the computational resources if, A, nobody else is using them, and B, only until the point where they kick me out. Whereas if I hack somewhere else and steal the model, steal the algorithm, steal the data, I have it.
00:05:27
Speaker
You cannot exclude it later on, because it can just be copied. The other aspect is quantifiability. Compute is somewhat nicely measurable, and there are various ways of
Monitoring and Challenges in Compute Usage
00:05:37
Speaker
measuring it. I can just ask myself, how many AI chips of which type does an actor have access to? Because how many chips they have, given that compute is important, might tell me something about their AI capabilities, or what kind of access they have to AI capabilities. I can try to measure the training compute I just described, but I can also just
00:05:54
Speaker
try to look at, yeah, where are the data centers? Who's using them? How many chips are there? And which actors are there? So I get some sense that those who have a lot of compute are the AI actors I should be thinking about, should be trying to govern, and should make sure are building responsible AI systems. These are also fundamental properties which I'm claiming stay the same over time; this is just a fundamental thing about compute. And it's particularly interesting compared to data and algorithms, which are way harder to measure, and which you can't actually exclude.
00:06:22
Speaker
So compute is basically physical, and that makes it easier to control. You're not going to have compute leaking online the way the weights of a language model can, or the way a dataset can leak online. Could it be the case that, for example, the
00:06:38
Speaker
knowledge used to produce these chips could be leaked online, and that would make compute less controllable in a sense? Or is the production of compute very dependent on specific hardware and perhaps local knowledge in the factories and so on?
00:06:53
Speaker
Yeah, this is the second point, which I'm now calling the state of affairs, right? Compute is being produced by this really complex, concentrated supply chain. And you're basically asking, well, if certain IP is now being leaked online, can I just build the next TSMC in my garage?
00:07:10
Speaker
And my claim is, actually, no, you can't. It's really, really, really hard. This is a thing a lot of people are thinking about right now. You might ask certain employees at ASML what they're doing in their work, and they might not be able to tell you. They're paying a lot of attention to making sure no IP leaks from these places.
00:07:26
Speaker
There's a history of certain people leaking this information or providing it to China, and then China trying to use it and eventually produce some chips from it. But it's still hard; I think that's still the case. If all of ASML's IP or TSMC's IP were to leak tomorrow, it would definitely make it easier to build
00:07:47
Speaker
the machines which build the chips, or to eventually build the chips in the fab, but I still don't think it would be sufficient. There's still the tacit knowledge of the people who build these machines; we're talking about the most complex machines in the world, and actually setting this up just seems really, really hard, or it's some combination. I think we would all wish for a world where all of the organizational knowledge is written down, but guess what, sometimes we need to talk to each other to get certain things across, or it's just, sorry, get out of the way, I need to,
00:08:16
Speaker
I don't know, tighten the bolt on this machine, because there's some specific way of going about it. That's a simplified example. So it's very difficult, perhaps, even if you wanted to leak all of the intellectual property from these companies, because perhaps many of the engineers cannot describe what it is specifically they're doing, or a document detailing what they're trying to do would never encapsulate all of the knowledge that they have about what they're doing.
00:08:44
Speaker
Indeed, and everybody just works on one small part. We're talking about just one machine which is built by thousands of people. There's literally some person responsible for maybe this one, I don't know, this one bolt which sets up the mirror in the right place, and another one just trying to make sure the laser always has exactly, I don't know, twelve of whatever. That alone keeps the person busy.
00:09:07
Speaker
So one person might know how one component works, but they rarely have the overview of how all of it works, right? This is an endeavor of all of ASML, and to some degree it's an endeavor of all of humanity to build these chips. We all come together and we try to put it into certain boxes, right? Different people designing it, different people building the machines, different people fabricating it and eventually assembling it.
00:09:28
Speaker
But there's definitely some crosstalk there, and within each of these units it's really, really hard to just recreate it based off the IP if you steal it. And definitely people are trying this right now, right? There are people trying to hack into ASML, TSMC, and all of the other companies, trying to get as much IP as they can. And I hope they just have pretty good cybersecurity and are trying to prevent this. Do you think that these companies are actually investing enough in cybersecurity, I mean, compared to how valuable their IP might be?
00:09:57
Speaker
Of course, there's a general answer where no one is ever investing enough in cybersecurity. But do you think it's extra important for these companies?
Oversight and Verification Strategies
00:10:06
Speaker
A, I think it's extra important. B, cybersecurity is really, really hard. As somebody with a background in it: it's really, really hard. And C, it gets really, really hard if you need to deal with nation states. It's not just some script kiddies trying to hack you, it's a nation state trying to get certain IP.
00:10:22
Speaker
Are ASML and TSMC really trying hard? Yes. I actually just watched a documentary about ASML on the plane where the CEO was complaining that they spend more and more, many millions, just on securing their IP. But eventually it's in their interest: lucky them, they're the only ones building these machines, and if somebody else got hold of it, it would not be good from their business perspective. So they're clearly trying here. We also have unfortunate cases like NVIDIA being hacked,
00:10:47
Speaker
apparently just by some script kiddies, by the looks of it, where a bunch of data was leaked. And I think NVIDIA should probably have had better cybersecurity here, and maybe should scale it up a lot, because NVIDIA is just a design firm: to some degree you can just steal the designs and then copy them or something. I think there's more to gain there if you just steal the IP; there's still some tacit knowledge involved, but there's more to gain. And eventually all of these companies,
00:11:09
Speaker
all of these companies that are somehow involved, if you think about AGI and AI, and if it turns out these systems are going to be really capable in the future, just need to step up their information security game. And that's a key priority within AI governance and within AI labs: to ideally be as secure as the military or the NSA or whoever.
00:11:27
Speaker
Okay, so when we talk about compute governance in general, we want to know what certain companies and labs are doing. So we want to monitor how much compute they're using, for example. In general, which options do we have available for monitoring compute usage?
00:11:44
Speaker
One way of monitoring is what is sometimes just self-reported: hey, we trained this new system and used this much compute. Part of the reason I'm working on this is because some AI labs told me, hey, we trained this system, it's this big, it used this much compute. And I'm like, oh, interesting, that's a lot of compute. That means you spent single-digit millions just on training these systems, right? To some degree they just put it out there. I think we're hitting a new era now where they've stopped doing this, right? So I learned that compute is important, it's an important input there. And
00:12:14
Speaker
part of why they've stopped doing this is that you maybe also push dangerous dynamics and give away certain IP which you have there. The general notion is that more compute usually means more capabilities. And then I'm like, well, more capabilities means more responsibilities; that's what I'm advocating for. So if I see those who train the biggest models, I'm like, guys, you train the biggest models, you have the most compute, you should also bear the most responsibility here. This could be self-reported, just the companies telling someone, ideally maybe some actor in the government.
00:12:44
Speaker
And I think this would be a great act of foresight: hey, you know what, government, we just trained system X, it used this much compute, it has these capabilities, brace yourself or prepare or whatever. Something along these lines would be good for them to know so they can have some foresight. I think it would be good for the government to know that GPT-4 is being released, because it might have certain implications for society. I don't think it's going to be a really big deal right now, but I think in the future it will be more of a deal, and it has security implications,
00:13:13
Speaker
and the government should eventually be briefed on this beforehand. We should probably also just mandate that every use of compute across a certain threshold needs to be reported: yep, more responsibility, just tell me what's going on there, as the first step. And later on you could even
00:13:30
Speaker
mandate certain audits, certain evals: hey, you just trained this big system, could we please have a risk assessment before you actually deploy this to the world, right? And maybe at the very end of the spectrum, you could even deny people permission to train certain systems, where they ask whether they can train a certain system and you ask, well, are you actually a responsible lab?
00:13:50
Speaker
And then we maybe have some ways of eventually assessing this. But again, you could imagine this as a tiered system, with different monitoring and different regimes we eventually ask them to follow. And it's unclear to me now what is warranted, but I think we should definitely put in place some kind of infrastructure to eventually get this started if it's warranted. And I think we have more than enough reason right now to believe this might be warranted in the future.
00:14:13
Speaker
It's easier to prove that you have something, say that you have three servers, than it is to prove that you don't have something. So it's pretty difficult to prove that you don't have some extra compute hidden in the basement. What mechanisms do we have here to try to verify that a lab doesn't have more compute than they say they do?
00:14:35
Speaker
As you say, it's easy to verify that you have this much compute. You could even imagine: run this algorithm; I know you have this much compute, so it should take you ten seconds. If I don't get the result in ten seconds, I know, and I'm going to ask you, where did your compute go? It's harder if they're hiding something in the basement. What we can do here is build on top of the supply chain. The supply chain is really concentrated. There are basically two types, actually just one type of GPU sitting out there, the giant NVIDIA ones, at least if you talk about hardware you can buy; there's still proprietary hardware like TPUs and others.
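A minimal sketch of that timed-challenge idea (checking only that an actor has at least the compute it declares, not that it has nothing hidden); the workload, the numbers, and the function names are illustrative assumptions, not an actual verification protocol:

```python
import time
import numpy as np

def timed_compute_challenge(declared_flops, run_workload, tolerance=2.0):
    """Ask the audited party to run a fixed workload and compare the wall-clock
    time against what their declared hardware should need."""
    start = time.time()
    work_done = run_workload()              # floating-point operations performed
    elapsed = time.time() - start

    expected = work_done / declared_flops   # time the declared hardware would need
    if elapsed > tolerance * expected:
        return "flag: too slow -- where did the declared compute go?"
    return "consistent with declared compute"

def toy_workload(n=2000):
    # A dense matrix multiply costs roughly 2 * n^3 floating-point operations.
    a, b = np.random.rand(n, n), np.random.rand(n, n)
    _ = a @ b
    return 2 * n ** 3

print(timed_compute_challenge(declared_flops=1e10, run_workload=toy_workload))
```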
00:15:05
Speaker
And then we just check on the supply chain, because there are only so many fabs who produce these chips, and it's like, hey guys, you're producing these
Impact of Export Controls and Global Implications
00:15:11
Speaker
chips, where are they going? I'm just going to write it down. And then I know you have this many chips, so I'm going to ask you to prove it from time to time. Maybe I visit you, maybe I send you a proof-of-work challenge; there are different mechanisms to eventually achieve this, but we know how to do traditional supply chain tracking.
00:15:26
Speaker
And this is needed anyway right now, for US export control reasons, which we might chat about later: you just need to know where these types of chips go and who has access to them. So we can do this with traditional supply chain tracking methods, or we can also think about other methods, like the hardware only works if it tells you where it is. This might be more intrusive, but something along these lines.
00:15:52
Speaker
So geo-tracking and basically geo-locking for hardware. Yeah, you could imagine something like this if it's eventually warranted. And again, I'm only talking about a small amount of chips here; I mean, not a small amount of chips, but just a small subset of all the chips being produced.
00:16:09
Speaker
Within AI chips, I'm talking about the highest-performing chips which go to data centers. I'm not talking about geo-locking anybody's smartphone or computer or something like that. We're talking about industrial hardware, of which probably 80% is owned by a handful of actors, because, yeah, you
00:16:25
Speaker
just have cloud compute providers and others who own a majority of all of this AI compute, if you want to call it that. So one of the things that makes compute interesting from a governance perspective is that it's very concentrated. There's a huge data center; it's physical, it's physically large, so you can probably almost see it from a plane. What if I say that I'm a lab and
00:16:47
Speaker
I want to get around certain restrictions. I don't want someone monitoring my use of compute. Could I distribute my computing over a large network and then train my AI models like that? Perhaps it would be much less efficient, but it would perhaps also allow me to circumvent certain monitoring regimes.
00:17:09
Speaker
I mean, first of all, I would wish nobody would try to circumvent this, right? Maybe it's wishful thinking: hey, actually, there are no downsides to this, it's beneficial, everybody's going to be doing this, ideally we mandate it, and everybody's on the same page. But you're right, when you do restrictions, when you do policies, regulations, whatever, people try to evade them.
00:17:27
Speaker
With this type of evasion, I think a better term in this case is trying to train decentralized: training is always distributed across many GPUs, but now we're thinking about putting some GPUs in location A and some other GPUs in location B. The question is, is this feasible? There is research on this, people are trying it, it is feasible, and there is a penalty to it. The question is, is this penalty worth it if you want to stay economically competitive?
00:17:50
Speaker
If your competitors are not doing this and they can train, say, 40% cheaper, it might not be competitive, right? And then you'd rather just follow the regulations.
00:18:00
Speaker
If we go back to the previous point, where I'm initially just tracking where all the AI chips go to prevent this type of smuggling or misuse: there might be two different data centers, and I might ask both data centers, hey, who are actually your customers? And then, oh, you both gave this much compute to customer A, so I have some idea of the biggest thing customer A could have tried to train. So we need different mechanisms there, but again, this is definitely one of the key things we need to look out for
00:18:25
Speaker
as a way of trying to evade this. I'm pretty pessimistic that you can just train it at home or something, where everybody donates their GPU at home. These systems need to talk to each other at high speed, and, I don't know, it depends where you live, but most people have kind of shitty internet at home. So it's really hard to transfer all of this data. The data center is just this engine which is way more efficient; it's always going to be more cost-effective to do it in a data center, right?
00:18:53
Speaker
You mentioned these export restrictions that the US has recently imposed on exports to China specifically. Also other countries, perhaps? It's mostly China. I'm not sure if the same rules apply to Russia, but there are definitely some sanctions on Russia, where more countries play along; Taiwan also has sanctions on Russia. Yeah, so what are these export controls on China? I think we can scope it into roughly four buckets. They prevented
00:19:20
Speaker
certain chips from going to China. These chips are classic AI data center chips such as the A100 or H100; they're not allowed to be exported to China anymore. They also stopped the selling of semiconductor manufacturing equipment. For example, this ASML company we discussed
00:19:36
Speaker
is not allowed to sell these to China anymore, and the Dutch actually play along, they agree with this, they've stopped selling to China. So China cannot produce their own chips, right? If you stop selling your chips, you incentivize them to produce their own chips; so it's like, oh, I won't give you access to the machines which produce the chips. So now they're trying to build their own machines which build the chips, but the US also has an export ban on the equipment which helps you build the machines which build the chips, right? And lastly, they also restricted US personnel: I think anybody with a US passport is not allowed to work
00:20:05
Speaker
at certain companies anymore which fit into this broader scope of the Chinese semiconductor industry. And that's a big deal; they're basically trying to cut off access to producing sovereign semiconductor chips. China can still buy most chips. The chips which are actually banned under these restrictions are a limited number of chips: mostly, I mean only, chips used in data centers for AI workloads and maybe some other workloads.
00:20:30
Speaker
If we want to take inspiration from these export restrictions and say that something similar could apply to a US company, what would be required for you to say, this is a good idea? How recklessly would a company have to behave before you would pull the trigger on, for example, not giving them access to chips?
Compute Control and Corporate Governance
00:20:50
Speaker
Because this seems like a
00:20:52
Speaker
pretty serious action. You talked about how with greater compute comes greater responsibility, but perhaps also with greater power over companies comes greater responsibility. So what should we see from a US company before you would be interested in doing something like this?
00:21:11
Speaker
As you said, this is a pretty intrusive measure, right? And it's a pretty blunt tool; that's not to be forgotten. Would I just pull this? I actually don't think so. The reason why the US eventually did this is because other legal tools just don't work. You can write as many angry letters as you want.
00:21:28
Speaker
It just doesn't work. It's like, hey, we're selling you chips, please don't give them to the army. Guess what, we figured out they end up at the army anyway. So you pull this blunt tool because that's the only thing you can do. What's useful to think about here is some kind of tech stack. At the top I have services: AI workloads, facial recognition, things like that. Below that I have software: computer vision, machine learning, these kinds of things. And this builds on top of hardware.
00:21:52
Speaker
Eventually, what the US government wants to achieve here is to prevent certain misuse and abuse of these systems against human rights, their use by the army, and a bunch of other things.
00:22:01
Speaker
And they can write an angry letter: hey, please stop doing this. But the recipient is still allowed to do it, so it's not going to work. Whereas you can do this legally in your own country; you have different tools in your own country. If the US government sends an angry letter to OpenAI or anybody else, well, they'd better follow it, right? Because they're sitting in the same jurisdiction. So you probably don't need to use these blunt tools where you just cut off access to compute.
00:22:22
Speaker
I think my argument is that it's one of the most enforceable nodes. And sometimes you need enforceable nodes, as you say, if particular actors are particularly reckless. Another example to think about is people running Silk Road-style drug markets online. You send them angry letters, but first of all, who do you send the letter to? Because you don't actually know who's running it, right? And if you figure out who runs it, you usually send them an angry letter. But sometimes you just don't know who's running it, and then you use this blunt tool, which I'd call compute governance: you just unplug it.
00:22:51
Speaker
You try to figure out where the server sits, because this is sometimes easier than actually finding the person who runs it, and then you just unplug it. It's like, well, we couldn't find you, you're doing this illegal thing, you didn't respond to our letters, we couldn't even send the letters, so we're just going to unplug the server. So from an enforcement perspective, compute is really, really useful. And the reasons for this are just as a backup option if the other things don't work, or maybe as a layered defense.
00:23:18
Speaker
And again, next to this enforcement tool, there are other reasons like quantifiability and these other things, but they look a bit different from this enforcement angle which I just talked about.
00:23:26
Speaker
You could imagine a scenario where, say, a governance organization and a company disagree about how powerful a model would be if it were to be trained. Say a GPT-8, where there's a disagreement about which capabilities this model would have. And this is a general feature of these machine learning models, because we don't always know what's going to arise from them,
00:23:50
Speaker
which capabilities are going to arise. We've been surprised before; certainly I have, probably you have too, and many people have been surprised by the progress we've seen. So given that we don't know which capabilities arise beforehand, how do we know which training runs to restrict?
00:24:05
Speaker
Yeah, that seems to be a hard question. I will point to two things which are somewhat useful here. The first I would actually call an open domain of research. The problem with trying to stop a training run from happening is that usually we base governance on a model: oh, this is a high-risk model because it makes decisions about who gets a loan, who gets which medical advice, and then we make decisions based on that and we run some evals or whatever. We can't do this here. So what I might be interested in instead is some properties of the training run.
00:24:34
Speaker
So one question I could ask is, how big is the training run going to be? Because again, historically we've seen that bigger training runs and bigger models have more capabilities. So all things being equal, those are the things which should be governed more or used more responsibly. In the future, we might also learn more about certain aspects of training runs which are particularly dangerous. So maybe once you invent an architecture where you have some sub-architecture which enables your AI system to have goals,
00:25:00
Speaker
or something along those lines, or you have certain reinforcement learning components, or certain online learning components. For these kinds of things, I'm like, well, all things being equal, systems with these kinds of properties should be more of a concern, right? So I want some kind of verification of the properties of training runs, and then I can just see what passes. People should try to learn more about this. And ideally we also have it in a way where it can be done automatically, so the company doesn't need to release their source code; nobody sees it, we do it in a privacy-preserving way.
00:25:29
Speaker
Another way we do it, and how we usually deal with these kinds of governance things, is we make it about the organization, we make it about the owner. If I'm going to sell you a gun, I'm first going to do a background check. Well, I don't know how you're going to use it, but the minimum thing which I can do is ask, well,
00:25:44
Speaker
what have you done before? Are you a responsible actor? And this, I think, opens up this whole work stream of corporate governance, which my colleague Jonas Schuett is working on: what makes up a responsible AGI lab? What are the things they should be doing so that I allow them to train a certain system? I could be saying, hey, you're only allowed to train a certain system if you have an internal risk management system. This might be what good governance looks like.
00:26:07
Speaker
We have the same for banks: you're only allowed to open a bank if you comply with certain regulations. I'm not reinventing the wheel; we've had this before. I'm just saying, well, maybe we should apply this to AI developers, AI labs, and maybe use compute as a way of verifying it. What I'm trying to push is this whole notion that we need some kind of AI driver's license.
00:26:28
Speaker
Not everybody gets to drive a truck; I'm not allowed to drive a truck, and I think that's a really good idea, and I should do some training before I drive a truck. The same goes for AI systems: I think not everybody should train AI systems. Some corporate structures are just more responsible than others, and we know which features make this up. It's not going to be foolproof, but we learn over time, and we could get started on this right now: yep, those systems have an impact, this makes you more responsible, this makes you less responsible.
00:26:54
Speaker
But these are the more heavy-handed measures that we could take.
Promoting AI Safety and Academic Involvement
00:26:57
Speaker
So monitoring and restricting compute to certain companies. You've also explored the option of promoting work in safety. So how would this go? You would subsidize compute for companies that you have reason to believe are doing important safety work. Yeah.
00:27:14
Speaker
I think this would be one example: you say to companies, oh, you're the good guys, I'll give you more compute. The problem we have here is, does safety work scale with compute? And again, we generally have this really blurry line between what is safety work and what isn't. But I think it's definitely worth exploring: if I can restrict access, I can also give more access to others. And one prominent example of this is that we see more and more systems, nearly all systems which are compute-intensive, which means most systems with the most capabilities, are
00:27:43
Speaker
coming out of research labs, coming out of corporations. They're not coming out of academia anymore. Academia simply doesn't have the funds, the teams, and a bunch of other things needed to build these systems. So what they're currently trying to do in the US is called the National AI Research Resource, the NAIRR, trying to figure out, well, can we just give academics more resources, can we just give them more compute? And the argument is, well, academics do a somewhat different kind of research compared to, for example, industry. Industry is eventually trying to make money,
00:28:13
Speaker
whereas academics, well, we know that academic research is way more diverse, we know that academic research is more likely to produce public goods. So maybe they would be more incentivized to just build more beneficial AI systems and take a closer look at this. It's not necessarily clear to me; it might just be that they continue being part of this race where we're just trying to build bigger and bigger models. I do think that's not the best idea, because it's really expensive, it's not the best comparative advantage for the NAIRR, and it's not the best comparative advantage for academics,
00:28:42
Speaker
when they should rather be doing research which is beneficial. This could include the more diverse research which might help us make progress on safety, or particular parts of safety, be it interpretability, whatever; pick your favorite thing and then we can fight over whether it's actually safety or not, that's always an ongoing topic. They might also just do the type of work which I'd call applying scrutiny.
00:29:00
Speaker
It's like, hey, what about you download LLaMA or you use the GPT-4 API and really point to the failure modes of these models? Giving them compute for this might actually help, right? There's a proposal which I wrote with my colleague Markus, where we said, well, within this resource they should give access to pre-trained models from other researchers, and also to academics, and then academics can take a look and take these models apart.
00:29:21
Speaker
I'm not only talking about computer scientists, I'm talking about the whole scientific domain, which is basically doing non-AI research, right? It's just, what does this model mean for my domain? Take it apart and point to the failure modes, so OpenAI or whoever can eventually make this model better. Perhaps the top labs would be interested in having this data, because if we're discovering failure modes, this is one of those things that might blur the line between safety and capabilities work,
00:29:48
Speaker
where if some researcher at a university discovers a bug or a failure mode of a model,
00:29:56
Speaker
that could be used to improve the model's capabilities, but also its safety. There might be a win-win situation here, where this notion of promoting safety work is less of a stick compared to restricting compute and more of a carrot approach. I think so. I mean, just look at Twitter: everybody's basically beta testing GPT-4 and trying to figure out what's happening there. And ideally you should get paid for it, or at least get a PhD for it, something along these lines.
00:30:25
Speaker
You should not run a beta test or an alpha test on the whole of society; maybe, you know, start with a small number of people. And, I mean, OpenAI did this to some degree, right? They delayed their release. Well, maybe it's not sufficient; maybe we need even more responsibility in the future. And actually, I don't want them to manage this; this is not a democratic decision. I want the government to decide what responsible release means and what it doesn't mean. Could you talk a little bit about tech-supported diplomacy?
00:30:50
Speaker
I imagine this means something like exporting these governance options globally. Tell me what it means.
Technology in AI Governance and Diplomacy
00:30:59
Speaker
I mean something along the lines of using tech to reduce the social costs. A lot of times we do stupid stuff because we don't have enough information, because nobody wants to tell us. It's just like, how many nuclear missiles does country X have? Well, if we both had credible commitments, this would help us a lot, and we've invented some tools to do it.
00:31:17
Speaker
When I talk about tech-assisted stuff and I'm thinking about AI, I mean something like: can we implement hardware-enabled mechanisms which allow actors to make certain credible commitments? Where they actually have technical proofs: hey guys, last year I trained this many systems with this much compute. And then we can say, cool, this looks good.
00:31:38
Speaker
Or, oh, this one model, I would like to take a closer look at it. We have credible commitments. This could enable labs to cooperate with each other. If labs all agree, hey guys, let's slow down, might be a good idea, you know, let's go slower, then you want a credible commitment, you want somebody to check on this, right? And what I'm imagining is software-enabled, hardware-enabled mechanisms which just prove: hey, here's the monthly report of how DeepMind used their compute.
00:32:03
Speaker
And then OpenAI checks it and says, cool, they held up their commitment. This is great because, yeah, we usually trust tech more; at least it reduces the social cost of trusting people, compared to just telling each other and sending each other happy emails saying we do these kinds of things. So I'm generally excited about this, and you could imagine it across nation states, across labs, across many different actors, where they have credible commitments about their AI development, right? And using compute there
00:32:29
Speaker
as a tool to eventually enable this so you cannot circumvent it. Hardware in particular is not impossible to hack, absolutely not, but it's definitely harder to hack than software. And in terms of technical solutions here, so that we don't have to rely on trust, what's available? I'm imagining perhaps some cryptography. What is something that's credibly neutral between labs, or between companies, or even between countries,
00:32:56
Speaker
because this has been a problem for decades in the nuclear space, where it's difficult for countries to trust each other, and there is no neutral ground or technical way to prove, for example, how many nuclear warheads you have. So what are our options here?
00:33:14
Speaker
Yeah, let's try to piggyback on this nuclear thing. We would technically have ways to count the number of nukes, but it's hard. What we actually do in the case of Iran and other countries is measure the level of enrichment of uranium, and if it's enriched highly enough, the alarm bells go off.
00:33:31
Speaker
And then somebody does something about it. So we have some measure: yeah, uranium enriched to this level, fine, that's only for nuclear power plants; enriched higher, guys, didn't we decide not to do this? That's why we have on-site inspections, and they even set up physical devices there which continuously monitor this. And I think that's the same kind of thing which I'm pointing towards here: things which are continuously monitored, where we trust the measurement, and which eventually need to be verified with on-site inspections or something along these lines.
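One very simplified illustration of the credible-commitment idea: a lab publishes a cryptographic commitment to its compute-usage report up front and reveals the report only to an auditor later, so the report cannot be quietly rewritten after the fact. This is a toy sketch with made-up report fields; it stands in for the much harder hardware-enabled mechanisms being discussed here, not for any real scheme.

```python
import hashlib
import json
import secrets

def commit(report: dict) -> tuple[str, bytes]:
    """Publish only the digest; keep the report and nonce private until audit."""
    nonce = secrets.token_bytes(16)
    payload = json.dumps(report, sort_keys=True).encode() + nonce
    return hashlib.sha256(payload).hexdigest(), nonce

def verify(report: dict, nonce: bytes, published_digest: str) -> bool:
    """Auditor checks that the later-revealed report matches the earlier commitment."""
    payload = json.dumps(report, sort_keys=True).encode() + nonce
    return hashlib.sha256(payload).hexdigest() == published_digest

# Hypothetical monthly report; fields and numbers are made up for illustration.
monthly_report = {"month": "2023-03", "training_flop": 3.2e24, "largest_run_flop": 1.1e24}

digest, nonce = commit(monthly_report)        # digest is published at month end
assert verify(monthly_report, nonce, digest)  # auditor verifies on later disclosure
```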
Future of Compute in AI Advancements
00:34:00
Speaker
Okay, so we've explored compute as a way to govern AI, but there's a question of whether compute might become less and less relevant because newer models will require less and less compute. Do you think that cutting edge models will continue to be limited by computing hardware?
00:34:20
Speaker
I think so. I think we have some reasons to believe this; I mean, this is historically what we've seen. Even though we don't know the exact numbers for GPT-4, I'm hereby claiming that it probably used more compute than GPT-3, and probably so much compute that I could not train it in my basement. I could probably also not train it with all of my savings.
00:34:38
Speaker
What's definitely right, as you said, is that over time, to achieve capability X, the compute required goes down, right? But what we've historically seen is that people just continue pushing compute and capabilities further anyway. The question is, what is the capability level where it starts to be worrisome, right, which needs to be governed? And if the compute needed for that goes down over time, what do we do about it? So, for example, maybe ten years from now I might be able to train GPT-4.
00:35:06
Speaker
And then the question is, well, how good is GPT-6 compared to GPT-4? Is GPT-4 just worthless by then, because GPT-6 is already out there? This is an important question. It's the general notion of how the offense-defense balance with more cutting-edge systems actually plays out. If we're lucky, these cutting-edge systems can defend against the other systems. This is what we need to see, right? Is this the case for cyber weapons, is it the case for AI systems which could develop dangerous pathogens, anything along these lines?
00:35:35
Speaker
If we look at the compute trends, when do you think, approximately, say I have the newest gaming computer, when would I be able to train a GPT-4 level model at home? Yeah, I mean, we can try to crunch the numbers. I don't know where GPT-4 is, it's something like 10 to the power of 25 FLOP or so. Then your GPU at home currently has, what is it, like 300 teraflops, assuming you get an A100.
00:36:02
Speaker
If Moore's law continues, you could do it at home probably within the next six years, if you're happy to wait a couple of months and if algorithmic efficiency continues to improve, so the compute required keeps going down. Algorithmic efficiency is more important here than how your compute develops, right? You'll have somewhat better hardware in a couple of years, but algorithmic efficiency will also have made the training runs of today way cheaper.
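Crunching those numbers roughly, purely as an illustration (the 10^25 FLOP scale, the per-GPU throughput, the utilization, and the assumed improvement rates are all assumptions, not measured figures), mostly shows how sensitive the answer is to the assumed rates of hardware and algorithmic progress:

```python
# Back-of-the-envelope: how long a GPT-4-class training run would take on a
# single high-end GPU, and how assumed hardware + algorithmic progress shrink it.
# Every number below is an illustrative assumption, not a measured figure.

target_flop = 1e25          # rough scale of a GPT-4-class training run, in FLOP
gpu_flop_per_s = 3e14       # ~300 TFLOP/s for an A100-class chip at peak
utilization = 0.4           # fraction of peak you realistically sustain

seconds_today = target_flop / (gpu_flop_per_s * utilization)
print(f"today, on one such GPU: ~{seconds_today / (3600 * 24 * 365):.0f} years")

# Assume (illustratively) that hardware price-performance and algorithmic
# efficiency each halve the effective cost every two years.
for years in (2, 4, 6, 8):
    effective = seconds_today / (2 ** (years / 2)) ** 2
    print(f"in {years} years: ~{effective / (3600 * 24 * 365):.1f} GPU-years")

# How quickly this shrinks depends entirely on the assumed rates,
# which is why such estimates vary a lot.
```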
00:36:28
Speaker
So I guess this will be possible at some point. At home, I'm unsure, right? But at least at some point it's going to be cheaper. Maybe, yeah, we could crunch the exact numbers; I can't do it in my head right now, but this is definitely possible at some point. It's not super important whether it's at home, but say for something that's more attainable for an average person, say training a GPT-4 level model for $5,000 or something like this.
Towards a Global Compute Governance System
00:36:54
Speaker
That's coming within perhaps six years. This is not a long time.
00:36:58
Speaker
Of course, in AI, six years is a million years. But in terms of real-world impact, this is pretty close. So what is the hope here? Is it that GPT-6 or a model like that will be able to detect the misinformation or the phishing attacks coming out from GPT-4 models trained at home?
00:37:21
Speaker
You could imagine something along these lines, right? Where the defense from this new model is really, really good. You could also imagine a world where AI compute becomes more specialized, so the GPU you have at home in the future is actually not that useful for training these new systems.
00:37:38
Speaker
It depends how this develops, and we already see some divergence there: your GPU at home looks different from this NVIDIA A100. So let's not count on the six years there. The thing is, with exponential growth we also sometimes have exponential costs or something; I'm happy to follow up with the number, I can't do it in my head right now. But this is what the empirics look like: we continue pushing, just spending more compute on these kinds of systems.
00:38:03
Speaker
I never said this is going to be easy. Nobody ever said AI governance or this whole AI thing is going to be easy. All I'm saying is, look, guys, here's an interesting governance node. I think it has some unique properties; it's good at some things, it's bad at others. Compute is not the solution to the AI governance problem, it's part of the solution. It might give you some nice tools, it might give you tools to get international agreements. And hopefully some of my colleagues are figuring out these international agreements, or agreements across labs, and the people in government.
00:38:31
Speaker
We all work together. And I'm giving one piece of the puzzle. I was like, hey, look, here's compute. Here's how it might help. And here's how it might be one defense layer out of many. Yeah, let's touch upon some potential problems with compute governance. And these problems are, in a sense, just problems with governance in general.
00:38:50
Speaker
Just on who's doing the governance of compute: US labs would probably be a bit worried if their compute was governed by China, and Chinese labs might be worried if their compute was governed by the US government. But it seems like for this to work, you would need something close to a global system for compute governance. So who's doing the governance, and how do you get everyone on board?
00:39:18
Speaker
Yeah, how do we get everyone on board? I mean, what we can say right now is that the US is governing China's compute. This is happening right now. Whether or not you agree with it (I think it's a good idea), that's just the status quo. How did they do this? Well, they leveraged certain choke points across the supply chain, but they also did it with allies. If the Dutch and the Japanese didn't play along, this wouldn't work.
00:39:39
Speaker
Right. So to some degree you already have those three countries saying, hey, you know what, we get together and we try to do these things, and we try to achieve the kind of coordination we need for this. As for who's eventually doing this type of compute governance:
00:39:55
Speaker
I think, first of all, it will always be hard for countries to build their own sovereign semiconductor supply chain. A lot of times you have regulatory flight, where they build up their own whatever, but for this? It's not going to work for compute. A bunch of people ask me, well, if you try to control cocaine, they're just going to make it in Mexico and other countries and then smuggle it across. I'm like, yep, seems right. But making cocaine is way easier than building these types of chips.
00:40:20
Speaker
I think that just helps. It looks like it will always be an effort of the whole world to build these supply chains, and eventually we can all coordinate and actually agree on this. And then we have other choke points: if you look at the current chip designers who are leading right now, it's NVIDIA, it's AMD.
00:40:37
Speaker
Those are the leading companies there. So I might not even need all the governments to sign up for this; I might only need a responsible NVIDIA, which at least in the beginning gives people the features to eventually do something along these lines. Maybe we don't even need to mandate it in the beginning; maybe we need to mandate it later. But, for example, NVIDIA should start thinking right now about whether there are certain ways we can do this which might be used. Maybe not; maybe it all turns out to be okay and we don't need it.
00:41:04
Speaker
But I think we should definitely prepare for these kinds of systems being used. And eventually, with all of this, I'm not saying NVIDIA should control it; eventually we want governments and democratic systems to control this, right? And I think that's a general thing: it seems good if democratic societies eventually decide on this. It seems at least better than a dictator deciding who gets how much compute.
00:41:25
Speaker
And lastly, there are even ideas, which a bunch of people are thinking about, where there isn't one actor or one entity controlling the compute. You could set up a third party, right? We have the IAEA for nuclear stuff; maybe you have an international compute bank in the future, which would do things based on an agreement, maybe through the UN, maybe whoever, something along these lines. And you could also think about mechanisms which are self-sustaining:
00:41:50
Speaker
you basically say, hey, I'm selling you this chip, and this chip is going to do X if you do Y. For instance, this chip is not going to allow certain training runs beyond a certain size. So it's not the US government pressing a button to stop it; it's more like, no, this is what we agreed on, this is just what the chip does, and there's no way around it. Of course, this is really hard from a security perspective, and you might need some on-site inspection regimes for this to eventually work.
00:42:14
Speaker
Would you be worried that a government, for example, under the pretense of being worried about AI risk, uses these AI governance measures to, say, cement their military advantage?
Motives and Challenges in AI Governance
00:42:26
Speaker
So for example, this could be any government, but say the US government says that we need to restrict other governments, other countries' ability to make AI progress, because we're worried about AI risk, but really,
00:42:38
Speaker
what we are trying to do is maintain, you know, US military power. I think that's the only thing one should be worried about, right? If they just cut off access and then put all of the chips in their own pocket and use them, or if every computer is monitored except the US military's computers. I think eventually we need something where everybody signs on.
00:42:58
Speaker
And again, I'm not saying the US has all the choke points in the supply chain. No, no, no: multiple countries have choke points across the supply chain, right? And this is then one way they can pressure each other, or maybe not pressure, just talk each other into this and eventually get there.
00:43:13
Speaker
So what about on the company side? Again, I'm worried about institutions pretending to have good motives, pretending to have good intentions about preventing AI risk, but actually having different underlying motivations. So here I'm imagining the current top players in the field using this AI governance regulation to avoid competition,
00:43:37
Speaker
or to just maintain their market dominance. This is an economic phenomenon that, in my opinion, we have seen in other areas. Could we also see it here? I think that's a problem with any type of regulation: sometimes certain regulations favor big players because they just have an easier time playing along. But I think the status quo is already that we have certain labs leading right now, and it's not the case that
00:44:03
Speaker
anybody who enters the field right now can compete. Just as an example, all the major AI labs are partnered with a cloud compute provider. They're all like: looks like compute is important, looks like I need a special partnership, right? OpenAI is with Microsoft, and Microsoft has their own compute. Google has their own compute; DeepMind sits within Google and is also using Google Cloud. Hugging Face is with AWS, Amazon Web Services.
00:44:25
Speaker
So this is already a barrier to entry: they can only have so many partners, and anybody else who now wants access to compute from cloud providers (which is technically the only way to get access that is somewhat price-competitive) just has bigger costs, because they don't have the special partnerships, right? So they're already facing this dilemma right now to some degree. And eventually all of this trades off, right? Do I want
00:44:48
Speaker
a great competitive environment where everybody can compete and we get the best AI systems? And I'm like, well, actually, you know what, I don't want this competitive environment for AI systems, because I think it could be a race to the precipice and that doesn't look good. So I might actually just accept the cost of having bigger companies there who have more power. But eventually I want them to verify what they do; I want them to prove, by math, by hardware, that they hold to these verifiable commitments.
00:45:15
Speaker
We should not just say, oh yeah, OpenAI, DeepMind, and the other good guys, you know, they'll play along. I think we live in a happy world where they take certain risks seriously, which is great, though I'm not saying they're taking them seriously enough. And then we just add another layer on top of it where we mandate this kind of stuff.
00:45:30
Speaker
Do you worry that governing compute could drive innovation from more responsible countries to less responsible countries? Say that the more responsible countries implement some form of AI governance, but this just drives AI progress to less scrupulous countries.
00:45:47
Speaker
Yeah, regulatory flight is a thing. People leave countries, people live in certain places with lower taxes, but they don't live just anywhere, right? OpenAI and the others, they're not sitting in the place with the lowest taxes, even though they might have incentives to. There are other incentives in play: you want to sit in the Bay Area because the talent is there, because the people are there, because people want to live there.
00:46:04
Speaker
Your AI developers want to live in a nice place, right? They care about this kind of stuff. So they cannot go just anywhere. Then I think some of these companies also just feel they are American or something, you know, maybe that's a thing, though I guess that maybe doesn't hold once the incentives are different.
00:46:21
Speaker
But independent of that, there's just the case where, if they don't play along (and this is the unique thing about compute, we have this concentrated supply chain), we just stop the AI chips going to the country X you're worried about. That's just the thing we're going to do. And if somebody sets up an AI haven with no regulations or whatever in the Bahamas or something along those lines, yep, then the chips stop going to the Bahamas. I think it's a blunt tool, but it's eventually the tool we then need to use, and it might be the only tool we have. And again, this is better than some other measures you could imagine.
00:46:50
Speaker
And of course, all of this depends on how seriously we believe we should take AI risk. I think the two of us agree that we should take AI risk pretty seriously, but not everyone agrees. So your willingness, or the willingness we should have, to use these tools of course depends on the risk we see. But that's, in a sense, a whole separate discussion that I've had a number of times on this podcast.
00:47:19
Speaker
Here's a worrisome scenario. Say that in the 60s, the US was on a path to producing a lot of nuclear power plants and thereby getting clean and green and cheap energy. But this was prevented by regulation that's now made it very difficult to build new nuclear power plants.
00:47:44
Speaker
And perhaps this regulation was well-intentioned, and perhaps there was some real danger, but perhaps also some misunderstanding, conflation between nuclear weapons and nuclear power plants, or exaggerated fears about the actual dangers of nuclear and so on.
00:48:03
Speaker
Could this whole AI worry lead us to a similar situation where we could have benefited enormously from all the innovation that we would have gotten from AI, but because we were, in a sense, too worried, more worried than was actually warranted by the evidence, we killed off an industry?
00:48:23
Speaker
That's definitely a downside. I think that's a downside with anything. How can you change this? Well, you change your mind over time.
Conclusion and Future Considerations
00:48:28
Speaker
I think what I'm trying to propose is, well, we don't go full throttle on all of these things. We should think about a tiered system which eventually helps with this: hey, we start with this. And I think some things are just warranted right now, like companies reporting their training runs and reporting their compute usage. I think we can argue for that right now, at least as an optional measure, and for discussing with the government how powerful these AI systems are.
00:48:52
Speaker
Are there more extreme things which might eventually be warranted? To some degree we already see them playing out, right? The US is governing the compute of China; it's happening. So we'd rather have a good idea of why it's happening and try to change our minds over time. If it turns out that, A, this doesn't work, or B, it just isn't justified, I hope they change their mind,
00:49:09
Speaker
so they get the chips back and can do whatever good things with them, right? We just need to continue to look at the type of evidence we have, and maybe I'm a bit naive there, maybe I'm too optimistic that we can just continue to do this, but this is what my job is about, right? I continue to check the risk landscape, and I'm trying to prepare for some kinds of things which might happen.
00:49:29
Speaker
If they don't happen, hooray, don't get me wrong, that would be great, right? And then I try to roll back, or I have some measures which I just never activate. That's why I have this tiered system, these staged levels, something along those lines. I think in general we should stay better safe than sorry. And compared to nuclear power plants, the stakes for AI are significantly higher. So this seems warranted to me; I'm fine with taking some cost on these types of things,
00:49:58
Speaker
if we have sufficient evidence. I think what is just forgotten is that right now we're racing ahead with these kinds of AI systems and we have no clue how they work. Twitter is on it, Twitter is currently figuring it out, and every day we find some new, weird, emergent capability and learn more about how these kinds of systems work. And as long as that's the case, it seems totally fine to me to actually push the brake pedal and think more carefully about these types of things. Lennart, thank you for coming on. This has been super interesting for me. Thanks for having me.