
Mark Brakel on the UK AI Summit and the Future of AI Policy

Future of Life Institute Podcast
Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems.

Timestamps:
00:00 AI Safety Summit in the UK
12:18 Are officials up to date on AI?
23:22 Objections to AI policy
31:27 The EU AI Act
43:37 The right level of regulation
57:11 Risks and regulatory tools
1:04:44 Open-source AI
1:14:56 Subsidising AI safety research
1:26:29 Global institutions for safe AI
1:34:34 Autonomy in weapon systems
Transcript

Introduction to AI Safety Summit

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker, and I'm here with Mark Brakel. Mark, welcome to the podcast. Thanks, Gus. Mark is my colleague at FLI, and he's the Director of Policy. Maybe you can tell us about this AI Safety Summit in the UK that you're attending right now.
00:00:18
Speaker
Definitely. I mean, it's hugely exciting. I think quite a big moment for AI governance. There have been lots of articles coming out in the lead-up to it, lots of, I think, attention on social media. And we just had day one. We're recording this on the morning of day two of the summit. So I can't give the full details because it hasn't fully happened yet.
00:00:39
Speaker
But I think there are already some exciting things to report. There were 28

Global AI Governance Initiatives

00:00:45
Speaker
countries present, and they released a statement yesterday in which they set out a shared understanding around the potential catastrophic risks of AI. And I think that's a milestone that we haven't seen before. And we saw some announcements sort of on the sidelines of the summit. So the UK had already announced their Frontier AI Taskforce.
00:01:06
Speaker
But now there's also a US AI Safety Institute that the Secretary of Commerce announced yesterday at the summit. And I think it shows a growing movement towards nations doing their own AI safety research and taking more ownership of that into the public sector. So I think that's also a huge development.
00:01:29
Speaker
And at FLI, Tim Schreyer, one of our other colleagues, put out some recommendations back in September for the summit. And one key thing we said is, given the pace of the technology and the development, we need to make sure that the next summit happens in six months' time. And I'm super happy that the Brits agreed to that, and they got South Korea to host the next summit in six months, and then France in a year's time.
00:01:55
Speaker
So those are both confirmed. And I saw a rumor this morning that Canada might be up in 18 months. So I

China's Role and International Cooperation

00:02:02
Speaker
think it's really exciting to see that this is not just a one-off event, but it's becoming a trajectory and a process that hopefully will lead us to a safer world. This sounds fantastic. This sounds pretty positive above my expectations. What about the role of China at the summit?
00:02:23
Speaker
They are there. I think in the weeks leading up to it, there was a lot of back and forth. FLI strongly recommended that China be involved. We feel that AI safety, and sort of risks of loss of control and risks of misuse, are global risks that I think we can find common ground on with China, even though there might be some elements of AI policy, or many elements of AI policy, where we can't.
00:02:50
Speaker
That doesn't mean that you can't also have meetings with the Chinese government. And in the end, they were invited, though until, I think, three days ago, still only for day one. But they now also get to be there for the second day, which is the day that is in many ways more important because it has a smaller number of participants. And it's also where the Prime Minister himself, Sunak, will be chairing.
00:03:16
Speaker
And

Competitive AI Regulation Landscape

00:03:17
Speaker
yeah, I think it's really great that the Chinese government is involved at this level and is on a podium together with the United States, which we saw yesterday at the summit, we saw both China and the US sharing a podium. And I think, had Beijing hosted this or had DC hosted this, you wouldn't have seen that dynamic. So I think that is also sort of an exciting piece, and I think also the thing that really sets this summit apart from any other AI discussion.
00:03:46
Speaker
So what's the significance of the symbolism there? How much does something like that mean, sharing a stage together? I mean, you see a lot of competition emerging between governments on who can do the best AI regulation and who is defining the technology. You saw the EU come out very early with its AI act, and then I think some unease in DC about sort of, is this a repeat of data privacy?
00:04:12
Speaker
where the EU developed a law and sort of US states copied it, and the US was sort of left behind and didn't have much influence in shaping that. So I think that in part helps explain things like the Insight Forums in the US Congress, where Senate Majority Leader Schumer brought together lots of experts and where Max Tegmark, our president, participated.
00:04:36
Speaker
And you saw earlier this week on Monday, Biden come out with an executive order regulating artificial intelligence and putting down some, yeah, I think first significant rules. So.
00:04:52
Speaker
That's an added element of government involvement. And then China had already developed its own rules earlier in the year, has also put them out over the course of this year. So having the US and China both in the room means that in that competition, hopefully you can get a degree of convergence as well. And you avoid separate processes where China goes off on one end and tries to mobilize a group of nations and the US goes off on another.
00:05:19
Speaker
I think given the importance of the issue and the importance of global coherence and enforcement, we can't really afford having two or three different processes that exist side by side. So some organizations at the summit are interested in what's called responsible scaling of frontier models. Is that the regulatory approach

Policymaking and AI's Growing Significance

00:05:40
Speaker
that most are in favor of? And what do you think of this approach?
00:05:43
Speaker
I think the approach gained a lot of ground in the weeks before the summit and then sort of did a bit of a belly flop the week before where I think there was a lot of pushback and I think, I mean, Twitter was full of pushback and memes where people were suggesting that adding the adjective responsible in front of scaling doesn't suddenly
00:06:05
Speaker
give you AI safety. And I think that is an important message, one I agree with. I mean, if the emphasis here is on responsible, and if that is made to mean something, I think it can be quite good. And I think a company like Anthropic putting out four AI safety levels, saying, okay, if someone independently verifies that the models that we're going to put out
00:06:31
Speaker
have significant risk of misuse, then we won't put them out or deploy them. I think that is a helpful commitment. It will help policymakers define how to set rules. So I think it's great that companies participate in that.
00:06:47
Speaker
But it's also a very narrow approach to AI governance. And what we've done is we've put out some analysis just in the days leading up to the summit and that was published yesterday. And you can find it on futureoflife.org slash SSP.
00:07:06
Speaker
And that takes you to a comparison table, which compares, for example, the Anthropic proposal, but also many governance proposals: the EU AI Act, the Biden executive order, various other proposals put out by industry. And you can see
00:07:24
Speaker
what elements they do have, but more importantly, I think what elements they don't have. For example, responsible scaling doesn't include a mandate to register your AI model. It doesn't say anything about who's liable if you cause massive harm with your system. And those are sort of important elements of policy that we also need. So that's really critical as we sort of assess responsible scaling and maybe expand it a little bit.
00:07:53
Speaker
And I think the companies themselves, at least in my analysis, would probably want that as well. I think they realize that they're in a competitive environment. They can be committed to safety, also at a personal level, at a company level. But there's limits to that because ultimately, they also need to make a profit to stay afloat. That means that to ensure a level playing field, you need some sort of enforced regulation. And some of that will have to come from policymakers and not from the companies or their policies.
00:08:22
Speaker
Some proponents of responsible scaling frame this as the pragmatic choice, the politically feasible approach to regulation. So even though we might wish that we could get something more comprehensive, we probably can't, and so we should go for the more pragmatic choice. What do you think about that?
00:08:39
Speaker
I think the policy environment is changing very, very rapidly. If you told me in February, the month before we put out our open letter that I would be sitting here in the UK in November at a government summit about AI safety, and there would be three more coming in 18 months, I would have laughed you out of the room.
00:08:59
Speaker
So I think given that all of this is happening, and you see this dynamism and you see policymakers really stepping up, like going into an extra gear or two or three, I'm not sure if now is the time for pragmatism. I think we can be ambitious at the moment. We can go much further than like the absolute lowest common denominator. I think we also have an obligation to encourage policymakers, companies, academics to be ambitious.
00:09:29
Speaker
It certainly has been interesting to see King Charles and President Biden talk about AI safety, which I just hadn't seen coming even a year ago. When we look back, do you think we'll see this summit as a turning point for AI safety? I know it's early days. I know it's perhaps too early to declare victory, but it sounds pretty positive, especially with the commitments to host further summits and to get something done here.
00:09:55
Speaker
Yeah, I think maybe just to pick up on what you said earlier about Biden coming out to talk about AI safety and King Charles talking about AI safety. I was at a roundtable organized by the British Embassy in Washington two weeks ago, where also the whole British diplomatic system was explaining to American agencies what AI safety is, what their vision was for the summit.
00:10:18
Speaker
I haven't seen all the other embassies of the UK doing this in the other 27 countries, but I'm sure similar things have happened. And I think it really shows, in a way, the power of a bureaucracy when it does move into action: you just have a multiplier effect that is so much larger than you can achieve as a sort of small community of academics or people that are concerned about this issue.
00:10:41
Speaker
In terms of whether this summit is a turning point, I'd like to think so. I mean, it's pretty exciting that we had our FLI conference back in 2015 on AI safety, which was arguably the first. And now to see this at a government level, I think is a milestone.
00:10:57
Speaker
But despite the sort of pomp and ceremony and the king being there, we shouldn't forget that that doesn't create enforcement agencies or hard law. And I think that's still work that needs to be done. And I think it's also left for the next summit to define, can we get to hard international agreements, establishment of agencies. So I think we can be pleased, but we also shouldn't celebrate everything yet.
00:11:24
Speaker
Yeah, I guess we have seen before, and now I'm thinking about climate change, a lot of strong rhetoric, but perhaps not a lot of political action following up on that rhetoric, or at least, in my opinion, not enough. To what extent are governments paying attention to AI at the moment? Where would you rank this in a ranked list of

US vs EU AI Regulation Approaches

00:11:46
Speaker
issues?
00:11:47
Speaker
It's safe to say that AI governance showed up on the list of issues. Previously, this was being discussed only by civil servants at an expert level. Now it's sometimes reached the head of state or the head of government, so that's pretty impressive. Then again, there's also other issues that governments need to worry about. The conflict in Israel
00:12:08
Speaker
Gaza is a good example of something that's currently dominating global headlines. And yeah, I find it hard to provide a perfect ranking, but it's entered the top five, I think. When you talk to policymakers and government officials, are they managing to keep up with the pace of AI development? This might be an enormous task, because I think even people who work on AI have trouble keeping up. So
00:12:34
Speaker
How informed are policymakers and how does this influence what's politically possible? I've seen a lot of pleas for people saying like, please stop publishing articles. Can we please stop doing stuff? I need a break, I need to read. I think that feeling of overwhelm is with everyone working in AI governance. Can policymakers keep up?
00:12:57
Speaker
I mean, they'll struggle to keep up with the pace of any development. I mean, they're also only people and they have a lot of responsibilities to take care of in a week. But I do think it is really impressive to see people who have been engaging deeply with AI in various jurisdictions really get to grips with it. I think a really good example is Dragoș Tudorache, the MEP from Romania who is one of the co-leads on the AI Act, together with Brando Benifei from Italy. And they have been
00:13:27
Speaker
working on this proposal ever since it was introduced into the parliament by the European Commission back in April 2021. And it's obvious how much their understanding of the issue has grown, how much more sophisticated the debate has become, how much clearer their perspective on the risks has become. And I think you can say something similar about
00:13:51
Speaker
the discussions that maybe the US Congress was having on AI six months ago to now where you see, okay, they've had a number of briefings and people are talking about, do we need licensing? Yes or no. Do we need registration? Do we need monitoring of computational clusters? And I think that's a whole level of sophistication that didn't exist half a year ago. And so, yeah, I think you do see a lot of development.
00:14:17
Speaker
For non-insiders, could you give us an overview of the landscape of AI policy in Brussels and in Washington?
00:14:28
Speaker
In Brussels, I think you have the traditional left-right axis where the left will be more supportive of government intervention, more worried about the harms. The right wants to ensure that the role of government is relatively limited and are more concerned about innovation.
00:14:48
Speaker
The EU, given its history, is also ultimately regulating a market. I mean, that's what they know how to do and what they've done for many decades, and I think less about industrial policy, putting lots of money behind the technology, or geopolitics, because the EU doesn't have an army. So the perspective is in a way quite narrow and quite focused on limitations.
00:15:09
Speaker
And I think a lot of the debate in Brussels is around the role of big players, big tech, to what extent do we limit them, and the role of AI in national security. Can you have real-time biometric surveillance, under what conditions, for what crimes? Those are, I think, two additional sort of axes, the role of big tech and the role of AI in national security, that are defining the policy landscape in Brussels.
00:15:38
Speaker
In DC, I think you don't see that level of detail yet because it's still quite early days and there isn't a law, a legal proposal or a bill introduced yet that everyone has converged around. There are bills floating around, but there isn't one game in town that everyone is trying to define a position on yet.
00:16:02
Speaker
And I think you see a development in the US national debate around the role of social media in public life and whether that's a good or a bad thing. And I think potentially a realisation within both the Republican Party and the Democratic Party that they should have intervened earlier in social media and that there should have been a role for government there. And I think people are trying to draw lessons from that experience for AI.
00:16:32
Speaker
Yeah, I don't think everyone's certain yet on how they want to go and do that. Yeah, what explains this difference between the EU and the US where the EU seems to be in front on the regulation, which is kind of funny since the US has most of the large AI corporations. Yeah, what explains this? Why is this how things are?
00:16:53
Speaker
I find it hard to pinpoint that exactly. I mean, in a way it might be a little bit random. I think there is an element of accident or luck in that the European Parliament set up this thematic committee on AI a few years ago. And those discussions I think led to an early awareness that legal action might be necessary. And I think
00:17:16
Speaker
It's also maybe a function of the electoral cycle, if you compare that between the US and the EU, where the US of course has midterms, every four years elects a new president, whereas the European Parliament only gets elected every five years, and we're currently at the very end of that electoral cycle, like the elections are in spring next year.
00:17:37
Speaker
So no one had to worry too much about sort of being reelected for a while. I think there was a lot of deep focus on sort of policy. And I mean, there is, of course, this meme in European politics that many people will send their national politicians to Brussels if
00:17:53
Speaker
they want to get rid of them. And in a way, Brussels is just still less visible than maybe politics in Berlin or politics in Paris is to German or French people. And that does mean that people are maybe able to respond to these technological developments and sort of have more time to think and consider and contemplate them than maybe in some national political environments.
00:18:19
Speaker
It's surprising to me that the US isn't in front on regulation. You can

International AI Regulation Trends

00:18:23
Speaker
set aside whether you're pro comprehensive regulation or whether you want a lighter touch regulation. It's just that it would seem obvious to me that the country that's in front on the technology side would also be in front on the regulation side. I guess that's just not how things are. Do you think the US government can figure out who is responsible for regulating AI?
00:18:47
Speaker
Who internally within the government, I mean? Yeah, I mean, maybe just to go back to that surprise, I mean, I think there is a mentality in Silicon Valley to sort of go fast and break things. And I think that
00:19:00
Speaker
has yielded lots of companies, innovation, GDP growth for the United States. So I think it's not entirely, overwhelmingly surprising that there is no regulation. I mean, I think it's a business model that has worked well for the country. But we do see a changing world where I think it's not so obvious anymore, as it has been at least in the 90s and the early 2000s, that the US is the only dominant rule-setter. And I think
00:19:27
Speaker
the US is now waking up to the fact that the EU and its data privacy regulation, but most certainly also the Chinese in developing standards in technologies, are presenting rival options, and that means that there may be more need for US regulation and
00:19:47
Speaker
U.S. leadership if they want to influence the direction of this technology than there has been before. So I think that is now changing and that might lead to more surprise if you will.
00:19:58
Speaker
On sort of the question of where in the US government this needs to sit, I mean, that's a really tough question. And I think it's become harder and harder in the US system to introduce new agencies and set up new authorities. The executive order assigns most of the enforcement responsibility to NIST, the National Institute of Standards and Technology.
00:20:23
Speaker
That agency can't enforce anything. So that is somewhat problematic, and I think also, ideally, something that we either change or think about: is there another entity, maybe within the Department of Energy, that does have that enforcement authority and can make sure that rules that are eventually passed by Congress are also abided by, and that we sort of take the laggards along.
00:20:50
Speaker
How much do we know about what's happening in China and India on the policy side?
00:20:56
Speaker
I'm not a China expert and I know far less about Chinese AI policy than I do about the EU or the US. Then again, I think we've seen a willingness with the Chinese government to regulate and they've introduced a number of guidelines and rules also about ensuring that any outputs of AI models are in line with the values that the Chinese government wants to see.
00:21:20
Speaker
And around the summit, where China was participating, we also saw a statement released by Chinese and Western academics,
00:21:33
Speaker
where the Chinese delegation and participants also really backed this concept of regulating artificial intelligence. And in a way, I think it shows that the outlier at this point is the US. And I think the executive order to some extent remedies that. But yeah, China has put hard law onto the books. And so I mean, the EU is about to do the same.
00:21:58
Speaker
Yeah, maybe we should actually talk about this executive order. What's the content of it? It's early days and we've done some analysis, but it requires things like red teaming on the part of the companies and also mandates that the companies share that information and the results of that red teaming with the US government. It talks

Balancing AI Policy Development and Innovation

00:22:19
Speaker
about training runs initiated by foreign entities and making sure that, again, the US government is notified of those large training runs.
00:22:28
Speaker
But it also goes into things like the role of AI in housing, the role of AI in employment, in education, making sure that people have the right skills. So it's a really wide ranging executive order. And what I quite like about it is that it in a way puts to bed this endless ethics versus safety debate because it shows that you can have one executive order that in a way addresses
00:22:55
Speaker
the whole range of risks and harms from AI. And you don't have to have an instrument that just worries about one side of it. And I think the AI Act in the EU shows something very similar, in that it also tries to deal with both the risks from AI as it comes to integrating it into critical infrastructure or into hiring algorithms, and what happens with the most advanced general-purpose slash frontier models.
00:23:22
Speaker
So I think we should talk about some lines of skepticism about AI policy. And one of the ones I hear most often is that AI policy will just be too slow to matter. So the systems will have moved on whenever the legislation is implemented. Now, we just talked about how things are moving pretty fast and perhaps faster than they have in the past. But are you hopeful that policy will move fast enough in general to keep up with the pace of AI?
00:23:49
Speaker
So you're speaking to the head of policy, so I'm going to obviously defend the role of policy. And also, I mean, I completely grant that this is not the only thing that we need to do. We also need to work out the technical details of how to make these systems safer or how to build safe AI. But that, I think, doesn't mean that you sort of drop policy or stop doing it.
00:24:16
Speaker
I think that the accelerating pace of governments engaging with AI governance is really promising and I think shows that much more is possible than people maybe would have assumed or would have thought a year ago. So I think that's one. But I grant that policy moves at a certain pace, especially in democratic countries where things need to be deliberated, everyone gets to.
00:24:36
Speaker
put in their perspective, and that's what usually leads to better policies than if a dictator would just lay them down. But it does mean that you are under certain constraints and you can't maybe move as fast as you would like. Does that mean that if you get super harmful, out-of-control AI
00:24:55
Speaker
sooner than policymakers are ready for it, that you have a potentially massive problem? Yes, that is true. Does that mean that you then stop working on it, because there is that probability, however high you think it is, that a world-ending event would come before the policy is in place? Well,
00:25:17
Speaker
I mean, that doesn't seem rational to me either. I think you do want to ensure that you put the policies on the books. You hope that they're in time and you make sure that you also pursue other actions. Yeah, one point you've made is that we are racing towards advanced AI now and we want to have policies in place before we hit a point where we are unable to act. So we can't
00:25:41
Speaker
We can't begin thinking about policy when we are, say, one year away from advanced AI. And this mindset does exist with many people working on AI safety. Especially when I joined FLI two and a half years ago, I was meeting a lot of people that said, well, this whole policy thing, let's wait for a couple of years, let's do lots more research, and then let's develop AI systems. And just when they get
00:26:08
Speaker
terribly dangerous, that's when we call the government, and we put out one academic article on arXiv and tell everyone on the internet that this is where governments need to step in, and then we have the problem solved. And I find that deeply frustrating, because that's not how governments tend to work. Governments need a lot of time. As we talked about, democratic governments need time to deliberate, but also to build a bureaucracy or to set up a department. That doesn't happen overnight.
00:26:37
Speaker
And you need to hire experts and build that government capacity way before you get to the cliff edge. This is why I think it's super important to start doing this work and to make sure you do that way before you reach a point of no return.
00:26:53
Speaker
Maybe this instinct to wait and do more research comes from a worry about the regulation we then do implement being kind of rushed and not polished and perhaps not informed. This is kind of the opposite question of what I just asked. Do you worry that we're rushing towards regulation now before we have a full overview of AI risks? I think we need to ask ourselves what we want AI for.
00:27:19
Speaker
And I think this doesn't happen enough. It's something that our executive director, Anthony Aguirre, also highlighted in a recent paper, Close the Gates. You see things like AlphaFold, where DeepMind developed a system that can predict protein folding, which is hugely important to biology, and that doesn't require the sort of scaling that we've seen coming out of Meta or OpenAI or Anthropic.
00:27:49
Speaker
And if we put in regulation that potentially harms scaling, or potentially makes it more difficult to build ever-bigger black boxes that you can commercialize,
00:28:01
Speaker
and delays potential benefits that we will discover around those. I mean, okay, yes, that is a potential downside. I grant you that. But as long as we can still define specific benefits that we want from AI in narrow fields such as biology or healthcare or
00:28:22
Speaker
I don't know, transportation, self-driving cars, then we probably can reap most of the benefits from AI developments in ways that aren't going to be affected by regulatory efforts that target the most risky systems. So in many ways, I think you can have your cake and eat it.
00:28:42
Speaker
Yeah, we can imagine systems that are highly capable in more narrow domains and therefore do not pose the same risks of losing control of the systems that more general systems might do.
00:28:53
Speaker
Okay, here's another skeptical question for you. So you could say, OpenAI, DeepMind, Anthropic and so on, these top

Corporate Influence in AI Regulation

00:29:02
Speaker
AI, AGI corporations are interested in regulation because they want to capture that regulation and then make it beneficial for themselves and keep out the competition. How much do you worry about this?
00:29:15
Speaker
I worry about this somewhat. I think to some extent it is a legitimate concern that we need to worry about. I do also compare AI policy to other policy areas. I mean, if you think of pharmaceuticals, we've moved beyond the world where anyone could just develop a potion and sell it to someone else and be like, oh, this will make your skin glow or make your hair grow back.
00:29:38
Speaker
And we've put regulation and licensing in place so that if you get prescribed some sort of medication, you know that it tends to work or at least there's some evidence based around it. And that has led to some concentration, right? Your grandma can't put out her potion anymore and claim that it's beneficial in a medical way.
00:30:01
Speaker
I think we've accepted in that domain some cost to society and some limits and some concentration because we think the benefits are worth it. I think we need to strike that balance in AI as well, where yes, it is a trade-off, sure. People will have different opinions about that trade-off.
00:30:22
Speaker
If we want to protect our societies, we do need some entities that we can hold responsible. And this might be where collaboration is going to be really important. Can we build an institution where we have democratic oversight, we have participation of researchers from different countries who get access to systems or maybe who build systems collectively, so that you still can report about the systems transparently, you can make sure that
00:30:52
Speaker
not all the information and powers and controls sit within private-sector entities, and you still maintain some level of oversight that is much more powerful than just a free-for-all world. If top AI corporations want to collaborate around slowing down or implementing safety policies, could they be prevented from doing so by antitrust regulation?
00:31:19
Speaker
This is a really good question, and we would need an antitrust lawyer to look into that, but I don't feel expert enough to answer it. Okay.

EU AI Act and Its Implications

00:31:28
Speaker
Then let's talk about the EU AI Act, where I at least perceive you as an expert. How is this act set up? What is it trying to achieve?
00:31:37
Speaker
So when the EU AI Act was introduced, it was introduced to identify different AI systems and what kind of risks they carried. So systems that allow you to build a social scoring system for your government or your municipality, where, for example, if you cross a red light in Copenhagen, you would lose 10 points, but if you take care of your elderly relative, you gain five, those are prohibited. Similarly,
00:32:07
Speaker
Subliminal manipulation is a prohibited application. So those are examples of things that the AI act prohibits. Then the AI act also identifies a number of AI applications as high risk. So there's a list under the AI act of
00:32:24
Speaker
areas or sectors, such as, for example, the incorporation of AI in critical infrastructure or the incorporation of AI in hiring decisions, where you're allowed to use it. But if you do, you have to show what data sets went into your system. Did you take any measures against bias? What risks do you see? What mitigation measures have been taken? So there's a burden on you to show that you've taken appropriate safeguards.
00:32:51
Speaker
And it's not that all AI systems that are used in those sectors are going to be regulated under the latest compromise. It's only those that pose particularly severe risks in those sectors that are in scope. And then anything else is seen as low risk.
00:33:11
Speaker
And there will be some voluntary guidance that companies can follow if they have a low-risk application. But other than that, they are free to do whatever they want and they're not really touched by the AI act. What would be some examples of something that's in scope of the act versus something that's out of scope?
00:33:28
Speaker
Yeah, so like a hiring algorithm is a good example of something that's in scope. Biometric identification systems used by the police is something that's in scope, where you need to satisfy a number of criteria to be able to use that. And there's some that still argue that it shouldn't be allowed in any case. Then examples of things that wouldn't be in scope is
00:33:55
Speaker
AI used in navigation, for example in Google Maps, where the act really doesn't touch it at all. Or, when it comes to generating content, such as images or voice or text, you have to disclose that it is generated by an AI system, but other than that, you don't have to abide by any further requirements beyond the sort of message that this was generated by AI. So that is an example of where the act has a very light-touch approach.
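To make the tiering Mark describes concrete, here is a minimal illustrative sketch in Python. It is not the Act's actual legal test: the tier names, the keyword buckets, and the classify function are assumptions invented for illustration, loosely mirroring the examples from the conversation (prohibited uses such as social scoring and subliminal manipulation, high-risk sectors such as critical infrastructure and hiring, and everything else treated as low risk).

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"   # e.g. social scoring, subliminal manipulation
    HIGH_RISK = "high-risk"     # e.g. AI in critical infrastructure or hiring
    LOW_RISK = "low-risk"       # everything else: voluntary guidance only


# Hypothetical keyword buckets, used only for this toy example.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_SECTORS = {"critical infrastructure", "hiring", "biometric identification"}


def classify(application: str) -> RiskTier:
    """Map a described AI application onto the rough tier it would fall under."""
    app = application.lower()
    if any(use in app for use in PROHIBITED_USES):
        return RiskTier.PROHIBITED
    if any(sector in app for sector in HIGH_RISK_SECTORS):
        return RiskTier.HIGH_RISK
    return RiskTier.LOW_RISK


if __name__ == "__main__":
    for example in [
        "municipal social scoring system",
        "hiring algorithm that screens CVs",
        "navigation feature in a maps app",
    ]:
        print(f"{example!r} -> {classify(example).value}")
```

Running it prints the rough tier each example would fall under; the real Act applies far more detailed criteria, thresholds and exemptions than this sketch suggests.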
00:34:25
Speaker
Yeah, I imagine such an act is a product of compromise. It's being stretched by one faction and pulled in another direction by another faction. What do you think of the act as it stands now? What are the pros? What are the cons in your view?
00:34:44
Speaker
Yeah, I mean, it is definitely a sausage that's being produced by many. And, I mean, obviously, knowing how the sausage gets made is not something you often want to know. I think one thing that FLI asked for from the very beginning was making sure that more general systems, such as GPT-4, those models are also regulated, and that the burden of identifying the risks that those systems pose is
00:35:13
Speaker
really placed on the big players at the source of the value chain and not towards the end of it. Initially, because of the application-based approach of the act, if you are a small or medium enterprise or entity, let's say a hospital, and you build a chatbot on top of GPT-4
00:35:33
Speaker
to handle contact with your patients, then you as a hospital are liable for anything and everything that the system outputs, even though you might not understand what application you have bought. And I think making sure that responsibility is allocated in the right place is something we've spoken up about, and we now see the act is updated in that way.
00:35:55
Speaker
But on the process of making the act, I think it's important that people realize that the reason it takes a very long time and the reason that it sometimes gets stretched in different ways is most often because the major companies from the US have a lot of lobbyists and will try and pull it in one direction or another and delay it.
00:36:18
Speaker
I guess this is to be expected, but the EU takes a lot of blame for what is ultimately caused by corporate lobbying, namely delays or complicated law with exemptions. And I think for better or for worse, the lawmakers involved are trying their best to have and introduce the first ever AI law that is as good as possible, but it is often affected by this corporate influence.
00:36:43
Speaker
Is it a problem, in your opinion, that the EU AI Act does not regulate military applications? No. One of the other projects that we're working on is a treaty on autonomous weapons and the issue of autonomy in weapons systems. And you see a lot of division still within NATO,
00:37:03
Speaker
and within European Union members of NATO. And if you would have introduced guidelines or rules for military applications in this law, I don't think you would have reached any agreement. So I think it does make sense to exempt it so that you can focus on what you can agree on as you navigate military applications.
00:37:25
Speaker
One thing that I do think is a mistake is that the law only regulates AI applications that you put on the market in Europe. So if you're a European company and you're based in Slovakia and you export to Uganda, say, you can do whatever you want and you don't have to abide by any element of the AI Act, even though your product is made in Europe. And I think that is problematic because
00:37:49
Speaker
I can see that technology getting incorporated into, for example, surveillance, where, yeah, it isn't quite a military application, but I think it does contribute to authoritarianism and potentially to instability in this world, where I think the EU AI Act could have done more. How is the act enforced? I've heard some complaints that there isn't enough third-party enforcement of compliance with the act.
00:38:18
Speaker
Yes, so there is a regime for third party auditing that the act produces.
00:38:26
Speaker
But it's a very limited number of applications that are covered by that. Making sure that third-party auditing is introduced for the general-purpose AI systems, the GPT-4s and beyond of this world, and that it isn't left to some independent evaluators of limited status without any sort of rules or guidelines, or even to the companies themselves, is important. And I think
00:38:51
Speaker
within the AI safety community, we often hear this word "evals", and if I can put this out on this podcast: I really hope that word dies very quickly. I don't understand why you would abbreviate the word evaluations, but it is also a rather meaningless term, because I can evaluate your height and say that you're a certain height, but then have I evaluated the right thing? I think an audit, because we have comparisons from the financial sector, for example, is a much better defined concept.
00:39:20
Speaker
And to me it means that there's an independent third party, and you have a set of metrics that the independent third party evaluates you against and that you as a company can't influence. And I think we really need to move to that in AI governance, and away from this thing called evals. You would still need some research on developing these metrics. And I guess there you could take inspiration from the research that's being done on evaluations, which are kind of
00:39:48
Speaker
ideas about finding out in which cases are systems dangerous and how they might be dangerous. So it might not be fully incompatible, these two approaches.
00:39:57
Speaker
Oh, no, definitely. I mean, the work itself is super important, and I think we definitely need lots more of that. But I think we need to also understand the power dynamics that are at play here. It's not just about developing the concepts and the metrics and the benchmarks; it's about making sure that people actually meet them, abide by them and get punished if they don't. And that's arguably not something you can have a random evaluator without any status achieve.
00:40:27
Speaker
How much does the EU AI Act matter given that most of the cutting edge AI activity is in the US? I think policymakers look at other policymakers when they start to develop any legislation. And I think the EU AI Act matters because it actually gives an example and a blueprint of how you could do this thing. And yeah, if you're a low level
00:40:53
Speaker
civil servant and you're given the task of writing legislation in the US, say, you're going to have a close look at what other people have written, and you're going to end up with the EU AI Act, because it in many ways is the only game in town. There are also only so many ways you can define a rule about transparency; you're going to run out of ways to write this down in law. So yes, for some elements of policy, I think there will be divergence, there will be different options.
00:41:22
Speaker
But I think, yeah, for some basic concepts, I think the EU will have probably arrived at some compromise or solution that many will be tempted to adopt. And you see, for example, in the Brazilian Senate, a law has been introduced that copies many, many elements of the EU AI Act. And if companies want to sell their products outside of the US, and you see more and more jurisdictions adopting elements or all of the EU AI Act,
00:41:48
Speaker
I think the incentive for them to abide by it or by parts of it becomes greater. And that becomes even more so if individual US states adopt elements of the act, as we saw with data privacy laws, where California copied the framework of the EU. So most

US AI Governance Components

00:42:06
Speaker
of the leading AI development currently happens in the US, but that doesn't mean that the EU bill doesn't have impact.
00:42:15
Speaker
How do you think it plays in US politics if you are accused, so to speak, of taking inspiration from the Europeans?
00:42:23
Speaker
You probably don't want to say you took inspiration from the Europeans. And I mean, you see that also around this summit now, right? In the same week, you have both the UK government saying they are leading the field on AI regulation, look at our summit, and Biden saying we're leading the field in AI because look at our executive order. So
00:42:45
Speaker
I think that's part of the game, and it should also be possible to define a national response within certain limits, where you work towards convergence. The EU and the United States together have a Trade and Technology Council, where they meet twice a year and discuss AI, and they've just published a draft list of terminology and definitions in AI
00:43:13
Speaker
that I think shows that, yeah, there are just a lot of concepts and things that will be shared across the world and across multiple jurisdictions. And when we develop benchmarks and standardization, a lot of that will be held in common, even though, yeah, there might still be a national AI policy or a bill that diverges in part.
00:43:38
Speaker
If we look beyond the EU AI Act and just at regulation in general, why do you worry more about under-regulation than over-regulation?
00:43:47
Speaker
I think it's very hard, once you have a bill on the books covering a certain area, to say that, oops, we've missed something really big. We're going to have to redo this, guys, and introduce a new AI bill, which then means you are left without the sort of oomph and
00:44:08
Speaker
big moment and celebration of the fact that this is the first AI law. No, this is the second, revised edition, and it comes along after lots of time. It doesn't get you any press coverage because everyone feels it's already been done. That's super, super hard. Whereas I think having sort of
00:44:26
Speaker
made sure that we are safe and that some of the harms that we fear don't materialize, even if that means slowing down AI development and some of the potential benefits only materializing later, once we decide that the rule was really stupid, and it took us a couple of decades, but we had to roll it back to make this possible,
00:44:51
Speaker
to me, seems like a much better approach. Yes, we will lose out on some benefits, but we also didn't have to worry about some of the worst harms that we're currently discussing. I think the worry here from the, let's call it anti-regulation side, is that these regulations will stay in place long after they've stopped being useful. Regulations are difficult to get rid of.
00:45:19
Speaker
this idea of rolling it back once we feel like we've overstepped, it might be more difficult than we think. True, but I think we also need to really take a frank look at what the upside and downside risks and benefits are here. If we are concerned about a substantial risk of extinction, then I think we should be willing to tolerate some delay. At least that would be my take.
00:45:46
Speaker
Let's talk more about US legislation. We talked about the executive order. There's also a Schumer bill. What's the content of the Schumer bill? How kind of fleshed out is this? My impression when doing the research is that I couldn't find that much US kind of regulation that was fleshed out. What do we know about the Schumer bill?
00:46:06
Speaker
We know very little about the Schumer bill because it doesn't exist yet. There is a set of Insight Forums being organized where Senate Majority Leader Schumer brings together various experts to talk about AI policy and sort of elements of it,
00:46:22
Speaker
to try and inform this bill. And you see other senators and other members of Congress come out with proposals. So Ted Lieu came out in the House of Representatives with a proposal for an agency. We saw Senator Hawley and Blumenthal come out with a proposal where they talked about the need for a license for certain types of models, registration obligations. So I think
00:46:46
Speaker
those draft proposals will likely influence what ends up in the final bill, but it's not there yet. So I think we're going to have to wait and see a little bit. What about the AI Bill of Rights that was introduced, I think, some time ago now? What's the status on that?
00:47:04
Speaker
This exists, and so this is a piece of policy that the US federal government has imposed on itself in a way. It's asking the US federal agencies to look at that Bill of Rights and its guidance as they define and introduce algorithms, for example.
00:47:22
Speaker
And I'm from the Netherlands, where the Dutch political debate has been dominated by something that's now become known as the Dutch child benefit scandal, where your child benefits, if you have a child and have a lower income,
00:47:39
Speaker
were defined by an algorithm, an algorithm that still hasn't been made public. But we do know that things like someone's second nationality fed into that algorithm. And whether or not the algorithm actually found you at fault was often deeply biased, or even, I mean, there were a huge number of errors where people really had
00:48:05
Speaker
absolutely no grounds for being included on the ultimate fraud list that the algorithm generated. But it did mean that some people, once they did end up on that fraud list generated by the algorithm, were forced to pay back all of their tax rebates or benefits over an eight-, nine-, or ten-year period
00:48:28
Speaker
at once, with interest. It caused some people to emigrate and leave the country. Child protection services sometimes took children away because families were unable to provide for their children after these measures had been taken. Some people committed suicide because of ending up in that situation. And I think the AI Bill of Rights seems to me like a great proposal to try and at least mitigate those kinds of harms that
00:48:56
Speaker
the government risks bringing upon itself, or sort of imposing on its people, by rushing to introduce, yeah, digitalization in areas where that hasn't been thought through or not all considerations have been properly thought about. I mean, there are other examples from other countries. You have the postmaster scandal in the UK, where
00:49:17
Speaker
people running post offices were accused of committing fraud when they hadn't. There's the Robodebt scandal in Australia. So these are all cases that I think the US AI Bill of Rights is trying to prevent happening in the US. And the US government, of course, is a massive government, and I think in many ways that's also a blessing, because it's been slower to adopt
00:49:39
Speaker
digital technology than maybe smaller nations such as the Netherlands or Australia or Estonia have done. It means it hasn't maybe seen some of the worst harms yet. Then again, the Bill of Rights also has huge limitations. It only applies to the federal government, and that's not where a lot of the risk of AI development is coming from. So it's great that we have that Bill of Rights, it's great that we have the executive order, but we also need actual enforceable
00:50:09
Speaker
law. And for that, we need to look to Congress. Do you think

Bipartisanship in AI Policy

00:50:15
Speaker
the AI Bill of Rights might serve as a point of agreement between the two sides of the debate, in that perhaps, even if you're against regulation of private entities, you might be interested in limiting the powers of the government? And so there might be some agreement there.
00:50:35
Speaker
Yeah, I think that the Bill of Rights can feed into potential legislation and be a good starting point. I definitely don't think it would be sufficient just to put that on the books as hard law.
00:50:47
Speaker
One thing I worry about is that we will ask the US government and we will ask the EU to regulate these fast-moving technologies, including AI, at a time in which they are becoming less and less functional. A lot hinges on what happens in the next US election, and I imagine that this election
00:51:09
Speaker
could have a pretty large impact on which direction US policy takes on the AI side. Do you share this worry? What do you think? Is there anything we can do about it? What stance should we take towards this? Should we just keep our heads down and keep working on what's in front of us? AI policy so far is remarkably bipartisan.
00:51:32
Speaker
I think there is a bigger cultural shift around the role of big tech in US life and how people are reevaluating and assessing that, that I think is shared on both the Republican and the Democratic side. And I think that gives me some confidence that even if political polarization continues on the trajectory that we see now, we're able to get to some agreement on core AI governance decisions.
00:52:01
Speaker
But it is definitely a worry that democratic states struggle to pass measures that are necessary and that their people want. I mean, there's been a lot of polling that came out around AI governance and risks, where you see overwhelming support from the public to tackle this issue and to have government step in. So I think there is that democratic base. So I think it
00:52:28
Speaker
Yeah, we really do need Congress to act upon that, but it is a risk. One impression I've gotten from people working on the technical side of AI safety is a certain skepticism about policy work. But also my impression is that this skepticism has kind of dissipated lately, where
00:52:47
Speaker
people on the technical side of AI safety have become more interested in policy work, perhaps because the field is moving so quickly and many complicated technical schemes to make AI safe might not be done in time. Do you share that impression?
00:53:04
Speaker
I do, yes. I think there is a re-evaluation of AI policy and how quickly change can happen and what can be done there. I mean, maybe

Careers in AI Safety Governance

00:53:15
Speaker
I'm going to try and defend the other side a little bit in that I think we also need moonshots in AI, technical AI safety, where I think there is a role for government here to say, this is what we need. These are the features that we're looking for in AI development.
00:53:33
Speaker
That doesn't exist at the moment; go and find out how to do that, so that you take away, I think, some of the talent and focus from the current paradigm, which is scale, scale, scale, towards things like, you know, the concepts that Stuart Russell has talked about, in terms of making sure that
00:53:52
Speaker
AI systems always apply a degree of probability towards what they know about the world around them and about what humans want. There might be many other approaches that we haven't thought of or haven't developed that could be developed, but I think to some extent you would then need to make sure that that thinking happens outside of the labs that are all sort of stuck in this one paradigm.
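As a rough illustration of the kind of idea referenced here, the toy sketch below keeps a probability distribution over candidate human objectives, updates it with Bayes' rule as evidence arrives, and defers to the human while uncertainty is high. This is a hedged sketch of the general concept of uncertainty over objectives, not Stuart Russell's actual proposal or any lab's implementation; the objectives, likelihoods, and entropy threshold are made up for the example.

```python
import math

# Hypothetical candidate objectives and the agent's prior belief over them.
beliefs = {"make coffee": 0.4, "tidy desk": 0.35, "do nothing": 0.25}


def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)


def update(dist, objective, likelihood):
    """Bayesian update after observing evidence favouring one candidate objective."""
    posterior = {k: p * (likelihood if k == objective else 1.0) for k, p in dist.items()}
    total = sum(posterior.values())
    return {k: p / total for k, p in posterior.items()}


def act(dist, threshold_bits=1.0):
    """Defer to the human while uncertainty is high; otherwise pursue the likeliest objective."""
    if entropy(dist) > threshold_bits:
        return "ask the human for clarification"
    return f"pursue: {max(dist, key=dist.get)}"


print(act(beliefs))                                       # high uncertainty -> ask
beliefs = update(beliefs, "make coffee", likelihood=8.0)  # e.g. the human reaches for a mug
print(act(beliefs))                                       # belief has sharpened -> act
```

The design choice worth noting is that asking is itself an action: the agent only commits to an objective once its uncertainty about what the human wants has fallen below a threshold.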
00:54:16
Speaker
There's obviously a collaboration here between the technical side and the policy side, where the policy side is interested in implementing some solutions that the technical side comes up with and so on. Say you're interested in working on AI safety in general, perhaps your early career. Where should you go? Where should you start? Would you go to the government? Would you go to academia? Would you go work at one of the AI companies?
00:54:42
Speaker
My recommendation is go to government, because governments have woken up this year, 2023, to AI governance, and they'll be hiring lots and lots of people and they need to build that expertise. And they will probably need external people to come in to help them build that. Whereas the companies already have AI safety teams. So in a way, your added value is going to be
00:55:07
Speaker
I guess, more limited. And I mean, I think you could say the same for academia as for governments: I think it's also really important that that field grows, and that more and more people think about these challenges. And I sort of see the AI safety institutes that were announced this week, the one in the US and sort of the expanded one in the UK, as also really good examples of places where you can work that are sort of academic-government crossover places.
00:55:37
Speaker
I feel that, whereas it made a lot of sense to work on AI safety within companies as they were developing these systems, when a lot of the underlying technologies, the neural nets, were just being developed and released and discussed,
00:55:58
Speaker
I do think that since February, March of this year, when you saw the Financial Times do analysis about how many billions of dollars are now being poured into the AI industry, the profit motive will have overtaken, I think, the dynamic in many of these companies, perhaps not all, but in many. I think that means that you
00:56:21
Speaker
do need to think carefully about whether your role in a company is the most effective way to work on safety of these systems. We can compare this to big oil or big tobacco where
00:56:37
Speaker
people that work for Exxon or Shell will be really concerned about climate change on an individual level. But that doesn't mean that they can change the way that those companies operate at the end of the day. And I don't think we can expect AI companies to be wholly different beasts where somehow employees in that industry will be so much more powerful than employees in other industries that have a clear profit motive.
00:57:04
Speaker
So with that in mind, I think my recommendation would be to join the government or to research as an academic. Let's talk about kind of which risks we see from AI and whether we have the tools, the kind of regulatory tools necessary to address these risks. So if we start with something like individualized persuasion, where you can target a person and you can say, okay,
00:57:31
Speaker
The reason why you should, and this person might be a part of a very small group, the reason why you should vote for this candidate is such and such. And this has been infinitely kind of A-B tested to be perfectly tailored to this particular person. Do we have the tools necessary to regulate something like that?
00:57:51
Speaker
I think the EU AI Act attempted this by prohibiting subliminal manipulation. I think nobody really knows what the word subliminal means and where this begins and ends. So I think that's a real challenge. I think to some extent any law is going to have vague concepts.
00:58:10
Speaker
There are prohibitions that you can probably lay down that will address some of this. You will often want to do this at scale. I would think if you have a political motive in a democracy, you need to convince a lot of people. If you have a profit motive, you want to sell a lot of products. So your manipulation needs to affect a lot of people.
00:58:31
Speaker
which makes it potentially easier to detect or enforce. Hopefully not everyone is trying to manipulate other people, so the number of potential actors that you need to survey might be relatively limited, which makes me cautiously optimistic that this might be possible.
00:58:49
Speaker
It's funny that you should

AI Risks: Bioterrorism and Autonomy

00:58:50
Speaker
mention uncertainty about the meaning of certain words in legislation. One frustration I've had in trying to read some of this proposed legislation is just how much hinges on definitions of reasonable and proportional and necessary in a democratic state or country and so on. Is this just how law functions or could we be more precise in our legislation or would that have some downsides?
00:59:18
Speaker
I think we really don't want to be more precise. My pitch here is for constructive ambiguity, especially with a technology that's so in flux. We need to lay down what we want and what we are worried about. Yes, we want to give companies some certainty, so there is a balance to be struck.
00:59:37
Speaker
But we also need to make sure that it isn't so specific that if a new type of risk emerges, or something we hadn't anticipated, we really have to go all the way back to the drawing board. And a lot of legislation simply lays down a norm that people will try to respect.
00:59:58
Speaker
We don't rely on courts and judges and fines for most rule-abiding behaviour and I think that is really critical in that
01:00:10
Speaker
Once these concepts are laid down and people start interpreting them, most people will make the right call and will behave in the way that the lawmakers have intended them to behave. And it's going to be a very small minority that will face enforcement action. And that means that the law will bring about the intended effect.
01:00:28
Speaker
Do you think we have the regulatory tools necessary to prevent something like AI-enabled bioterrorism, where you use perhaps the next generation language model to tell you how to synthesize something that could be very dangerous to all humans?
01:00:48
Speaker
With the current generations of systems, the jury is still out on the extent to which they amplify these risks. And a lot of research is currently being done. I saw a new report from RAND come out this week as well, where I think they compare googling this information versus using a large language model: how much closer or easier does it then become for a non-specialist to develop a weapon like this?
01:01:15
Speaker
The executive order also mandates more oversight of DNA synthesis companies, and that seems like an obvious policy measure to take and one you really want to develop. That is the first time I've seen any regulator anywhere attempt this, so we're not there yet in any shape or form.
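[Editor's note: below is a toy sketch of the kind of order-screening check that DNA synthesis oversight might involve. The listed sequences, window length, and matching rule are entirely hypothetical placeholders; real screening pipelines rely on curated databases and far more sophisticated matching.]

```python
# Toy illustration of DNA synthesis order screening: flag an order if any window
# of the requested sequence matches a listed "sequence of concern".
# All data here is a placeholder, not real pathogen sequences.

SEQUENCES_OF_CONCERN = {
    "ATGCGTACCTGAGGAT",   # hypothetical placeholder entries
    "TTGACCGGATTACAGC",
}

WINDOW = 16  # hypothetical window length for exact-match screening


def flag_order(order_sequence: str) -> bool:
    """Return True if any window of the ordered sequence matches a listed sequence."""
    seq = order_sequence.upper()
    return any(
        seq[i:i + WINDOW] in SEQUENCES_OF_CONCERN
        for i in range(len(seq) - WINDOW + 1)
    )


if __name__ == "__main__":
    print(flag_order("cccATGCGTACCTGAGGATccc"))  # True: contains a listed window
    print(flag_order("CCCCCCCCCCCCCCCCCCCCCC"))  # False: no match
```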
01:01:41
Speaker
We want to make sure that governments improve their understanding of their national landscape: where could biorisk emerge in their country, where are the companies that could potentially build this. I think to some extent this links to the open source question, where if you've put guardrails on your model to prevent this information, or these instructions, from being shared with
01:02:08
Speaker
just anyone based on any prompt, but you then put out the model in such a way that those guardrails can be easily removed through fine-tuning, then yeah, that obviously amplifies the risks, and that presents a whole new range of questions.
01:02:24
Speaker
Yeah, I think the worry is that some future model will contain some tacit knowledge from its training that's not available when you search on Google. It might be able to fill in the gaps in its guidance of how to synthesize something dangerous.
01:02:42
Speaker
Where

Challenges in Open Source AI Regulation

01:02:43
Speaker
do you think we might regulate this? We could regulate this by regulating the model itself. That's pretty heavy-handed, perhaps. We could regulate it by regulating these companies that synthesize DNA, or we could perhaps try to monitor wastewater for what might become pandemics. Do you have any thoughts about what might be the best approach for this?
01:03:08
Speaker
I don't have clear thoughts on where, out of these three options, we would want to put the heaviest focus. And I think things like the RAND study are really critical at this moment to try and nail down what the chain of events is and how we can best address this.
01:03:35
Speaker
What about something like rogue AI? And here I'm thinking about AI that becomes more like an agent and less like a tool, perhaps develops motives that we didn't intend to encode into it. Perhaps it begins seeking power in ways that we would like it not to. How can we regulate this? What kind of regulatory tools do we have available for this?
01:04:01
Speaker
Yeah, so here there's been a lot of discussion, also within the Partnership on AI, on whether you can restrict autonomy. Can you restrict, for example, linking some of these models to external applications? Or if you do that, do it in a phased way after you release the model: don't allow any random entity to build something and integrate it with your system, but
01:04:29
Speaker
like do a risk analysis and assessment before you introduce that. I think there are ways to limit this and to mandate that as well to try and address these emerging risks from increased autonomy. What do you think about open source? How can we regulate this? Because as I see it, there's a specific challenge here where open source is decentralized, it might be
01:04:55
Speaker
models uploaded to decentralized networks by anonymous people. You can't necessarily go knock on the door of some company that produced the model. How do you regulate open source AI?
01:05:10
Speaker
I find that possibly the toughest question in AI governance at the moment: how we deal with open source. Maybe just taking one step back, I think we need to identify what drives various people in the debate around open source. Some people are really concerned about regulating open source because they're worried that it will limit innovation to US private sector companies and potentially Chinese companies, and that it will cut off the rest of the world from participating. So that's one group of people.
01:05:39
Speaker
And then you have a different group of people who are worried about surveillance states, and about how much control the private sector would have if you didn't open-source these systems. And on the other side of the debate, you have people that are really worried about, let's say, North Korea obtaining a system and using it to attack critical infrastructure, or people using a model to spread
01:06:05
Speaker
disinformation and ensure that there's a breakdown of truth, or any random terrorist group obtaining a system to potentially build a bioweapon. So I think we need to separate out those motivations, because there's a lot you can do in policy once you understand how you can meet some of the objectives that the groups in this debate have.
01:06:29
Speaker
I would really recommend everyone who hasn't done so to read the paper by Meredith Whittaker, the president of Signal, that came out a few weeks ago, where she analyzes, for example, how the push by Meta for open source is probably commercially driven and
01:06:47
Speaker
could be an attempt to capture the open source ecosystem, in the sense that you get lots and lots of developers to contribute their knowledge for free and then try and build a proprietary product on top of that once the community has offered things that are almost ready to be built into a product. What I think she highlights there as well is how
01:07:08
Speaker
the term open source has positive connotations from software, where it's just one person writing an open source program and everyone feels all the sympathy towards this lone developer fighting big tech.
01:07:24
Speaker
That positive connotation is used in open source AI, where the situation is very different, because you need a lot of compute and quite a lot of money to train any system. To evade regulation, open source, and I think she calls it open source washing, is used as a way to fight back against the AI Act, for example, and say, no, no, no, we need a world that's free and open, therefore we

Multilateral Cooperation for AI Transparency

01:07:51
Speaker
shouldn't possibly touch open source.
01:07:53
Speaker
That argument tends to be driven by the likes of Meta, which have a commercial or business plan underlying it, rather than being motivated entirely by principle. Then, on the actual policy question of what we do about it,
01:08:10
Speaker
I think we need to be very specific about the kind of models that we're worried about. I do think we need to impose restrictions here, in that we don't allow people to open-source everything. We've been through this in many ways with nuclear technology, where a lot of information around how to build a nuclear bomb isn't something that you're free to share if you're a nuclear physicist in the United States,
01:08:36
Speaker
for good reason, and I think some recognition that there are models out there that shouldn't proliferate, I think, at least from FLI's perspective, is really, really important. But I think once you strip down the debate to its essential components and understand who's driving what, I think there's also a lot of common ground that can be found between various players.
01:08:59
Speaker
in the sense that many models that people are worried about being restricted probably wouldn't face any restrictions. I think most people in this debate can probably agree that the major players shouldn't be able to offload the risks that they pose to society just by putting something out as open source. So I think there is maybe more common ground that can be found here than we see at the moment.
01:09:25
Speaker
What about the worry about centralization? I mean, we could imagine a possible future with one or two models that you can access, a duopoly perhaps. And perhaps there's a partnership between the government and the companies running these models. And this all begins looking very autocratic, perhaps. What can we do to prevent that?
01:09:46
Speaker
I think that's where we need multilateral cooperation. I think people have talked about a CERN for AI. I think bringing together various researchers and governments working on this problem and ensuring that there is sufficient transparency around that and governmental oversight and participation I think is really key in trying to tackle this issue. But there is definitely a tension right between
01:10:14
Speaker
trying to limit harms from ever more powerful systems, and
01:10:20
Speaker
openness and giving everyone an opportunity to use those systems. I mean, to some extent, you can't exactly have it both ways. If you are going to limit the number of models or sort of how powerful they are, then I think doing that in a controlled environment that is as transparent and democratic as possible is, I think, what I hope future summits, for example, the one in South Korea or the one in France will be exploring.
01:10:50
Speaker
What are your favorite regulatory tools where a regulatory tool might be something like saying that a company needs to do red teaming and then give that data about that red teaming to the government?
01:11:05
Speaker
I think my favorite is risk identification. We saw, for example, with Facebook and their social media newsfeed algorithm, that they knew that this caused mental health issues with young women and girls, but that risk and that effect was never disclosed to the public.
01:11:26
Speaker
By mandating risk identification, you at least have a basis that you can start to have a discussion about, and you can have policymakers, but also the general public and journalists, ask questions and discuss. And in many ways the terms and conditions of the major players
01:11:45
Speaker
highlight a number of risks already. They basically say: you can't use our system for political campaigns, or you can't use our system for XYZ, and I think that's a good start. So in my mind it shows that this is a feasible regulatory tool. You can take inspiration from the terms and conditions that exist,
01:12:06
Speaker
but you want, I think, that list to be as comprehensive as possible. And here there is a real tension between what is in the societal interest, in all of our collective interest, and what's in the company's interest, right? Because if I were a lawyer representing any of the companies, I would say the last thing we want is a list of risks online, because that means that we had evidence of that risk materializing and we could be held liable for it.
01:12:30
Speaker
Yeah, I see the resistance, but I think that is something we really do need, and in my mind it is the most important thing: companies doing a frank risk assessment and sharing that with all of us. And then the enforcement mechanism is liability. So if they are aware of some risk, and they've published that they're aware of this risk, and then they still push ahead, then they can be held legally liable, with fines or whatever other consequences there might be.
01:12:59
Speaker
Well, I think we don't maybe even have to go there because I think if you've like shouted about the risks that you see and you've been forced to tell everyone about them, you're going to do a pretty good job mitigating them, right? Because you still want to sell your products and you still want to have people trust your products. And you also just want to avoid a PR disaster. So you're

Incentives for AI Safety

01:13:18
Speaker
going to want to really tell your safety team to work twice as hard now that you've put that on the internet. Yes, I think ultimately you also want liability.
01:13:27
Speaker
But my hope is that by forcing them to disclose this, and forcing much more detail than maybe the current safety plans of the companies provide, you motivate these companies to change course. How much do you think companies are motivated by avoiding PR disasters versus avoiding, say, fines?
01:13:47
Speaker
Given how much money is earned in this industry and at least how much investment is being attracted, I would think companies are going to be largely motivated by PR risks and reputations.
01:14:02
Speaker
The EU AI Act, interestingly, provides for fines that are 4% to 6% of annual turnover. That means that the fine is substantially larger if you're a substantially larger company, and it means that a lot of the rules have teeth. I guess my answer is, it depends on how you structure the fines. I think reputations do play a large role in company behaviour.
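[Editor's note: below is a purely illustrative calculation of what a turnover-based fine range like the one mentioned here would mean in practice. The 4% to 6% figures are as stated in the conversation; the company and its turnover are hypothetical, and the final text of the EU AI Act may set different tiers.]

```python
# Illustrative only: compute the possible fine range for a hypothetical company,
# using the 4-6% of annual turnover figures mentioned in the conversation.

def fine_range(annual_turnover_eur: float,
               min_rate: float = 0.04,
               max_rate: float = 0.06) -> tuple[float, float]:
    """Return the (low, high) fine bounds as a share of annual turnover."""
    return annual_turnover_eur * min_rate, annual_turnover_eur * max_rate

if __name__ == "__main__":
    # Hypothetical company with EUR 10 billion annual turnover.
    low, high = fine_range(10_000_000_000)
    print(f"Possible fine: EUR {low:,.0f} to EUR {high:,.0f}")
    # -> Possible fine: EUR 400,000,000 to EUR 600,000,000
```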
01:14:31
Speaker
What do you think of bug bounties? So setting aside a bag of money and giving that bag of money to the person who identifies a vulnerability in the system.
01:14:40
Speaker
I think that's great. It seems like

Compute Governance and Ethical Concerns

01:14:44
Speaker
something that's pretty easy for the companies to accept also, and perhaps that's why some of them have done so. Perhaps that's also a point of agreement between many sides in the debate. What do you think about governments subsidizing AI safety work, specifically technical AI safety work?
01:15:03
Speaker
I think that is also really important. I think we do need to think very carefully about where we want AI development to happen and how. And I see a much greater role for the public, for democracies, for public debate to define like
01:15:20
Speaker
what we want out of AI development. And also, I mean, that should then dictate what you subsidize. And maybe that doesn't mean that you actually subsidize people, but you also bring that into an AI safety institute, for example, that is part of the government. So you may want to go a lot further than just providing a subsidy, as well as multilaterally or nationally defining certain challenges that you set to the AI industry or to researchers.
01:15:46
Speaker
for which you potentially offer a reward, or a monetary incentive, if met. Yeah, I could see those kinds of structures, beyond just expanding the number of academics, which I think would also be good, to make sure that this is an issue that doesn't just get worked on at a limited number of faculties, but truly becomes a global research problem.
01:16:07
Speaker
There's also the problem of distinguishing between safety research and capabilities research, where something like reinforcement learning from human feedback, which is what basically allowed ChatGPT to work as well as it does, originated as a safety project. But it became something that, in my opinion, advanced capabilities. So we would have to be careful about what it is that we subsidize,
01:16:34
Speaker
how we test, how we judge whether this is pushing safety forward more than pushing capabilities forward. And that's a difficult call. Yeah, so there was an event organized the night before the summit with Stuart Russell, where he really highlighted that there's a difference between making AI safe and making safe AI. I think that is a really essential truth, at least in my mind, that we haven't done enough with. It is one thing to have
01:17:01
Speaker
companies just build whatever product they want to and then go, oh, we forgot about the safety department, but thank god we have one on the third floor, let's call them and ask if they can do some reinforcement learning from human feedback or some other measure that they can put on top to make it safe.
01:17:17
Speaker
And, I mean, Stuart's claim here was that this is never going to work, and I tend to agree with him. Whereas the alternative is making sure that you define from the outset: okay, we need something that's actually safe, and it needs to meet these specifications, and this is therefore what we're going to build. And I think that's where
01:17:37
Speaker
you need a sort of non-profit perspective, or the government and public sector, to step in, because incentives might just be different in the private sector, and that makes it harder, not impossible, but harder, to take that approach. What do you think of compute governance as a framework, where compute governance might mean something like putting a cap on the largest training runs you can do for advanced AI?
01:18:03
Speaker
There are various policies to say, okay, we need computational power caps and also different requirements if you exceed certain levels, including a hard cap for the highest tier. I think this is a really important part of the conversation, and we definitely need monitoring. At the moment governments don't even know
01:18:27
Speaker
who has a lot of computational power in their country. So I think that's a really important first step. We also need to make sure that we think about, and discuss with hardware manufacturers, how chips are built and how we can potentially ensure that the most advanced AI chips
01:18:48
Speaker
can be turned off, and whether there are hardware options that mean you don't know what the system is doing and you don't interfere in people's privacy, but you are able to shut off particularly large clusters of the most advanced technology. The question looming

International Collaboration in AI Safety

01:19:10
Speaker
over this is: is it going to be enough, is it going to be durable? Probably not.
01:19:14
Speaker
And also, is it going to be long-lasting when computing hardware becomes better and better? At some point, you'll be able to train a model that has the same capabilities as GPT-4 on a gaming PC.
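[Editor's note: below is a minimal sketch of what a tiered compute-governance rule of the kind described above could look like. The FLOP thresholds, tier obligations, and hard cap are hypothetical illustrations, not figures from any actual regulation or FLI proposal.]

```python
# A minimal sketch of a tiered compute-governance rule: obligations grow with
# training compute, with a hard cap at the top. All numbers are hypothetical.

TIERS = [
    (1e24, "registration of the training run"),
    (1e26, "reporting of evaluations and red-teaming results to the regulator"),
    (1e28, "licensing and third-party audits"),
]
HARD_CAP_FLOP = 1e29  # hypothetical upper limit on any single training run


def requirements_for(training_flop: float) -> str:
    """Map a planned training run's total compute to its (hypothetical) obligations."""
    if training_flop >= HARD_CAP_FLOP:
        return "training run not permitted (exceeds hard cap)"
    for threshold, obligation in reversed(TIERS):
        if training_flop >= threshold:
            return obligation
    return "no additional requirements"


if __name__ == "__main__":
    for flop in (1e23, 5e25, 3e26, 2e29):
        print(f"{flop:.0e} FLOP -> {requirements_for(flop)}")
```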
01:19:31
Speaker
I feel that, on the one hand, and taking another Dutch example because ASML is there as the producer of the chip manufacturing machines,
01:19:46
Speaker
they were always sort of a commercial company, selling to China, selling to the US, selling to anyone who would pay for their products. And suddenly, I think, they found themselves in the middle of a geopolitical debate. There's been an agreement reached between Biden, the Japanese and the Dutch governments, because that's where most of the advanced chip supply chain comes from, and suddenly their product became this hugely political thing that
01:20:14
Speaker
they needed to define policy around, and where there are limits. So I think it does show that even compute can enter a very different paradigm. It is not a given that the way the hardware gets developed will continue the way we do it today, and we may want to reconsider that. And I think we need to buy ourselves a little bit of time.
01:20:40
Speaker
Regardless of which risks you see from AI development, I think we can all agree that there are many, and that they deserve a societal response that we are not ready for yet. And the fact that we are just at the beginning of developing a lot of legislation, of educating policymakers, means we just need to make sure that we have the time to put that in place.
01:21:06
Speaker
So if compute governance could help us buy that time until we figure out something that's more sensible, then I think it's worth pursuing, even though it's not going to be durable. One reason it might be possible is that the supply chain of advanced chips is so concentrated in specific companies like TSMC in Taiwan and ASML in the Netherlands,
01:21:32
Speaker
and so there aren't many places that you would have to intervene in order to control, in specific ways, the most advanced chips. I think we should talk a bit about the ethics of technological transformation. How do you think about the fact that we might be rushing towards advanced AI without the agreement or without the consent of much of the world's population?
01:22:01
Speaker
We have leaders of the top AI companies making predictions about advanced AI arriving maybe this decade, perhaps next decade. And even though this is becoming a more prominent issue, I'm sure that my grandmother hasn't developed a position on this. And so is it ethically okay to develop a technology that transformational?
01:22:27
Speaker
I find this a tough question. Is there an individual right not to have your life transformed by a technology? I think that's difficult. Say we were living a long, long time ago and someone came around with the wheel.
01:22:44
Speaker
I mean, it does really make your life very different, and you know, you have to find a donkey and everything. I'm not sure I can see this ethical baseline that we should always have a global call on any technological breakthrough. But I think it's different when there is a risk of significant harm to you, and I think that is what we're talking about in this case, where we have
01:23:09
Speaker
many of the leading AI experts putting out probabilities that this could even lead to extinction. That puts us in a different ballgame. And I think that is where, yes, there is an ethical responsibility on the part of the people developing this to make sure that they only do so once they get the go-ahead from other people. And I think that's not what we're currently seeing.
01:23:34
Speaker
And I think people have a right to be upset about this. I mean, I was walking past the Pause AI protest at Bletchley Park, where the UK AI Safety Summit was held. There is justifiable anger and emotion when it comes to the lack of consultation in some of this development. So, yeah, I don't think you have a pure right to be consulted on technological development.
01:24:03
Speaker
But yeah, if it leads to significant harm, I think there is a bigger case to be made. True. And I think that's the point I wanted to get to, that this might be different from introducing a new car or a new smartphone, where it would seem pretty ridiculous that you would have to consult everyone in the world before you introduce them. These technologies actually did change people's lives, but such a right couldn't plausibly exist. When we're talking about a substantial risk, perhaps, of extinction, then the game changes.
01:24:34
Speaker
How do we then involve people democratically in these issues? I think the UK AI Safety Summit is a really massive step forward and Carlos, another FLI colleague, has met with a number of governments participating here.
01:24:55
Speaker
Governments that, I think, hadn't really thought about AI safety much before, and I think that is a really, really important step. Similarly, the UN Secretary-General has just announced a high-level advisory body on AI, and our board member and co-founder Jaan Tallinn is on that.
01:25:12
Speaker
I think that's also a place where you see broad representation: people from all around the world, with very different backgrounds, that are trying to grapple with risks from AI. So I think that's the beginning of setting up structures that are more global and allow for more participation, but we're a long way
01:25:36
Speaker
away from that. We see it in many other debates. Nuclear weapons are a really good example in my mind, where there are lots of people that don't live in a country that has nuclear weapons, but if the great powers do decide to attack one another, they are going to suffer in a nuclear winter.
01:26:01
Speaker
They have no say over that and were never consulted. So, based on past examples, the extent to which we can properly consult everyone is always going to be somewhat limited. I think we'll need to do better, and I'm hoping some of these structures, the UK trajectory, the UN Secretary-General's initiative, are going to learn from past examples. But yeah, I'm skeptical

Ethical Concerns in Autonomous Weapons

01:26:29
Speaker
that we'll reach perfection.
01:26:30
Speaker
What's the right model for global governance of AI? What other organization could we take inspiration from? Here I'm thinking of something like the European Organization for Nuclear Research, CERN, or maybe the International Space Station, or some large collaborative global human project.
01:26:50
Speaker
Yeah, I think we need multiple functions, and maybe not in the same place. A CERN is definitely something we need. I mean, we need to ensure that more of the development happens in a way that there is oversight over it and governments participate in it, rather than just in the private sector. And I see a CERN-like collaboration that maybe doesn't have to be in the same physical location, because you don't need a
01:27:17
Speaker
particle accelerator. Exactly. And you don't need that kind of thing to happen in one place in Geneva. But I do think you want to set up that kind of international infrastructure. I think CERN is an interesting example here because membership is not necessarily universal. And I think to be able to move quickly, you will want to
01:27:39
Speaker
look at those nations that are at the forefront of developments, that are exposing the rest of the world to the biggest risk, and get at least them on board from the very beginning, as you then expand and make sure that more people that have a stake get to have a say.
01:27:58
Speaker
You also need significantly more research, and here the safety institutes seem really important in my mind. And I think you also want to make sure that you build more convergence. Lots of people have been talking here about an IPCC for AI. I'm not too excited, because, I mean, look at climate change, right? If that's our beacon
01:28:21
Speaker
on the hill, look how we're doing. We want to make sure that whatever thing we build to get more consensus around research results, and around what the risks are, as we build that shared understanding of risk,
01:28:37
Speaker
moves a lot quicker, and that we don't, as in the example of climate change, allow industry to take one or two papers, or in some cases no paper, and invent a sense of polarization around the science. I mean, we can't afford that in AI given
01:28:54
Speaker
how quickly the technology is developing. So we also definitely need that function. And then we need an enforcement function. And I think that third function, I guess, is in many ways the hardest. And again, I think we need to start from the countries that expose the world to the biggest risks, rather than perhaps aim for universality in all 193 member states of the United Nations from the very beginning, because
01:29:23
Speaker
Yeah, it seems unlikely that you'll be able to act quickly enough if you go down that route. Do you think countries will begin launching Manhattan projects for AI? So you could imagine the US president coming out and saying: we're going to build AI and we're going to do so safely, that's what he'd be saying, and we're going to do this before it's developed by a commercial entity.
01:29:47
Speaker
It seems a pretty risky proposition to me, but I don't know, would you be excited for something like that? Or what would you think? I'd like to rebrand it the Apollo project. The Apollo project, yes. Rather than the Manhattan project, because that led to a bomb. Then again, to some extent, this is what we see from OpenAI, right? Or from Anthropic; they have already made that announcement. So the question is, if this could be substantially safer and safety would be
01:30:18
Speaker
the thing driving it, then I think there would potentially be benefits. I think a lot of the devil here would really be in the details and how this is done and under what conditions and how many precautions people are taking, how transparent it is.
01:30:35
Speaker
And I'd much rather see this happen at a sort of CERN-type institution, or by allowing multiple countries to work together, than putting all of my eggs in the basket of one state or one president.
01:30:50
Speaker
There's this dynamic in which, I mean, I think it's true to say that the leading AI companies right now were all founded with safety in mind. So Google DeepMind and OpenAI and Anthropic, now perhaps xAI that was founded by Elon Musk. These companies were founded in response to others perhaps acting irresponsibly: we are going to develop AI and we are going to do this safely. So this is potentially a
01:31:19
Speaker
dynamic in which you step into the race and then you accelerate the race unintentionally. So I would worry about that in the situation of an Apollo project for AI, but yeah, I can see the arguments for it too.
01:31:30
Speaker
Yeah, I mean, if done well, and you managed to do that, and you do it in cooperation with other nations, you would ideally prevent rivals in the private sector from taking excessive risk. You would impose things like a hard compute limit on those players so that you
01:31:51
Speaker
make sure that the project you've carefully designed is the one we're experimenting with, rather than letting ever more entities impose ever more risk.
01:32:05
Speaker
We're taking some things for granted in this conversation. We're taking for granted that AI is advancing pretty quickly, that artificial general intelligence is possible, that AI could be dangerous, and so on. I imagine that in your daily work you meet people who are perhaps in politics for entirely different reasons and are quite skeptical around these issues.
01:32:28
Speaker
What do you say to them? How do you pitch them, so to speak, on the importance of AI safety? I mean, I think there are many examples that people can point to when they think about AI risk. And most regulatory measures don't require you to believe that artificial general intelligence is possible, or even something more limited, as you design a response. For example, the regulatory measure we talked about, my favorite one,
01:32:58
Speaker
risk identification is something that you probably want for most AI applications anyway. So often we don't talk about very specific sources of risk. And there are also many applications of AI that I think everyone can understand very quickly could lead to existential risks. For example, the video we put out over the summer,
01:33:26
Speaker
Artificial Escalation, where we discuss the incorporation of AI in decision support systems, and that potentially leading to a nuclear exchange over the Taiwan Strait, is a really good example of an AI application that we can build today, with the technology we have today, that could wipe us all out.
01:33:47
Speaker
So I think there's often a lot of common ground with the people I talk to. I do think something has changed, especially after Geoffrey Hinton left Google. He met with the President of the European Commission, Ursula von der Leyen. We've had people like Yoshua Bengio speak out much more clearly over the past few months. And I think the fact that some of the most cited AI academics
01:34:15
Speaker
are so deeply concerned about what it would mean if AI were to surpass or equal human abilities means that policymakers are taking this a lot more seriously and are engaging with this topic more deeply than they have done in the past. Let's end by talking about autonomy in weapons systems. Why is it that militaries want autonomous weapons? What is their perceived value?
01:34:44
Speaker
Speed, I think, is the biggest perceived value: speed and scale. Militaries feel that if an adversary, for example, were to have a system that's autonomous, then they also need a system that's autonomous so that they can strike back at the same rate. Otherwise, a human could slow it down, and that would mean they would lose the exchange. And soldiers are expensive, they need food, they need sleep. It looks very bad to have them coming back in caskets.
01:35:13
Speaker
That too, it can really undermine public support for a war if lots of soldiers die and as you say, come back in caskets or return to their family. That's what's driving a lot of interest in the technology. How do you think autonomy changes the balance of power between powerful states and weaker parties in conflict?
01:35:39
Speaker
My personal view is that it will likely upset this balance, because what we've seen over the past several decades is that entities like the United States, the Soviet Union at the time, and China have massive scale, and they can use that scale to invest in advanced fighter jets and the best tanks, and those are really expensive products
01:36:07
Speaker
that smaller entities and smaller states can't acquire. Whereas with autonomous weapons you can potentially have really small drones that are relatively easy to manufacture, or can even be bought commercially from Amazon and then adapted
01:36:26
Speaker
to have a military application, where what you need to ensure that it performs is potentially just software, which is obviously easy to copy and doesn't cost you much. If you manage to acquire that type of capability,
01:36:46
Speaker
and it's good enough, then the relative advantage of a big power and their scale and sort of capital investment is likely going to matter less. So it's probably going to make smaller entities more powerful.
01:37:01
Speaker
So now it sounds like we can perhaps save some money, we can protect ourselves, and we can save the lives of soldiers. So why is FLI trying to prevent this; what is it that we are trying to prevent here? Maybe you can paint us a scenario of what it would mean to have unregulated autonomy in weapons systems.
01:37:21
Speaker
So I think there are three major categories of risk that we're worried about, or concerns that we have around autonomy in weapon systems. One is the obvious one, which is the ethical one. Do we really, as a species, want to yield decisions of life and death to a machine? And is there something human about
01:37:41
Speaker
the way we wage war that we want to preserve? Paul Scharre, who's an expert who speaks a lot about this issue, describes a situation where he was a soldier on a hill in Afghanistan, and a young girl, about six or seven years old, was sent by the Taliban to scout out what was happening around the hill.
01:38:01
Speaker
And under the international rules of war, she is at that point a combatant. There's no age limit in the rules of war, so she should have been shot. I think if it had been an autonomous system, it would have shot the six-year-old girl, but he felt that morally he couldn't shoot her, even though she was giving intelligence about his position to the Taliban. I think that's a good example of where saying, okay, we can program the rules of war into a system, might have limitations that we do want to think about very, very carefully.
01:38:31
Speaker
Then there are the concerns we have around accountability. At the moment, if someone commits a war crime, you can point to that person or you can point to that leader and you say, well, this person or entity committed a war crime. But if you're a Ukrainian or Russian soldier in that war at the moment, and you activate a system that consists of a swarm of, say, 10,000 drones, and
01:38:56
Speaker
those 10,000 systems commit war crimes at scale, and the only thing you did was press the button, under certain assumptions about what it would do, can you really be held accountable for every single move, for the soldier the system killed who was already raising his hands and by doing so had become a non-combatant?
01:39:18
Speaker
Yeah, given that you're outsourcing, we are talking about autonomy. So you are outsourcing decision making to these systems. And so perhaps, yeah, there's uncertainty around whether you can be held liable for the initial decision to activate the system.
01:39:33
Speaker
Exactly, especially if you're activating an autonomous submarine and you activated it several months ago and it's still out there. To what extent can we really hold you accountable? And then I think there's a third category which I'm most concerned about, which is around international security.
01:39:54
Speaker
What most keeps me up at night when thinking about AI and military integration is the Taiwan Strait in the future, where I think there is a severe risk that both the US and China might deploy autonomous systems, and do so at scale, and train those systems on classified data.
01:40:13
Speaker
Those systems may interpret a ray of sunlight or the movement of another system as an attack when it's not an attack, and that can lead to unintended escalation and a war that's fought at machine speed, in several seconds, where you then almost have to retaliate because you've lost so much equipment or you've lost so many men. And I think that's a
01:40:33
Speaker
terrifying prospect, also because we know from civilian AI that it often malfunctions, and there it's okay because you can see it and you can fix it. But in military situations you probably have one shot, and you can't really test it very well under battlefield conditions, so you're going to have things happen unintentionally.
01:40:53
Speaker
So that's one category of security risks. The other category is around non-state armed groups, terrorist groups. We're now talking a lot about the role of Hamas, the role of Hezbollah, and those types of groups, for example, having the ability to target a specific ethnic group, let's say, or Jewish males between 18 and 22. That is something I think you
01:41:21
Speaker
really want to avoid because of concerns around international security, stability, and frankly the ability to commit mass atrocities. So that's why FLI is really concerned. Maybe to add to that, I think what we see a lot in the debate around autonomous weapon systems is a sort of sole-ownership fallacy, where
01:41:47
Speaker
people believe that they are going to be the only ones that have this, or that they will have the best one. It is a very persuasive argument when a soldier says: I've lost a fellow soldier in battle, I want to develop autonomy in weapon systems and autonomous weapons so that I can save future lives and not have anyone go through my experience again. And that's
01:42:10
Speaker
all well and fine, especially if your experience was as an American soldier fighting in Iraq, faced with, let's say, ISIS, which maybe didn't have the sophisticated capabilities that you are developing. But if you are faced with an opponent that has similar technological capabilities to your own, then I think you reach a whole different situation, and
01:42:34
Speaker
you reach those sort of risks of unintended escalation that would lead to a global war. And I think that's often not considered in this policy debate.
01:42:45
Speaker
We should also stress the unreliability of these systems. So we could talk about them perhaps analogously to a self-driving car that will fail. I mean, human drivers and human soldiers are also unreliable, but at least we are unreliable in ways that are recognizable to other humans, whereas these systems fail in quite unexpected ways.
01:43:10
Speaker
When I interviewed the autonomy expert Frank Sauer, he gave me some examples of adversarial inputs for drone systems, where you can
01:43:19
Speaker
paint a certain pattern on the side of a car and it will be registered as something entirely different. And you can imagine what bad actors might do by painting this on, perhaps, your own infrastructure, such that your weapons attack yourself, and all kinds of bad things could happen there.

International Regulation Challenges

01:43:42
Speaker
There are these problems with autonomy in weapon systems. Why is it difficult to regulate these systems internationally?
01:43:51
Speaker
It's difficult because there are military advantages, or militaries see advantages. So I think that's what makes it harder. I'm not sure if it's particularly difficult otherwise. I mean, I think there is a growing group of nations that do feel that this needs to be regulated in a new international treaty. And you saw the UN Secretary-General and the president of the Red Cross, for example, coming together several weeks ago to issue a call, which is
01:44:20
Speaker
really quite unique, saying, okay, we need to have an international treaty, we need to have it negotiated by 2026. And a resolution that tasked the Secretary General with identifying all the options that there currently are for regulation passed with, I think, six no votes and over 160 yes votes. So I think
01:44:42
Speaker
there is quite wide global consensus around the need to put in limits. I think there's disagreement about where these limits need to be, and I think
01:44:54
Speaker
there are also different proposals. Do we mandate a principle like meaningful human control, for example: that you need to always have meaningful human control over your weapon system as a commander, in the same way as you need to abide by principles such as proportionality or distinction
01:45:12
Speaker
when you plan an attack under international law? Or do we look at, for example, a class of anti-personnel drones or systems, because systems that target military ships or naval ships or
01:45:28
Speaker
tanks might be less problematic. So I think there's different avenues to go down. There are systems that attack military objects, for example, tanks or planes or structures, and then there are systems that attack people. I think there is a difference here in that if your target profile is by definition a military target profile, chances that you're going to violate international humanitarian law are significantly diminished.
01:45:57
Speaker
and the ability to commit a genocide using systems that target structures or military objects is very different than if you are targeting individual people based on features or facial characteristics. And that would really be the nightmare scenario, having a swarm of drones flying around targeting specific people based on their
01:46:19
Speaker
inherent characteristics. Say we get an international treaty but China doesn't sign it or maybe the US doesn't sign it. Does it matter?
01:46:29
Speaker
It does. And a critical element, at least in my mind, that people often miss is that at the moment there is no regulation around autonomous weapons whatsoever. And you see states, for example Turkey, that will export autonomous weapons to many nations unrestricted. And it is also
01:46:49
Speaker
perfectly okay for them to do so. Whereas I think the risk that that poses to stability in North Africa and to other nations is significant. And I think those other nations have a justifiable say in what happens when those kinds of systems proliferate. And that's why you need a norm. And that norm will matter to lots of corners of the world that aren't the United States, Russia or China. So that's, I think, one reason why it's important.
01:47:19
Speaker
Another is, if you have a sufficient supermajority that signs a treaty, it becomes a norm.

Influence of International Norms

01:47:26
Speaker
And you see that with the landmine treaty. The United States has never ratified the landmine treaty, but it does abide by it. It doesn't use landmines anywhere outside of the Korean Peninsula. And it made that announcement publicly because it was faced with a new international norm. And that pressure
01:47:45
Speaker
went along with it. Even if a treaty is not signed by those three players, it still has value. I think it will also enable trilateral or bilateral agreements between those players. So you can imagine having a treaty in place and then China and the US meeting, potentially behind closed doors, and making certain agreements about what you can and cannot do in the Taiwan Strait
01:48:06
Speaker
that borrow from concepts of that treaty, or that say, well, because this is a norm, we're going to try and have an informal agreement that we both respect without going through a big international forum, where we can't be seen to do this, or where this would receive significant pushback. So I think, yeah, there's significant value in having it, even if those countries don't sign. Mark, thank you for coming on the podcast. I've learned a lot. It's been a pleasure.