Introduction to Autonomy in Weapon Systems
00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Dr. Frank Sauer. Frank is Head of Research at the Metis Institute for Strategy and Foresight. Frank, welcome to the podcast. Hi. Hi, Gus. Thanks for having me. Great. We're going to talk about autonomous weapons systems. So what are these systems and what would be some examples?
00:00:25
Speaker
I guess I should start by saying I usually don't use the term autonomous weapon systems. Instead, I talk about autonomy in weapon systems. Now that might sound like super nitpicky. What is the difference? It is actually a huge difference because the way
00:00:40
Speaker
I think about these things and the way I think we should all be thinking about these things is in a functional manner rather than a categorical manner. Now what does that mean? Let's start at the beginning. I think the cleanest and shortest way to define what the whole discussion is about when we talk about autonomy in weapon systems
Technological Advances and Military Interest
00:01:00
Speaker
is that there's something happening that we also see in a whole range of other aspects or fields in life because technology is advancing and what happens is that specific functions are being delegated from humans to machines.
00:01:15
Speaker
And with regard to weapon systems, the discussion is around the so-called critical functions in the targeting cycle of the weapon system, and that is the selection and engagement of targets. And so when we're talking about autonomy in weapon systems, we're talking about
00:01:32
Speaker
The selection and engagement of targets without human intervention. The machine is selecting and engaging. The machine is selecting where military force is to be applied. That's not necessarily a new thing. It's not necessarily an AI thing. And it's not necessarily a problematic thing.
00:01:50
Speaker
But there is a whole range of issues that's crowded around this that we can talk about. Just to make sure I get you correctly here, what is important is the autonomy, the software so to speak. And that could be implemented in ground robots or drones or
00:02:09
Speaker
perhaps vessels that function at sea. And so the core here is the autonomous decision-making about who to target and whether to target them.
00:02:24
Speaker
That's correct. It is super important to separate the whole discussion from the one that concerned us, you know, some 15, 20 years ago, surrounding drones. This is not about drones. This is about any conceivable weapon system floating in space, flying through the air, being submerged under the sea, traveling the surface of the sea, traveling on land or, you know,
00:02:48
Speaker
whatever, maybe even in cyberspace. You know, it is purely a thing of functionality and the question of who or what, human or machine, is deciding what, when, and where. Why are militaries interested in autonomy in weapon systems?
00:03:05
Speaker
I got into the whole field in 2007. I remember this quite well because I read a paper by Ron Arkin from Georgia Tech. And this paper was about, I think the title read something like an ethical governor for battlefield robots. And I read this and I was like, this has got to be a joke. That was back in the day when we were talking about remotely piloted drones and what they mean from a legal perspective, from an ethical perspective.
00:03:33
Speaker
And here Arkin says, hey, we will do all of this via software. We will take the human out of the equation. And one of his key arguments really was that that would make war much more humane and would dramatically reduce the amount of atrocities and war crimes being committed. So at the very beginning, that was an important argument in the whole debate, like take humans out of the equation because humans do terrible things to each other in war, which is true.
00:04:02
Speaker
And so we will end up with a much more humane way of war fighting. Additional arguments were put forward over time. You could say that it's much more cost effective to have one human pilot a whole range of systems, for instance. And I could go into more tangential arguments that have been put
Effectiveness and Ethical Implications of Autonomy
00:04:27
Speaker
forward. The main reason in my mind is speed.
00:04:31
Speaker
It's all about military effectiveness and military effectiveness in terms of speed and speed meaning the completion of the kill chain or the targeting cycle before your adversary has done that. That will win you, tactically speaking, every single engagement. When you are done with finding, fixing, tracking, selecting and engaging the target and applying force to it,
00:04:53
Speaker
while your, you know, adversary is still remotely piloting this thing and data is being bounced back and forth between the machine and the human, while your machine is doing all of this internally, you're winning. That's it. You're faster.
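To make the speed point concrete, here is a toy sketch, with purely invented numbers, of the same find-fix-track-select-engage cycle run once with a comms round-trip to a remote operator on every step and once closed onboard; it is only an illustration of the latency argument, not a model of any real system.

```python
# Toy illustration of the "speed" argument: the side whose targeting cycle
# carries a comms/human round-trip on every step finishes later than the side
# that closes the loop onboard. All numbers are invented for illustration.
KILL_CHAIN = ["find", "fix", "track", "select", "engage"]
STEP_SECONDS = {"find": 5.0, "fix": 2.0, "track": 3.0, "select": 1.0, "engage": 1.0}

def cycle_time(step_seconds, round_trip_per_step=0.0):
    """Total time to complete the chain; the round-trip penalty models
    bouncing each step's data and decision to a remote human and back."""
    return sum(step_seconds[step] + round_trip_per_step for step in KILL_CHAIN)

remote_piloted = cycle_time(STEP_SECONDS, round_trip_per_step=4.0)   # operator in the loop
onboard_autonomy = cycle_time(STEP_SECONDS)                          # machine decides locally
print(remote_piloted, onboard_autonomy)  # 32.0 vs 12.0: the faster cycle wins the engagement
```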
00:05:07
Speaker
And that is, I think, the main reason and the main driver behind all of it. What about the trouble that militaries have with attracting and training humans to become soldiers? My impression is that this might be getting harder. Perhaps people have more options than in the past. And the human cost of a returning dead soldier is enormous. So is that also something that militaries are thinking about?
00:05:37
Speaker
Yes, I mean, if you look at a country like Japan, for instance, they're quite reluctant to be doing anything about this at the international stage at this point, because they're looking at their country and they're saying, we've got, you know, a problem in terms of demography.
00:05:52
Speaker
You need fairly young, fairly fit people to build an army. And in all the OECD countries that is definitely an issue. And so I would say that is definitely factoring into this notion of having to have a certain mass
00:06:10
Speaker
in your armed forces, and if you can't fill it with actual people in boots, you have to generate it some other way. And that would be machines. So that's the perceived value for militaries of autonomy in weapons systems. How do you think that perceived value holds up to the actual value?
00:06:29
Speaker
So this is where we need to start differentiating. Let's start with, you know, this point about mass. The importance of mass can be seen in Ukraine every day. It's not only that they go through hundreds of thousands of artillery shells; they are losing about 10,000 of those quadcopter drones per month. 10,000 per month.
00:06:52
Speaker
And so you can see that even a comparatively smaller fighting force like Ukraine can leverage technology, can leverage, in this case, mass, or what we would call attritable systems, systems that are throwaway, basically, that you can afford to be losing in great numbers, to fight a successful fight against an adversary that has more people, more tanks, more artillery, more everything.
00:07:20
Speaker
Purely from a military effectiveness point of view, I think there is something to that. Also, there's something to it in terms of speeding up the targeting cycle. I mean, that is also something that Ukraine has demonstrated, you know, superbly. All of NATO is looking at this and thinking, oh, we've got to be doing this. They use these drones, they do surveillance and reconnaissance, they find a target, the target coordinates are immediately going to the artillery, the artillery starts firing,
00:07:49
Speaker
and they're directing the fire of the artillery, and within minutes they can destroy a dynamic target. So that is also something that has to do with autonomy, that has to do with automating specific instances in the targeting cycle, and that is definitely increasing military effectiveness. And another thing that gets put forward is
Defense Systems and Legal Concerns
00:08:16
Speaker
this whole notion of maybe being more precise and thus more compliant with international humanitarian law. It's extremely important to be very careful at this point and to not conflate any of these things. Precision-guided munitions are not necessarily making you more compliant with international humanitarian law. I can very precisely kill all the wrong people, as we've seen, I would say, for instance, with drones, right?
00:08:44
Speaker
If autonomy gives you, let's say, hypothetically speaking, and there are ways that you could be constructing a scenario where that actually works this way, if autonomy gives you more precision, then precision gives you the opportunity to fight in a more legally compliant manner. That is also an argument that I think you can't just wipe away and say, no, no, that's nonsense. If we were to say in a pro and con kind of way, those are the pros,
00:09:13
Speaker
Then there's a whole bunch of cons, obviously, that go against this, that are ethical in nature, that have something to do with humanitarian law, and that also touch on the military reality on the ground in terms of controlling what your weapon systems are doing, blue on blue engagements, or
00:09:33
Speaker
unintended escalations, all these kinds of things. That's why we're having a debate. You know, we've had this debate in the UN for at least nine, almost ten years now. If all these things were just, you know, squeaky clean and easy and would, you know, be better for everybody across the board, we wouldn't be debating. Yeah, so there are costs and benefits. On balance, do you think militaries should want autonomy in weapons systems?
00:10:00
Speaker
It depends on what they use it for. I mean, as I said in the beginning, it's not new. The way I've been laying it out, this functional way of thinking about it, we've had autonomy for decades. For instance, in specific terminal defense systems like the Phalanx fast-firing gun on Navy vessels, or Patriot missile defense. And, such being the times we live in, I have to point to Ukraine again,
00:10:28
Speaker
where Patriot is on a daily basis, you know, almost on a daily basis, defending against incoming Russian missiles and, you know, saving lives, in fact. And all these systems, Patriot, for instance, or Phalanx, they can be, you know, switched to an automatic mode, where the weapon system is looking for targets, selecting those targets, engaging the targets.
00:10:50
Speaker
which can go terribly wrong, as we've seen with Patriot and blue-on-blue engagements, for instance in 2003 in Iraq. What is a blue-on-blue engagement? Fratricide, or friendly fire: when you fire at your own forces. And so, as I said, we've had these things, we've had them for many years. I think they are valuable in terms of defending against incoming materiel: missiles, mortar shells, artillery shells coming in.
00:11:16
Speaker
The question really is, in the times we live in now, with the technology advancing and with, specifically, object recognition being built into weapon systems: what happens if this niche capability of defending against something that is flying towards you now moves out into every conceivable weapon system that is buzzing someplace and looking for targets? And this is obviously where the problems begin.
00:11:46
Speaker
As I said, it is not new and it is not necessarily problematic, but it can get very problematic if you start using it in an irresponsible manner, I'd say. So yes, to answer your question, militaries should want autonomy in weapons systems, but they should be quite careful in the way they deploy them and the way they use them. And we should be having lots and lots of debates, more debates, and good rules and regulations.
00:12:13
Speaker
and specific prohibitions too with regard to how we use it in a responsible manner.
00:12:19
Speaker
There's a continuous development of autonomy. It's not like there's a discrete event after which we can call a system autonomous. And also, what I heard you mention as positive examples of autonomy were more defensive weapons, and as negative examples, more offensive weapons. Would it be a fair dichotomy to be more worried about the offensive side?
00:12:43
Speaker
Unfortunately, if you dig deeper and if you start to think about it a bit more, then the whole notion of delineating defense from offense just collapses in on itself. So if you think about a counter-battery radar for artillery, for instance, something like that: a system that would detect incoming fire and then triangulate to know where that fire is coming from,
00:13:11
Speaker
and that is coupled with some effector that would take down the incoming munition, and some other effector that might be able to reach the target it came from. So you see where I'm getting at? You have that data. The question is, what do I do with the data? And if I feed it into that other system, which would then immediately launch maybe an artillery shell or something to destroy
00:13:34
Speaker
the point of origin from where this weapon was fired at you: what kind of system is it? Is it automatic defense, or are you already on the offense there? So it's not really clear that we can delineate it this way. And also, in terms of blurry lines and delineations, this is super important, the thing that you pointed out.
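As an aside, a minimal sketch of the triangulation step in that counter-battery example might look like the following: two sensors each report a bearing to the incoming fire, and the point of origin is where the two bearing lines cross. The sensor positions and bearings are made up, and what you then do with that coordinate, intercept or return fire, is exactly the question raised above.

```python
import math

def intersect_bearings(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two rays, each given by a start point and a compass bearing
    (degrees clockwise from north). Returns the (x, y) crossing point."""
    (x1, y1), (x2, y2) = p1, p2
    # Convert bearings to unit direction vectors (east = +x, north = +y).
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel, no unique intersection")
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])

# Two sensors 10 km apart both observe the launch of the incoming round.
origin = intersect_bearings((0.0, 0.0), 45.0, (10_000.0, 0.0), 315.0)
print(origin)  # roughly (5000.0, 5000.0) metres from the first sensor
```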
00:14:00
Speaker
Strictly speaking, and I mean that technically, there is no such thing as autonomous weapon systems as a fixed category of weapon systems. We are not able, and we will not be able, to delineate this fixed category of autonomous weapon systems from the non-autonomous weapon systems.
00:14:19
Speaker
That renders the whole thing quite different from many other fields where we did arms control or humanitarian disarmament. I can give you a list of criteria that will pretty clearly delineate what an anti-personnel landmine is. And you can then quite easily distinguish it from an anti-tank mine or an assault rifle or a nuclear weapon.
00:14:41
Speaker
whereas autonomy in weapon systems is just a function. It can be a feature of almost any conceivable weapon system. Any weapon system can be autonomous now; and now I flip the switch and it's back to remotely piloted. And you will never be able to tell from the outside, looking at it, what it is. You know, I can see that's an anti-personnel landmine. That's a nuclear warhead.
00:15:05
Speaker
This is just some ground robot. I don't know. Is it autonomous? Is it selecting and engaging targets without human intervention? I don't know. We will never know by looking at it from the outside, which makes it super hard in terms of formulating regulation, doing arms control. Like if I were to say, let's cap the number of autonomous ground robots at 300, and you build 300 additional of the same robots, and you tell me, well, those are remotely piloted, how would I be able to verify this?
00:15:31
Speaker
I hope it's becoming clear that this is where it's a bit more complicated than other things that we've dealt with in the past. And it's important to make that first step and wrap your head around the fact that this is about functionality and about, basically, rules in terms of application
00:15:50
Speaker
rather than prohibitions or limits on specific categories of weapons. So if we think about the cutting edge, which autonomous systems are under development, and which systems worry you the most? That's an excellent question. There's two ways to answer this. On the one hand, technology is potentially increasing the capabilities of weapon systems at an amazing pace.
00:16:18
Speaker
That is the worrying part. The part that gives me some faint remnant of hope is that militaries are inherently conservative organizations, and especially Western militaries, which try to be at least compliant with international regulations and norms and the law of war,
00:16:46
Speaker
are quite reluctant to be using this just to see what happens. So if we talk about certification, validation, and actually procuring a weapon system with autonomy,
00:17:01
Speaker
and then getting it into the hands of operators and them using it. This is a very long process, and I would not be able to point to one specific system, other than the ones that I've already mentioned that we've had for like 20-30 years,
00:17:19
Speaker
where autonomy is featuring very heavily
Ethical Challenges of Autonomous Targeting
00:17:23
Speaker
in one of those roles that we're imagining, like, for instance, something like a loitering munition or some quadcopter that you'd be using against enemy personnel.
00:17:37
Speaker
So, for instance, a great resource to gauge where we're at is always to look at ads. And if you look at, for instance, I think it's Elbit, the Israeli company, they're making something like slaughterbots, basically; you know, the Slaughterbots video that I'm pretty sure FLI produced, and which I'm pretty sure most people know.
00:18:02
Speaker
depicted this dystopian future where you have like anti-personnel mini drones buzzing around killing people by you know slamming into their heads and
00:18:15
Speaker
Lanius, the system that I'm thinking of right now, is kind of like that in that it is a quadcopter that will go into buildings, it will map the building, it will find people, it will track people, and it is, you know, equipped in a manner that it would be able to engage that target, that person, and kill them.
00:18:33
Speaker
But it's interesting to look at the ad and the ad shows that there is an operator and the operator is authorizing the engagement of the target. It's like a pre-selection is happening. The system will basically radio back and say, hey, operator, I found a possible target.
00:18:51
Speaker
Is it okay for me to engage that target? And then the human is, you know, basically making an assessment and saying, yes, that is an enemy fighter rather than one of our guys or some civilian, engage the target. That is super interesting to me. Why? Because it shows this kind of proto-norm is already working. Like, people are already kind of hesitant about fully automating this entire kill chain. There's no reason not to be doing it from a technical perspective. It'd be easy.
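Purely to illustrate the pattern described in the ad, and not as a depiction of any real product, a confirm-before-engage gate might be sketched like this; the class and function names, and the values used, are invented.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # what the onboard classifier thinks it sees
    confidence: float   # e.g. 0.863
    location: tuple     # (x, y) in some local frame

def request_engagement(detection: Detection, ask_operator) -> bool:
    """Radio the pre-selected target back and engage only on an explicit yes.
    `ask_operator` stands in for whatever channel reaches the human."""
    prompt = (f"Possible target: {detection.label} "
              f"({detection.confidence:.1%}) at {detection.location}. Engage? [y/N] ")
    return ask_operator(prompt).strip().lower() == "y"

# Hypothetical usage: the engage step is only ever reached after a human says yes.
# detection = Detection("enemy combatant", 0.863, (12.0, 4.5))
# if request_engagement(detection, input):
#     engage(detection)
```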
00:19:19
Speaker
The system is detecting something that with 86.3% it presumes is an enemy combatant.
00:19:27
Speaker
Why not have it, you know, go all the way? It turns out we're quite reluctant to be doing this. It would probably be an interesting research project to approach these companies and talk to them and find out, like, why are you not doing it? It turns out that there's not a lot of demand. When you talk to industry people, they will say: we could be doing it, for sure. We're not doing it because our customers are hesitant. They don't really want that.
00:19:51
Speaker
And that tells me like there's already something at work like nobody's rushing into this at least
00:19:59
Speaker
responsible people, I think, are still hesitant about rushing into this because they know so many things could be going wrong with it. When the drone is pre-selecting a target, it must have some decision process in there. Perhaps we could imagine, and I don't know if this is a reality now or will ever become a reality, but we could imagine a set of images of faces
00:20:23
Speaker
and a drone authorized to find those faces in a war zone, and then present those enemies to a human evaluator who then makes the final decision. But as you yourself have written about,
00:20:43
Speaker
in the pre-selection there are also decisions going on. Exactly. So I think this is a perfect point in time to say: the system that you've described, that would be on my prohibition list.
00:20:57
Speaker
So, I mean, so far, I think I've not really, you know, come out as someone who's like, ban killer robots, not because I think banning killer robots is a bad idea, but simply because of the way we as a global community have learned about the way this technology is developing and the way that we want to
00:21:20
Speaker
formulate some regulation around it: we found out that just coming out and saying this is the category of weapons that we want to prohibit is not doing the work in this case. However, what you've described is fairly clear to me as something that should simply be prohibited. It should not be brought into the world. The slaughterbots, basically; the system where you have a biometric target signature that says:
00:21:50
Speaker
all the white people in the room, all the female persons on the street, all the people with long hair, whatever, with a certain gait, whatever you want to put in there: that is purely a weapon of oppression and a weapon to terrorize people with. I'd go as far as to say there is no military
00:22:15
Speaker
value even in having these kinds of weapons, because this is not how militaries fight. At least that's how we did it for the longest time, and then, you know, we started targeting individual people with drones, and so over the last 20 years obviously a lot has shifted in that regard. But generally speaking, you would be fighting combatants, and it's not relevant if this is combatant Gus Docker
00:22:43
Speaker
or some other person. It is just a combatant.
00:22:48
Speaker
And so I think we'd be well advised to not even be building and, you know, fielding and using weapons that look for specific biometric target signatures to kill people with. Because, you know, that will get into the wrong hands and we will have all kinds of, you know, trouble. That would be the prohibition part. And this is basically the International Committee of the Red Cross talking.
00:23:15
Speaker
They have laid this out quite clearly and they said, this is definitely the one thing that we should not be doing, full stop. And all the other things have to do with meaningful human control and other things that I'm sure we will get to in this conversation. But that one thing, this notion of killing individual people by facial recognition, that's just awful.
00:23:33
Speaker
And is this a fundamental limit of the technology? Or could we imagine a system that is so capable and so precise that we might be interested in allowing it? So imagine that the system I described came to you, a military decision maker, and told you, I am 97% sure that I have a person here who is the leader of a terrorist organization. Do you want to kill this person?
00:24:03
Speaker
Is it a fundamental limit or is it about the technology being too brittle for us to employ it in the way that I've described? Several things to unpack.
00:24:18
Speaker
If we were to automate this and say: weapon system, at the 97% threshold, if you're at this threshold, just engage, then I'd say we are in this, you know, territory that I described before, where I would say we shouldn't be going there.
00:24:36
Speaker
If you have a system that is looking for specific individuals, and you want to say this is the general of the, you know, enemy armed forces, and we think we have found him, we have some sensors buzzing around there, and it tells me, you know, 97%. And I have a couple of other people, and maybe I have some signals intelligence, also, that would corroborate this,
00:25:00
Speaker
then I think we're, you know, fairly clearly back in the realm of just war fighting, where you would be authorizing a strike on that target. But you had some human judgment in there, you had humans looking at this, the selection and engagement of targets is controlled by humans, and they hopefully have done their best to also, you know, get additional data on this rather than just saying: it tells me 97%, so it's the guy.
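A toy sketch of that corroboration point, under the loudly assumed simplification that each source is an independent piece of evidence fused via log-odds: one sensor saying 97% on its own is treated differently from the sensor, signals intelligence, and a second observer agreeing, and even the fused number only informs the humans who authorize the strike.

```python
import math

def fuse_independent(prior: float, source_probs: list[float]) -> float:
    """Combine a prior with independent per-source probabilities that the
    contact is the intended target, by adding log-odds (a naive Bayes view)."""
    log_odds = math.log(prior / (1 - prior))
    for p in source_probs:
        log_odds += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))

single_sensor = fuse_independent(prior=0.5, source_probs=[0.97])
corroborated = fuse_independent(prior=0.5, source_probs=[0.97, 0.9, 0.8])
print(single_sensor, corroborated)  # ~0.97 vs ~0.999; a human still makes the call
```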
00:25:27
Speaker
The other thing really is, and this is where it gets quite interesting from an ethical and legal point of view, if we were to build the perfect killing machine, which is just perfect at everything, it will always know who's a combatant, who's a civilian, it will never misfire, it will do all the right things, it will function perfectly 100% IHL compliant. Should we be using it? Or some would say, aren't we then
00:25:55
Speaker
you know, under the obligation to use it, to be more IHL compliant? Should we then just take all the rifles away from the stupid monkey humans who are making so many mistakes and just let the machine do all the killing? And this is really the question where this quicksilvery notion of human dignity comes in, and where, especially as a German (we have human dignity as the first article in our Basic Law, our constitution),
00:26:25
Speaker
where I get really cold feet. I do know that we basically, again, differentiate at this point between Kantians and European continental philosophy on the one hand and utilitarians on the other. So if I speak to, maybe, British or American friends, even they'd be like, I don't get
00:26:47
Speaker
your point, Frank. If the system is better, we should be using the system. It will give us an advantage. Fewer innocent people will be killed. What is your problem? Now, maybe from a legal point of view we stop having an issue with the system if it works perfectly as described, because then it will be able to fulfill all the legal obligations. However, if we automate killing
00:27:12
Speaker
in this manner, so that we're not even concerning ourselves anymore with who, when, and where someone is getting killed in war, are we not then infringing on the human dignity of the people we kill, because we reduce them to anonymous data points in some ginormous killing machinery? And we do know what happens if we dehumanize people, if we make them into just a bunch of numbers or if we give them animal names or stuff like that.
00:27:42
Speaker
As I said, there's a good reason due to our history in Germany why we have this as the first rule of our basic law that human dignity must not and cannot be infringed on ever. And so this is really another thing that gives me pause, where I'd say I'm not against using autonomy, but we should still concern ourselves.
00:28:05
Speaker
with what is happening in war, rather than decoupling ourselves from it completely, which is quite dangerous in democratic societies specifically, because we are risk averse, we are casualty averse, and we've seen, politically speaking, where we end up if we completely remove the human from the equation, for instance by only using drones, firing missiles from the sky, and no longer having any meaningful skin in the game.
00:28:33
Speaker
And I don't mean we have to be producing body bags. I mean we should be concerning ourselves more with what is happening, rather than just, you know, pushing a button in the morning and then at the end of the day, you know, looking at a readout which tells us, you know, the 16 targets that were engaged.
00:28:51
Speaker
That seems quite disturbing to me. To what extent do you think autonomy in weapons systems is analogous to nuclear weapons?
Impact and Regulation of Autonomous Systems
00:29:01
Speaker
We've already discussed two ways in which these systems are dis-analogous. First of all, nuclear weapons are fairly discrete in a way that autonomy isn't.
00:29:14
Speaker
And also nuclear weapons simply kill many more people at once. You can imagine perhaps swarms of autonomous weapon systems killing as many people, but nuclear weapons seem more destructive for one-time use. If you think in more broad terms, how is autonomy in weapon systems analogous and dis-analogous to nuclear weapons?
00:29:43
Speaker
I think it's way more dis-analogous than it is analogous. I think in many respects, it is the opposite. And I know that, for instance, there's some people who frame autonomy in weapon systems, especially when we talk about swarms, as a weapon of mass destruction. I think that is a grave mistake. I think it makes no sense, conceptually speaking, because autonomy in weapon systems, if we once again go to the slaughterbot notion,
00:30:13
Speaker
are kind of the exact opposite of what nuclear weapons are doing. Nuclear weapons kill huge numbers of people indiscriminately, almost in an instant, whereas small autonomous quadcopters buzz around killing individual people, and only those people, due to some biometric target profile.
00:30:34
Speaker
That seems to me like the opposite of indiscriminate. It is highly discriminate. It may just as well be against international humanitarian law and may just as well be a war crime, but it's just not the same thing. The actual thing that happens, the killing, is completely different.
00:30:54
Speaker
So that would be my one point. And also, you know, I've ranted about this already at length. I can clearly differentiate a nuclear weapon from any other weapon. A dirty bomb is not a nuclear weapon, a chemical weapon or a biological weapon or conventional weapons are not nuclear weapons.
00:31:15
Speaker
Autonomy in weapon systems, as I said, you know, it's this functionality that is just, you know, in there, or maybe not, and that we just have to, you know, wrestle with to be able to use it in a responsible manner. It's just not the same thing. Which also gets us to regulation. You know, I have a background here; my PhD was on nuclear weapons, on nuclear use. Basically, why did we never use nuclear weapons after 1945? You know, lots of opportunities, certainly enough weapons to go around.
00:31:44
Speaker
So we could have, but we didn't. And so, you know, there's deterrence and there's the nuclear taboo and all kinds of things that you could be talking about in that regard. And also, we did arms control, and we were able to do specific things like cap numbers and prohibit specific types of weapons, and we were able to clearly define them and count them.
00:32:04
Speaker
And again, this is just something that we can't do with regard to autonomy in weapon systems. So we need to find new ways of doing arms control, in a qualitative rather than a quantitative manner. So it doesn't make sense to talk about proliferation in the same way that we talk about proliferation for nuclear weapons, in the case of autonomy in weapon systems, then?
00:32:29
Speaker
Correct. I mean, the notion of proliferation is a term, I think, that was, you know, drawn from biology and from cancer research, where you have cancer cells, one cancer cell that is then proliferating in the sense of spreading cancer to specific points in the body; you will then find cancer in the liver, for instance, where it wasn't before. And so it goes from one point to other distinct points.
00:32:56
Speaker
That would be a good analogy to nuclear weapons, because this is the way we think of the A.Q. Khan network, for instance, the Pakistani network that spread weapons technology. It's like one source of origin and one or maybe two points where something ends up.
00:33:13
Speaker
and autonomy in weapon systems obviously is not like this at all. It is software, it's copyable at almost no cost. You can throw it into an existing technological ecosystem where you already have basically robots, things that sense, things that act.
00:33:30
Speaker
And so you just put in additional software, and if you have the compute in the system, the system can perform more functions than it could before. And you don't need to be mining uranium, you don't need nuclear reactors. Yeah, it's just much, much easier. And so I speak of technology diffusion
00:33:50
Speaker
rather than proliferation, because it just diffuses from every conceivable point. Does it make sense to talk about escalation or an arms race for autonomy in weapon systems, then? That is a hard question. It's way harder than you think, because we've gotten so used to using the term arms race.
00:34:12
Speaker
I fully understand why people are using it, because there is clearly a rush towards more technology that in the end might enable autonomy in weapon systems in militaries around the globe, right? So, you
00:34:28
Speaker
know, civil-military fusion, it's sometimes called. So, for instance, the Pentagon will have offices in Silicon Valley to talk to startups: hey, what are you doing, what is the research you're doing, what are you doing with deep learning, are there maybe products that we could be using, that we could really quickly integrate into our own innovation cycle?
00:34:48
Speaker
China is doing the exact same thing. Russia is trying to do it. So clearly there is a run towards this, because everybody feels like this is the step away from the horse to the internal combustion engine. This is just the new engine that will power everything we're doing. And so we need to be really fast in adopting it, because our competitors might be overtaking us. And so there is a race going on. That is correct.
00:35:18
Speaker
However, if you look at the, you know, the political science definition of what an arms race is, and you look, for instance, at things like overspending, I don't really see that. If you look, for instance, again at the example from Germany: after the Russian invasion of Ukraine in 2022, we decided to spend 100 billion euros on our military. And we're not buying
00:35:45
Speaker
autonomous swarms. We're buying fighter planes, ships, like basic military stuff. And so if you were to ask, are we in an arms race like the Cold War arms race, where we're overspending and we're just, you know, spending 2% of our GDP just to build additional nuclear warheads that we don't need: we're not in that kind of arms race. Maybe, in brackets, yet.
00:36:14
Speaker
But at least we're not in that now. But there's clearly this rush and this sense of urgency around the globe, where militaries are thinking, we've got to get into this because this is the next big thing. Having said that, I don't know how deep we want to go into all these kinds of things. It is an endless rock, paper, scissors game. And so you might be saying, OK, now we have all these autonomous drones buzzing around doing things.
00:36:42
Speaker
Well, maybe some chicken fence or maybe some other already existing quite simple technology might be able to counter this very effectively on the battlefield. And then you're back to square one and all your high tech and all your deep learning maybe didn't even do that much. That's another thing. We don't really know for sure how big the impact is because nobody has really started using it in a manner that is really increasing military effectiveness in a broader scope.
00:37:10
Speaker
And so we also don't know about how easy it might be to fool and to trick and to develop counters against it. That's another thing that I think is maybe part of this rush or this race that we're not putting enough thought into how this might be quite a foolish endeavor in terms of we're spending billions and billions of dollars to train neural nets to do this, that, and the other.
00:37:38
Speaker
Assuming that we will, of course, be able to defeat the enemy with that. But the enemy isn't stupid. Maybe they come up with something quite clever and quite easy that just makes all this multi-billion-dollar effort useless. With the chicken wire fence, is that an actual possibility, that we might be able to stop or defeat autonomous systems with something as simple as a chicken wire fence?
00:38:06
Speaker
Yeah, I mean, Slaughterbots; watching that Slaughterbots video, to me, I was like, whoa, some chicken fence. Boom. Problem solved. You know what I mean? Yeah. Although you equip it with some actuators and it cuts a hole in the chicken fence. You could see how, I mean, we just talked about race dynamics, you would probably see a response from the developers of these autonomous systems pretty quickly.
00:38:31
Speaker
Yeah, for sure. But that's what I meant. It's rock, paper, scissors, and it just keeps on going. And then I will have to come up with some system, maybe some laser that would blind those things, or whatever. And this is just how it goes.
00:38:46
Speaker
There's this concept of entanglement between conventional weapons and weapons with more autonomy. What does this mean? The notion of entanglement is originally one where people became aware of the fact that
Instability and Diffusion Risks
00:39:04
Speaker
there are specific dual use assets that are relevant to both conventional war fighting and nuclear operations. Satellites would be the prime example. If you were to destroy large parts of the US military satellite network, that would be a huge problem for them to conduct conventional operations, but it would also have an impact on their nuclear space awareness, all these kinds of things, you know, early warning, etc, etc.
00:39:32
Speaker
And so people were like, oof, we've got these dual-use assets, and now we have conventional weapons,
00:39:38
Speaker
because of specific technologies that have developed over the last couple of decades, and drones would be one of those things, but also cruise missiles and new rocket technology, hypervelocity projectiles, all these kinds of things, so that these dual-use assets are now more vulnerable to conventional weapons. Whereas in the past, you would have to drop a nuke on this, basically.
00:40:03
Speaker
Now you can, maybe, very precisely target this with a conventional weapon, so you're not really initiating a nuclear exchange, but you're targeting parts of the nuclear infrastructure of your adversary. And so that's where the idea of entanglement came from, with people saying: where
00:40:22
Speaker
before we had this bright line between conventional war and then the red line where we step over into nuclear war, that is getting kind of blurry because of the way technology is developing and also the way that doctrine has developed over the last couple of years and decades.
00:40:40
Speaker
And so, yeah, you can easily see, again, how autonomy might just put this on steroids, you know, because maybe you have, you know, uncrewed weapon systems doing things, and now they're doing them faster, without being remotely piloted anymore. That might, you know, give some people pause and make them think: oof,
00:40:59
Speaker
What about my nuclear assets? One of the ideas that is being kicked around, and I'm pretty skeptical about it, but just to give you an idea of what people are talking about, is this notion of the transparent ocean. So the best way, from a nuclear power's perspective, to keep your nuclear weapons safe so that you have a credible second-strike capability is to put them into submarines and have them submerged.
00:41:26
Speaker
So nobody knows where they are. And whenever you're attacked, you can always fire back. So you have this credible threat of being able to retaliate. And so if everything works out fine, then the first strike will never really materialize.
00:41:41
Speaker
But if you have, like, thousands and thousands of, you know, submerged sensor networks, and drones just autonomously buzzing through the oceans, looking for everything with, I don't know, quantum magnetometers measuring tiny, tiny changes in the earth's magnetic field, all of a sudden the ocean might become transparent. That's the notion that is being kicked around: the transparent ocean. And then your second-strike capability is all of a sudden gone, because
00:42:11
Speaker
you know, your submarine might be a target when the enemy decides to launch that first strike. And here we can see how entangled this is; you have this spillover from the conventional realm into nuclear thinking. And that, of course, is not good. It's just an additional source of instability. Would nuclear facilities on the ground also potentially be exposed to autonomous systems, or
00:42:38
Speaker
be threatened by new autonomous systems?
00:42:43
Speaker
Potentially, yes. I mean, we've had this conversation quite specifically. I remember that maybe 10 years ago, the Russians were quite unhappy with the way weapon systems development in the US was going. And they were like, what if you were developing something like a conventional first-strike capability, where you go into this first strike with all kinds of precision-guided munitions, swarms of drones and cruise missiles,
00:43:13
Speaker
basically disarming our land-based component and destroying all our silos and the road-mobile ICBMs; aren't you just, you know, doing terrible things to nuclear stability? So we had this conversation, for a fact, actually.
00:43:29
Speaker
I'm not sure how much paranoia is in there, because I have a hard time conceiving of a scenario where that actually works and where anybody would be insane enough to do something like this. But at least these conversations are happening for real. And this is something that nuclear powers think about. What do you think the world looks like if we have unregulated diffusion of autonomous systems?
00:43:57
Speaker
What's the relevance of more traditional military assets? How safe does the public feel? Yeah, what does the world look like? I remember I was in Geneva at the United Nations a couple of years ago. I try to remember which year it was exactly. It was before the pandemic, maybe 2017, 18, something like this. And we were in that room, the Convention on Certain Conventional Weapons
00:44:24
Speaker
room, talking autonomy in weapon systems. And Russia has always been, like, the spoiler state there; for years and years they would be saying, we don't even know why we should be talking about this. There is no issue here. Existing international law is enough. And basically all of this, what you're talking about, is science fiction.
00:44:49
Speaker
And a journalist approached me, and she was saying, yeah, I just flew in from Moscow. We were at an arms fair. Have you seen this thing? And she showed me on her smartphone a video that they took at the arms fair in Moscow, and you can look that up, it's on YouTube, they have an ad up, where Kalashnikov was presenting a turret with an object recognition system, basically, which would look for silhouettes of people or a truck and then open fire.
00:45:20
Speaker
And I mean, that was quite jarring, basically the Russian delegation saying, we don't even know what you're talking about. And, you know, Kalashnikov at the same time selling this turret, which selects and engages targets without human intervention, which is an autonomous weapon system firing at basically the silhouettes of people, of human beings. And
00:45:45
Speaker
So your question is, like, what would the world look like? Well, if we were to be fielding these kinds of weapons, we could basically just say goodbye to international humanitarian law,
00:45:56
Speaker
because never ever in a million years, no, I'm not going to say never ever, because, you know, ChatGPT, generative AI, all kinds of things are happening dramatically fast, so I don't know where technology is taking us. But for now, let's say for now and the foreseeable future, I have no doubt that this Kalashnikov system or its successors would never be able to work in an IHL-compliant manner,
00:46:26
Speaker
because it would have to be able to recognize, is this a combatant? Is this a civilian? What is this combatant doing? Is the combatant wounded? Is he trying to surrender? All these kinds of things that machines are not capable of at the moment. And if we were to tell them, well, if this combatant is waving a white flag and tries to surrender, don't fire, then I'd say, great, let's all get a white flag and then, you know,
00:46:54
Speaker
just overrun the enemy, because all the weapon systems wouldn't be firing, because machines are dumb. This is what worries me. It's not, like, the intelligence that's in the systems; it's how dumb these systems are. And yes, people make mistakes, but we don't all make the same mistake at the same time at lightning speed. And so stupid machines making terrible mistakes at lightning speed, that is what worries me. And so
00:47:20
Speaker
in a world where we're just, you know, proliferating, no, diffusing this into every corner of the world, and everybody's using it, I would say IHL will be a real problem. And the other thing is escalation. You know, you could easily imagine something like this just getting out of control, just
00:47:38
Speaker
some sensor network sensing something, maybe it's a mistake. We've had this in the nuclear realm, you know, with the sun glinting on cloud cover, or, you know, a flock of geese that a satellite would pick up and say, well, there's a launch in Russia,
00:47:55
Speaker
or vice versa, and all these near misses that we had because we were relying on sensor systems and automated systems that would tell us what to do. And if we just automate this all the way through, and selection and engagement of targets is just an across-the-board feature of everything that we're doing, then we could be in a shooting war triggered by machines quite quickly, and nobody would be fast enough to pull the plug.
00:48:18
Speaker
And so these would be just two things that really give me concern, where I would say, I'm not saying we shouldn't be using autonomy, but we need to be, like, really careful about it. How do you think autonomy changes the power balance between states? Here I'm thinking: does it make it easier for states with traditionally weaker militaries to defend against or engage states with stronger traditional militaries?
00:48:46
Speaker
I think so. Again, I think Ukraine is demonstrating this; the way they've been using drones for ISR, intelligence, surveillance, reconnaissance, and for targeting purposes clearly shows it.
00:49:04
Speaker
Yeah, it would be basically the first part of the targeting cycle, the finding, fixing, tracking. They're doing this, and they are, you know, successfully beating back what is, on paper at least, a much bigger fighting force. Let's say, you know, what this does to the balance of power generally speaking, or, you know, if we zoom out, is quite unclear.
00:49:31
Speaker
To be honest, I wouldn't venture a guess at this point, because of what I was saying before. We're still, which is a good thing because it gives us some more time to think about this and to put up some guardrails, we're still at the cusp of this. I wouldn't be prognosticating in terms of, well, if China does this and that, then in 10 years...
00:49:57
Speaker
It's quite unclear, also because of the fact that, like I said, some of those things, for instance loitering munitions, right? That would be something that you deploy. It goes up in the air. It hangs around. It loiters, waits for 20 minutes, 30 minutes, maybe an hour. It looks for a specific target signature, say, a Leopard 2 main battle tank, right? It finds this thing. It has object recognition. It sees this
00:50:22
Speaker
piece, the target, and it says, you know, we're back to the 97%. It says: that's 97%, I'm sure that's a Leopard 2 and not a school bus full of children. And let's just say it is autonomous in the critical functions. So the system goes, okay, selected, engage, and it dive-bombs into the tank.
00:50:41
Speaker
And these things are nasty. We've seen this in Nagorno-Karabakh and we're now seeing it in Ukraine. Especially the Russians have, you know, ramped up production of these loitering munitions, and they are terrible for Ukrainian armor.
00:50:59
Speaker
Five years from now, every fighting force will have some sort of short-range air defense. So, you know, turrets, maybe lasers, maybe, you know, ten years from now, microwave beams, I don't know, that just shoot these things down.
00:51:15
Speaker
If everything goes according to, let's say, plan, or the way these things usually go, then this threat to heavy armor on the battlefield right now will have stopped being a threat in five to ten years, because we will have developed countermeasures against it. And so if you look at it from the autonomy perspective, you would be saying, well, these autonomous loitering munitions, they are terrible. They're making the tank obsolete.
00:51:40
Speaker
And I'm like, no, they're not. Give it five years and the tank will have some system going. And then, you know, this loitering munition will be gone and it probably won't be an issue. Probably that turret will be autonomous.
00:51:53
Speaker
The crew in that tank will probably not even be dealing with this. The tank will be looking for these kinds of munitions, and it will autonomously defend against them, and the crew will be safe, and the tank will be just doing tank things. So we're using autonomy to defend against the enemy's autonomy. Is this a positive development? Can we avoid the arms race by developing counter-autonomy systems? Maybe I'm not smart enough to see how we would be avoiding the arms race.
00:52:22
Speaker
It's like, if we were to avoid the arms race, then there would be a point where the rock, paper, scissors dynamic just stops, and I don't see it stopping, because that loitering munition that will be shot down by the autonomous turret on the tank, that won't be around for long. And then there will be loitering munitions that probably go into a wild zigzag
00:52:45
Speaker
when diving down, to avoid the autonomous turret, and then they will be useful again, and then something else will come along. And you know, I wish we'd be spending all that money that we spend on the loitering munitions and the turret on healthcare or a million other things. But I talk to people in the industry as well,
00:53:06
Speaker
and they will tell me: we need autonomy to defend against autonomy. This is already going on. And like I said, we have had autonomy in those turrets and these defensive systems shooting at incoming stuff for decades. And now, you know, cruise missiles and many other missiles, which always were fast, are getting faster and faster. Hypersonic.
00:53:28
Speaker
You don't have a lot of time to decide. And so people will tell me we need to automate the entire thing just to be able to defend against all this incoming stuff that is on the horizon. What I was thinking of was defending against loitering munition attacks from above
00:53:45
Speaker
with autonomy in these tank systems. That seems like a defensive use and a more positive use of autonomy. Now, of course, we can go back to your earlier point about defensive technology quickly becoming offensive technology. And so that distinction maybe isn't as meaningful. But what I was thinking was whether we could defend against autonomous systems using autonomy, but in a way that doesn't
00:54:13
Speaker
provoke the enemy or push this arms race further? Yeah, I guess so. Here's the thing. I had quite the learning curve. As I said, I started catching wind of this in 2007. It's now 16 years later.
00:54:31
Speaker
I look at stuff that I wrote maybe five or six years ago, I throw my hands up in the air and I'm like, why did I overcomplicate this? You know, it's a functional thing, you've got to look at this from a functional perspective, and it's about who or what, human or machine, is selecting and engaging. So it took me a while, it took all of us a while, it took the international community at the UN a while to get there.
00:54:55
Speaker
But the way that we're talking about this now is, I think, the level of differentiation that we should be having when we have this conversation. Because, yeah, I mean, this autonomous turret, which is only and solely defensive, and obviously, you know, tanks already have automated protective systems like Trophy and so on and so forth,
00:55:15
Speaker
that also already exists; you know, systems that would engage an incoming RPG, cutting it in half in the air before it even reaches the tank, that already exists. So we're not talking about sci-fi stuff that hasn't happened yet; that is already in existence. You would just be, you know, further developing these kinds of systems, like a Trophy 2.0, which would then also be good against loitering munitions.
00:55:39
Speaker
And so, like I said, the level of differentiation that we should be having is: what is the use that is IHL compliant, that is not getting us into hot water from an ethical perspective, that is not infringing on human dignity? If we're shooting down loitering munitions, like, no one's human dignity is involved here.
00:55:58
Speaker
And which is also not accelerating battlefield tempo to a point where we're losing control of what is happening on the battlefield, what the Chinese call battlefield singularity, like, the machines just keep fighting and we don't even know what happens anymore. And I would say, like, this turret, this hypothetical turret against loitering munitions on the main battle tank,
00:56:20
Speaker
I think that that satisfies all those criteria.
Vulnerabilities and Unpredictability
00:56:24
Speaker
And that is why I'm saying I'm not against autonomous weapon systems per se. I'm for a responsible manner of using them and for good guardrails in terms of meaningful human control. Don't use them to target individual people by facial recognition or biometric signatures. And don't for Pete's sake build weapons like this Kalashnikov thing that are just, you know,
00:56:47
Speaker
never able to comply with IHL, because there's no meaningful human control in there at all. They just fire at whatever looks vaguely like a human. So again, back to the chicken wire fence. What are the prospects of defending against autonomy by using dumb and cheap solutions? There's the chicken wire fence, but I could also imagine perhaps putting masks on soldiers so that you can't do facial recognition.
00:57:16
Speaker
That seems like a very cheap solution. I have no idea whether it would actually work. But solutions like that, what do you think of that kind of solution? We should workshop your idea. I don't know if it works, but I think it goes back to this whole notion that the enemy also, like, has a say. And a good anecdote, I think, in this regard is the surge in Iraq in, I think, 2006, I'm not sure.
00:57:45
Speaker
when the US just, you know, went into specific cities with much more force. And the way they did that, or at least tried for a while, is they sent in, like, ground robots. And those robots, they had a machine gun and a camera, and they were rolling in there.
00:58:03
Speaker
And what those things effectively did is give machine guns and ammunition to the enemy, from a US perspective, because obviously the insurgents weren't dumb. They just snuck away and hid while this thing was rolling by, and then they went behind it, kicked it over,
00:58:22
Speaker
took the machine gun and all the ammunition and, you know, went their way. And so this notion that we're automating all these things and we're using robots, and now we're putting all kinds of object recognition and, you know, machine learning techniques into them, and that will make it so much better: as I said, that could be a fool's errand.
00:58:43
Speaker
And so when we're talking about autonomy in the selection and engagement of targets, I'm drawing an analogy to self-driving cars, where, I'm not sure if this is still the case, but whenever I check up on it, it is still the case, which
00:58:58
Speaker
I find insane. And that is that I'm not naming specific manufacturers now, but there's one company that makes self driving cars and they're quite advanced. The object recognition system, the computer vision system in that car, last time I checked, you know, even those very sophisticated systems are still being fooled by very, very simple countermeasures, such as a bit of reflective tape stuck onto a stop sign.
00:59:28
Speaker
Any human being would still easily be able to say, that's a stop sign. Some tape is on there, but it's a stop sign. Whereas the system, you know, is tricked by it and, you know, stumbles and is no longer able to recognize the stop sign as a stop sign. There's a term for it: adversarial images. And so I'm thinking about this now and saying, okay, if those self-driving cars that are, potentially, in a future,
00:59:55
Speaker
you know, moving around in a friendly ecosystem, all of them talking to each other, maybe even talking to the environment, if making them work is so hard,
01:00:10
Speaker
how hard will it be to make your system work in an environment that is not friendly, that is not cooperative, that has agents in there that will do whatever they can to trick you, fool you, and do all these kinds of things? And so I'm thinking probably we're moving into a future where you just have a 3D printer. I wrote a short story on this,
01:00:32
Speaker
where basically there's a couple of soldiers, they're in a forest and they have an armored vehicle, and it looks kind of weird. The one guy says, why does it look so weird? And the other says, yeah, we have this 3D printed stuff that we glued on there.
01:00:48
Speaker
It's just really quick and dirty. It's just some knobs and stuff, and it makes the vehicle look weird. And they found out that the enemy's loitering munitions and drones buzzing overhead are now no longer recognizing this thing as what it is.
01:01:06
Speaker
So it trips up the algorithm, basically, and they're now safe from the spying eyes of the enemy. It's just a story, but this goes to show that maybe we will be moving into a future where you can find pretty easy solutions just to fool the system, to trick it, and to no longer trigger the 97% certainty that this is a target. Maybe, like we know from this image recognition business, it says this is a snowplow, it's a snowplow, it's a snowplow, it's a cat.
01:01:34
Speaker
Why did it say cat? These examples are all examples of adversarial input. And what you see with autonomous cars or self-driving cars is that when they encounter some image or some input that didn't appear, or wasn't repeated a lot of times, in the training data, they will be tripped up. So you can imagine, for example, a self-driving car at a carnival on the street, a kind of festival with people in costumes.
01:02:02
Speaker
How many times has a self-driving car encountered a person in a pink rabbit costume? Probably not that many times, and that could trip it up. As you described, you could see how that could be used on the battlefield with autonomous weapon systems.
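For concreteness, the standard textbook version of this kind of adversarial input is the fast gradient sign method: nudge every pixel slightly in the direction that most increases the classifier's error. What follows is only a minimal sketch under assumed tooling; the PyTorch model, the image file name, and the perturbation budget are all placeholders for illustration, not the systems discussed in the conversation.

```python
# Minimal FGSM sketch: a small, human-imperceptible perturbation can flip a
# classifier's prediction. Model, image file, and epsilon are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
image = preprocess(Image.open("stop_sign.jpg").convert("RGB")).unsqueeze(0)  # hypothetical input
image.requires_grad_(True)

logits = model(image)
label = logits.argmax(dim=1)          # the class the model currently predicts

# One gradient step that increases the loss for that prediction.
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 0.03                        # perturbation budget ("a bit of tape")
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("before:", label.item(), "after:", model(adversarial).argmax(dim=1).item())
```

Whether the prediction actually flips depends on the model and the image, but the point stands: the change is tiny in pixel space and essentially invisible to a human observer.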
01:02:22
Speaker
Perhaps you give your vehicle some pattern that you know trips up the enemy's system. Or more bleakly, perhaps you can glue that pattern on the enemy's vehicle or station and so have the enemy's autonomous systems attack the enemy itself, basically. So an instance of friendly fire.
01:02:47
Speaker
It sounds to me like you think this is pretty likely. Pretty likely? I don't know. But it's definitely, I think, something that we should be aware of. And there's not a lot of people thinking about this. It's like, all the political and military discourse is on, this will be great. We will be using this. And it will give us this fantastic advantage.
01:03:11
Speaker
And I'm like, maybe, but think one step further, what might an adversary be doing? So red team this. And yeah, I can think of scenarios like this, where you will be able to counter with fairly simple measures something that costs a billion dollars to develop. And it just evaporates in terms of effectiveness and just doesn't work the way it's intended.
01:03:37
Speaker
Yeah, and I think you made an important point by saying that in a war situation, we're dealing with an enemy that's thinking very hard about how to trip up our systems. And what we see with our kind of everyday systems, you know, with our self-driving cars, is that they're tripped up by situations where there isn't even some intelligent adversary trying to trip them up. They're just being tripped up by the random effects of everyday life.
01:04:06
Speaker
I mean, people are getting really creative in war. One thing that the Russians are doing in the minefields is just to stack mines, three on top of each other.
01:04:20
Speaker
so that when the mine-clearing equipment gets in, which is designed to clear basically one mine at a time, to eat it up and absorb the force and then just keep rolling, they just blow up those plows that they use to plow the minefield, because they stack the mines.
01:04:38
Speaker
And, well, now what? It's a super simple solution, but you've stopped the enemy, in this case Ukraine, the enemy from the Russian point of view, literally in their tracks.
01:04:53
Speaker
Just by stacking mines, which apparently nobody had thought of yet. And I'm just saying, this is the way it works. People are inventive around these things. What about cyberattacks on autonomous systems? You've written about manipulating GPS data, for example. How feasible do you think that is?
01:05:12
Speaker
Well, we should be careful at this point because, I mean, in a way autonomy is pursued precisely because you're then less susceptible to cyberattack. The idea of having a weapon system that could also be operating autonomously, in terms of also selecting and engaging targets without human intervention, is for the system to be capable of fulfilling the mission even when the connection to the operator is severed.
01:05:39
Speaker
So in that sense, autonomy gives you this comfortable situation where having a command and control link is optional. So in that sense, if you have a system that is navigating not only with GPS data but also with other data points, you know, electro-optical sensors, or maybe, again, something with quantum magnetometers, sorry,
01:06:02
Speaker
but I recently looked into those and I was like, wow, this is interesting. They've advanced quite substantially over the past couple of years. So if you give it additional points of data to work with and to be able to navigate, then it wouldn't be a problem if the GPS signal is either denied to the system or if there's a spoof going on. That's something that you can also do. You can feed it GPS data that's wrong,
01:06:29
Speaker
to make it think that it's in places where it isn't, and maybe even, you know, land in a country where it's not supposed to land and stuff. Supposedly one of the very, you know, secret US drones has been captured by Iran this way, just by feeding it wrong GPS info, so it went down someplace where the Iranians could recover it. But, um,
01:06:55
Speaker
Yeah, it depends. One of the key notions really is to have autonomy in there so as not to have to rely on GPS and other things. But obviously feeding the system spoofed data, or just, you know, for instance, images like we were talking about before, that is something that would trip it up in a way that only an autonomous system can be tripped up.
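To illustrate the cross-checking idea in the simplest possible terms, here is a hedged sketch of a plausibility test: compare the GPS fix against an independent dead-reckoning estimate and treat the GPS data as suspect when they diverge too far. The threshold, data structures, and values are invented for the example; real navigation stacks fuse sensors far more carefully than this.

```python
# Illustrative sketch: flag a possible GPS spoof by comparing the GPS fix
# against an independent dead-reckoning estimate. All values are invented.
from dataclasses import dataclass
import math

@dataclass
class Position:
    x: float  # metres east of a local reference point
    y: float  # metres north of a local reference point

def distance(a: Position, b: Position) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

def gps_plausible(gps_fix: Position, dead_reckoning: Position,
                  max_divergence_m: float = 50.0) -> bool:
    """Return False when GPS and the independent estimate disagree too much."""
    return distance(gps_fix, dead_reckoning) <= max_divergence_m

# Example: dead reckoning says we are near the origin, GPS claims ~400 m away.
dr_estimate = Position(x=3.0, y=-2.0)
gps_reported = Position(x=250.0, y=310.0)

if not gps_plausible(gps_reported, dr_estimate):
    print("GPS fix implausible - fall back to inertial / optical navigation")
```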
01:07:17
Speaker
And cyber, you know, just to say something about cyber: the same thing, basically. If that system is completely sealed off and just operating on its own, and there's no communication going in and out, you won't be having any cyber issue there,
01:07:33
Speaker
unless you've compromised it beforehand in some shape or form, you know, with the software, the firmware, or something, so that when something specific happens, something is triggered in the system. But yeah, this is not necessarily an attack against the system from the outside while it's out there operating, doing stuff.
01:07:54
Speaker
Yeah, in cybersecurity, if you want to make something very secure, you isolate it from the world, you make it impossible to extract information about it or to feed it information, but you can see how that just wouldn't work for autonomy or autonomous weapons because they need this data to function and to navigate.
01:08:18
Speaker
You've written about the unpredictability of autonomous systems. And this introduces another complexity around how potentially autonomous systems are perceived by the enemy. So we imagine a drone or a robot coming towards you, and you don't know whether this system is autonomous. What are the effects of that?
01:08:43
Speaker
Yeah, the unpredictability is what got, I think, most of the technical folks into this field in the very beginning. My background is in political science; as I said, I've done a lot of work on nuclear weapons. I started looking into high-tech weaponry in the noughts, and then, you know, 2007, autonomy, all these kinds of things. But I look at it from a political science, strategic stability, arms control point of view.
01:09:13
Speaker
And the tech people, most of them, they were like, you can't be serious. The technology is not reliable enough to be used in this kind of setting. There's a reason that, for instance, civil aviation is not touching deep learning with a 10 foot pole, because this is not the way air travel safety works. They don't want any probabilistic systems in there. It needs to be like watertight for them. 99.8% is not good enough for civil aviation.
01:09:42
Speaker
And so the tech people were like: reliability and unpredictability, this is a huge problem for us. That's why we, with our technical knowledge about how these systems are actually built and how neural nets are trained and what they can and cannot do and what we understand and do not understand about them,
01:10:02
Speaker
need to be telling this to the military people. Because when this all started, in 2010, 2013, 2014, the military seemed to be like, yeah, this is amazing. This is the silver bullet. We put some magic AI sauce into every weapon system and then everything will be so much better. And the tech people were like, no, no, the reliability and the unpredictability, that will, like,
01:10:27
Speaker
really come back and bite you, because, for instance, we have no idea how your system that you've trained on some specific set of data would be interacting with an enemy system trained on some other data. And we've seen this, for instance, in the financial markets with flash crashes,
01:10:46
Speaker
where trading algorithms just trip up and get in each other's way, or just encounter something on the markets that they're misinterpreting, kind of the same as with the self-driving cars, and then we have, you know, the British pound
01:11:01
Speaker
crashing down and losing, you know, many percentage points of its value. And then usually someone pulls the plug and just takes this trading algorithm offline. And if we imagine the same thing, just algorithms getting in each other's hair, then we have, as I mentioned before, this notion of a flash war, like a shooting war being triggered algorithmically, and maybe at a tempo where we're having a hard time stopping it and, you know, stopping it cascading through
01:11:31
Speaker
you know, our entire military apparatus in the end. And these things, surely, I think, have been discussed a lot over the last decade or so. And I can tell you one thing. As I said before, this Kantian notion of human dignity, not a lot of people are really sharing this, at least not to the same degree that, for instance, I would; I wrote about this. But this notion of unpredictability, of
01:12:00
Speaker
delegating functions from humans to machines, and the machines performing those functions automatically, autonomously, whatever you want to call it. And us losing control, this is widely shared as a big risk. And people know this in Beijing, in Moscow, in Berlin, in Brussels, in Paris, and in Washington, wherever you want to go, this is what gives people the creeps.
01:12:29
Speaker
And so, especially in the time that we're living in now, when everything is quite polarized and, you know, the international situation is tense, you need to find specific points of, like, a common denominator, points of convergence where everybody can at least have a minuscule agreement on where the problem might lie.
01:12:54
Speaker
And this is, I think, where most people can agree: that we're getting into potentially strategic-instability hot waters of a kind we've never seen before if we're rushing into this without the guardrails and the control mechanisms in place, and just automating things that then, you know, become runaway systems that we can't control anymore. Are we likely to trust autonomous systems too much?
01:13:21
Speaker
There's this concept of automation bias in which we trust the autonomous decisions too much. How much of a problem do you think this is?
01:13:34
Speaker
It's definitely a problem because we've seen the results of it. I mentioned this before: in 2003, a US Patriot battery in Iraq shot down two friendly airplanes, a British Tornado and, I think, a US F-18.
01:13:53
Speaker
In the end, there's a great report on this written by a person who knows the Patriot missile defense system better than probably any other person on the planet. And he says that he was not surprised by this, because the problem that you're dealing with is you have this crew of operators operating that system.
01:14:16
Speaker
And it's 23 hours, 59 minutes and 59 seconds of total and utter boredom. And then there's this one second of sheer terror, where you need to be deciding what to do. And you will be under heavy pressure to just do what the machine is suggesting. And that is automation bias, basically. You've trained with it.
01:14:44
Speaker
And in all the training simulations, the machine was always right. And when the light was green, you pushed the button to shoot down the enemy aircraft or the enemy Scud missile, whatever you have. And the system works, and it's great.
01:15:00
Speaker
It is a big problem if we were to be implementing this kind of automation across the board in many, many more systems. Probably there's people who are experts on automation bias and the ergonomics of human-machine interfacing.
01:15:21
Speaker
I'm not an expert on this, like, at all. I've talked to people who do this and who deal with this, and it seems like there is no be-all-end-all solution to this. But it is, in the end, a question, I think, of training. And of, yes, I mean, UI design, and basically trying to make the machine talk to the human
01:15:43
Speaker
in a way that makes the whole human-machine team less susceptible to stuff like this. But it is definitely a problem. And the situation is also inherently difficult, even with the best UI design and setup for extracting the best decision-making from humans, I would say. I mean, you're making decisions with uncertain information and under enormous time pressure. As you mentioned, you maybe have
01:16:13
Speaker
a minute to decide. We discussed 97% previously. Say the system presents you with: there's a 97.256% probability that this is the enemy. Now you get the impression that the system is much more precise. You think, okay, if it says 97.256, then it's definitely not 40%.
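As an aside on the interface point, one common and purely illustrative way to avoid communicating false precision is to bucket the raw model score into coarse bands before showing it to the operator. The bands, thresholds, and wording below are invented for the sketch, not taken from any real system.

```python
# Illustrative only: present a raw model score as a coarse band rather than a
# misleadingly precise percentage. Bands and labels are invented for the sketch.
def display_confidence(raw_score: float) -> str:
    if raw_score >= 0.95:
        return "very high (not certain)"
    if raw_score >= 0.80:
        return "high"
    if raw_score >= 0.60:
        return "moderate"
    return "low - do not rely on this output"

print(display_confidence(0.97256))  # prints: very high (not certain)
```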
01:16:39
Speaker
But yeah, displaying 97.256% would be an example of bad UI if, in fact, as I would say is overwhelmingly likely, the system is not that precise. Yeah, what do we do about this, if anything? I mean, the approaches that I have seen, which I found, to be perfectly honest, a bit underwhelming, were that, you know, the problem was discussed and people came in and they were like, we're going to be using explainable AI. And I was like, okay,
01:17:08
Speaker
You know, what does that entail? And, you know, the answer was heat maps.
01:17:13
Speaker
So you would have an object recognition system, and it would tell you: 97%, this is an S-400 enemy air defense system. And then you could basically push a button, and the system would display a heat map telling you which parts of that picture it used to come to this conclusion,
01:17:40
Speaker
so that the human looking at this would then maybe have an inkling,
01:17:47
Speaker
ah, it's drawing this conclusion from looking at the trees in the background, you know, this weird image recognition stuff that happens. And then, you know, the human maybe might be able to say, hmm, maybe it's not 97%. Maybe this is just, you know, a lorry, like a truck with logs on it, which kind of looks like an S-400, but the system got confused and it's looking at the trees, stuff like this.
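For readers wondering what such a heat map is mechanically: saliency methods in the Grad-CAM family weight a convolutional layer's feature maps by the gradient of the predicted class and sum them up. A minimal sketch, assuming PyTorch; the model and image file are placeholders, not the military systems being discussed.

```python
# Minimal Grad-CAM-style sketch (illustrative only): highlight which regions of
# the image most influenced the predicted class. Model and image are placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

feats = {}
def save_activation(module, inputs, output):
    output.retain_grad()        # keep the gradient flowing into this feature map
    feats["act"] = output

model.layer4[-1].register_forward_hook(save_activation)   # last conv block

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
image = preprocess(Image.open("vehicle.jpg").convert("RGB")).unsqueeze(0)  # hypothetical image

logits = model(image)
logits[0, logits.argmax()].backward()   # gradient of the top-scoring class

act = feats["act"]                                   # shape [1, C, 7, 7]
weights = act.grad.mean(dim=(2, 3), keepdim=True)    # per-channel importance
cam = torch.relu((weights * act).sum(dim=1)).squeeze()
cam = cam / (cam.max() + 1e-8)                       # normalised 7x7 heat map
print(cam)                                           # would be upsampled and overlaid for display
```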
01:18:11
Speaker
And I was like, if that's the best we can do by leveraging explainable AI, showing this kind of heat map, I'm not sure we've really solved the problem. And also, one of the things that people will always impress on me from the military point of view is that they don't have time. People in these situations do not have time to look very carefully at this heat map and take 30 seconds to think about it.
01:18:39
Speaker
Yeah, we don't have it figured out. The only thing that I can say is that this gives us all the more reason to pump the brakes and not be rushing into this, and maybe come up with good guardrails first before implementing these things and trying to, you know, use them.
01:18:58
Speaker
Do you think that humans will remain relevant to decision making over the long
Human Control and Decision-Making
01:19:04
Speaker
term? Do you think it'll be important to keep a human in the loop of making these life or death decisions? Because we're discussing these systems as they exist now. Presumably they'll become more advanced, maybe it'll take longer than expected, but at some point they will probably be quite good. There are areas of life that
01:19:25
Speaker
are automated now, and we wouldn't dream of having a human do certain jobs now.
01:19:33
Speaker
Over the long term, should humans remain in the loop of these autonomous systems? I think the first thing that I would say is that the in-the-loop, on-the-loop, out-of-the-loop terminology is not really up to snuff, so I'm not using it anymore, because the whole notion of having a system under meaningful human control, which is basically the state of the art, conceptually speaking, in terms of how we're talking about this,
01:20:01
Speaker
does not mean a human in the loop or remotely piloted. Meaningful human control can mean full autonomy in the weapon system. So it can mean that the whole human machine system is set up in a way that the human is no longer performing any of those functions in the targeting cycle. The machine is doing all of it.
01:20:25
Speaker
And still, you would consider this to be under meaningful human control, because the system was designed to be controllable, and it is used in a way that, when it's activated, the human operator can foresee what is going to happen, can intervene when, you know, something goes wrong, or when he or she wants to, and everything that happens is retraceable to the human, so that you're able to have a legal chain of accountability.
01:20:53
Speaker
This is obviously a way more complex way of thinking about this than saying, well, a human should always be in control in the sense of remote piloting, basically, doing everything remotely. That is not what people are talking about when they're saying we want meaningful human control over weapons.
01:21:08
Speaker
The question really now is, like, how do we do this? And the answer is we do it in a differentiated context-dependent manner. So let's stick with this notion of defending against incoming munitions. Say you're a Navy frigate now, and you're on the high seas, and you're scanning the horizon, and you know what's around you. And there's most definitely no bus full of nuns.
01:21:33
Speaker
by a sheer miracle flying through the air. That is not happening. Now let's say there's a couple of contacts coming your way very fast. And this is probably a bunch of anti-ship missiles.
01:21:47
Speaker
It would be, I think, arguably a very good idea, in fact, to then flip that switch and have that frigate, autonomously in that way, defend itself against all these incoming targets and save the lives of the crew. And you would consider this to be under meaningful human control. You would know what would happen. You would be able to deactivate it at any point in time. And if something were to happen, and it's, as I said, quite unlikely, it would be clear who's responsible,
01:22:15
Speaker
because we had the proper tactics, techniques, procedures, and regulations in place. Now, the US, for instance, they explained this once at the UN in Geneva, they have a system similar to Phalanx, which they use on navy vessels, and which they put on a trailer, basically, for land operations, and they would park this in front of a forward operating base. And they took out the option of even putting this into fully autonomous mode, because they were saying it's way too dangerous.
01:22:43
Speaker
You know, there's so much cluttered stuff: people are walking by, maybe there's a boy on a bike, now there's a football, there's birds in the air, all kinds of things are happening. Maybe there is the bus full of nuns
01:22:56
Speaker
driving through that whole scenario. And so you need a completely different set of rules to operate this thing and have it under meaningful human control. And lastly, let's say you're infantry in an urban environment and you're using something like what I mentioned before, the Elbit Lanius system, some quadcopter that also has maybe some explosives or even some weapons to kill people with.
01:23:23
Speaker
You wouldn't be throwing this into a building and just saying, well, kill everything that moves or has a roughly 37-degree body temperature. You would be doing something like having humans looking at the feed, making judgments about what is happening. And so, to get back to the original question, what is the human's role? I would say it depends, but it should be as much as possible,
01:23:47
Speaker
depending on the situation. I'm not saying these people on the frigate need to die just because we can't be using autonomy. They can use this to defend themselves. But I think those infantry folks in the urban environment, they will have to also accept maybe some more risk to them. Maybe it would be easier to just have a bunch of slaughter bots go in there and kill everyone.
01:24:12
Speaker
But they would have to be under the obligation, lawfully, to accept some risk to be able to make the judgments that humans need to be making. Which is: who is a combatant? Who is a civilian? Whom can I engage, and with what amount of force? That's also another thing
01:24:33
Speaker
that humans need to be deciding. We need to be discriminate, we need to be proportionate, we need to take precautions in attack. That's stuff that is just coming out of the law, and it's directed at humans; humans are supposed to be making these decisions. And they need to be doing this, and I think they will be doing this for quite a while, because
01:24:49
Speaker
machines, as far as I can see, are incapable of these discrimination decisions. They are most certainly unable to come up with what we would call a proportionate attack in a split second in a specific situation. We wouldn't even know how to train systems to be able to make these kinds of judgments.
01:25:08
Speaker
And so, yeah, it all goes back to who or what is deciding what, when, and where. And that depends on the operational context. And we should be having
01:25:20
Speaker
humans do the stuff that humans are good at, and we are super good at understanding situations, rather than just recognizing objects in an image. We know what happens. What is he doing over there? Why is she running out of that house? A machine will not understand. We have an immediate notion of what is happening, and humans should be doing this. And we can automate other things, stuff that machines are good at.
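Since meaningful human control is described here as context-dependent rather than a single global switch, one way to picture it is as an explicit policy mapping the operational context to the highest level of autonomy that may be activated. Everything in the sketch below, the contexts, mode names, and mapping, is invented purely to illustrate the idea; it is not a real doctrine or system.

```python
# Purely illustrative: a context-dependent policy for which autonomy mode may be
# activated, echoing the frigate / forward base / urban infantry examples above.
# Contexts, modes, and the mapping are invented for the sketch.
from enum import Enum

class Mode(Enum):
    HUMAN_AUTHORISES_EACH_ENGAGEMENT = 1   # human decides every engagement
    SUPERVISED_AUTONOMY = 2                # machine engages, human can veto or deactivate
    FULL_AUTONOMY_DEFENSIVE_ONLY = 3       # e.g. anti-missile defence on the high seas

ALLOWED_MODE = {
    "naval_air_defence_open_sea": Mode.FULL_AUTONOMY_DEFENSIVE_ONLY,
    "counter_rocket_near_populated_base": Mode.SUPERVISED_AUTONOMY,
    "urban_infantry_operation": Mode.HUMAN_AUTHORISES_EACH_ENGAGEMENT,
}

def authorised_mode(context: str) -> Mode:
    # Default to the most restrictive setting when the context is unknown.
    return ALLOWED_MODE.get(context, Mode.HUMAN_AUTHORISES_EACH_ENGAGEMENT)

print(authorised_mode("naval_air_defence_open_sea").name)
```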
01:25:47
Speaker
And so it all goes back to the fact that there's no really black-and-white, super easy answer here. It is complicated, but it's not too complicated. I think we can figure it out. We've figured out other complicated stuff in the past and we've come up with rules to make it work in a way that we would say
01:26:04
Speaker
is compliant with the law and in line with our ethics and is not getting us into extra risk. We can imagine looking at an enemy and then having to make the decision, okay, is this enemy intending to surrender? Extracting information about the intention of a person in an image is probably beyond the capabilities of current autonomous systems.
01:26:31
Speaker
But it's very much something that humans evolved to do.
01:26:35
Speaker
There's a fantastic picture I saw in one of those image recognition papers. And it's like backstage, like a hallway. And it's Barack Obama and a bunch of other people from his administration. You know it? Oh, listen, this is an image where Obama is playing a prank on another, I'm guessing, government official, where Obama is leaning on the scale to make the person weigh more than he actually does. Exactly. And everybody's laughing in the picture.
01:27:04
Speaker
And it's like, you look at this and you immediately get the prank. You get why it's funny. You get why everybody's laughing. And it's just a fun picture. Ask a machine what is happening in that picture. You won't get a good answer. Although I will say, I think we can try to feed that image to a large language model, perhaps in the near future, and things might be different. Here might be a good place to distinguish two phrases. This is a bit of legal terminology. So we have
01:27:34
Speaker
meaningful human control versus appropriate human judgment. To outsiders, these sound like they're probably the same, but perhaps you can explain why context matters here and why these phrases are actually pretty different.
01:27:50
Speaker
They are in fact quite different, and the second phrase is actually appropriate levels of human judgment, which makes it even more abstract, I'd say, in a way. So, the US was quite early with regard to all of this, also in terms of putting out regulation. They have a document with the number 3000.09, a directive on autonomy in weapon systems, and they issued this for the first time in 2012.
01:28:18
Speaker
And now they are, I think, in their third revision of this document. And this notion that you laid out, these appropriate levels of human judgment, is still very much the Archimedean point in the way they think about this. Everything revolves around these appropriate levels of human judgment.
01:28:39
Speaker
The notion is kind of the way I described it before, that humans do human things and machines do machine things. And the way the doctrine thinks about this is that we will keep this IHL-compliant and deal with it in a responsible way
01:28:59
Speaker
by making sure that we have appropriate levels of human judgment in that whole system. And that basically starts with the conception and the initial design of the weapon system, and the prototyping, and the procurement, and the certification and validation, and, in the end, the actual use of that system.
01:29:20
Speaker
And people looked at this and they were like, that's a bit too big for my taste. Basically, they were saying: what are these levels? When do we know that human judgment, whatever that is, is appropriate? At which point? And how is the human judgment being fed into that system?
01:29:41
Speaker
And so that is how meaningful human control basically came about: people were saying we need something that is more specific, something impressing on everyone that we want the human to be in a stronger role in this human-machine relationship, which is what we're really talking about, because we're not talking about a category of weapons, we're talking about functions and who's doing what.
01:30:09
Speaker
And yeah, that's how it came about, meaningful human control, which kind of at the beginning was like an empty signifier. I perceived it as something that wasn't fully fleshed out when it was put out, but it turned out to be a very useful concept, because the more people thought about this, the more it got fleshed out: okay, we want the human to have some role.
01:30:37
Speaker
Do we want human judgment? Nah, maybe judgment is not really enough. Maybe we want control. Okay, we want human control. Now, what kind of human control do we want? Well, we don't want the human to be sitting in a booth looking at a screen and, whenever the screen turns green, pushing a button. We don't want a rubber stamp. Exactly. We want actual, like, meaningful human control. That's what we want. We want humans to understand what it is that they're doing.
01:31:05
Speaker
And so this is kind of how it came about. And this is how meaningful human control is, conceptually speaking, a stronger idea, putting more emphasis on the human role in the human-machine interfacing, and how it separates from appropriate levels of human judgment.
01:31:24
Speaker
And then all the rest came afterwards: the things that I was basically listing before, when I was saying it needs to be foreseeable what happens, the system needs to be administrable, there needs to be traceability to be legally accountable. All these kinds of things, control in use, control by design, those came afterwards. This is when research groups, and just different groups and people working on this,
01:31:54
Speaker
came up with ideas of how we could flesh out meaningful human control so that it is actually, in the end, viable in specific operational contexts.
01:32:04
Speaker
And so we can make it so that the human-machine team is working in a way that we would say, from the outside, okay, the system as a whole is under meaningful human control. Perhaps we should discuss the political possibilities here. Why is it difficult to regulate autonomy if we think internationally?
International Regulation and UN Efforts
01:32:22
Speaker
I think one of the reasons should be fairly evident from the conversation that we've been having. It is a bit more complicated than, for instance, counting nuclear warheads.
01:32:31
Speaker
Yeah, we might say it's complicated enough to count nuclear warheads. Yeah, it is. I mean, also doing it in a way so you can do verification without giving away any of the secrets. Yeah, that's difficult stuff. But you probably won't have as hard a time agreeing on what a nuclear warhead is. And so that part definitely is easier compared to everybody agreeing on what we're talking about when we're talking about autonomy in weapons.
01:32:59
Speaker
And I know this for a fact, because I was at the UN from the get-go. And in the beginning, people were talking about drones. And then for a while, we were talking about the Terminator. And then we were talking about a category of weapons, and nobody was able to define what this category is. And then finally, you know, the ICRC and some other strong voices, they were saying, well, it's about the selection and engagement of targets without human intervention, and then it dawned on everybody: well, then it's basically a functional issue. So we need to be thinking about this
01:33:27
Speaker
in terms of the functionality and who's doing what. And so that took a while, as I said, it took me quite a while too, and the process at the UN is, I think, super valuable and great because it gave this learning opportunity to everybody: to the states, to civil society, to academics.
01:33:45
Speaker
And so we learned about this, and we are now where we are, which is: we kind of know what it is that we're talking about. We also know what we should be doing about it. As I was saying, we can regulate. We could say we want meaningful human control depending on the operational context; obviously, you have to granulate this finer and finer depending on what it is that you're talking about, like the frigate or the infantry
01:34:10
Speaker
peeps in the urban environment. But you could be saying that this is what we want. And this is also how a lot of international law works. It sets a super abstract norm and says, this is the norm, this is what we want. What it specifically means in specific contexts is quite removed from that, and nobody would expect an international legal document to spell that out for everybody in its finest detail. But we could be agreeing on that norm.
01:34:38
Speaker
Now, why don't we? Why is it so hard? It's not because we haven't understood at this point what it is that we're talking about; we have. If you meet people who in 2023 say, well, we haven't defined what a lethal autonomous weapon system is, you can say, well, either you have no idea what you're talking about or you're trying to stall the process.
01:34:57
Speaker
But people will still say this at the UN, and you know that they're just doing it to pump the brakes so we're not getting anywhere. Why is that the case? Is it specifically adding the word lethal to autonomous weapon systems? So what is it that's lethal? Lethal, yeah.
01:35:12
Speaker
Well, okay, that's another thing. Yeah, the lethality part. So there are two reasons. Why is it that you would still meet someone who would say, lethal autonomous weapon systems, we don't have them defined yet, and so we can't enact any regulation? Like I said, getting the complicated business done and understanding what it is that we're talking about, that's done. Why are we still not enacting any
01:35:35
Speaker
rules on it? Because there's no political will. People will, for instance, rather spend months debating the lethality part of lethal autonomous weapon systems, which is the moniker that is being used at the UN, LAWS, which is how it started. And the way the UN works is, you know, at some point in the dawn of time,
01:36:00
Speaker
you know, regarding the discussion of autonomous weapon systems, someone thought it would be a good idea to name it this way, and then they all agreed upon this, and there's no going back from it. It's only very, very slightly that we're trying to move away from this in newer documents that I looked at, you know, just the other day, that are being circulated at the UN in Geneva, where they're trying to drop the L.
01:36:24
Speaker
Because you don't need lethality in there. First of all, lethality is a consequence of a use of force. You attack someone and then that ends up being lethal. It is not necessarily a characterization of the weapon system you're using. You know, non-lethal or less than lethal weapons can of course be used in a way so that they're lethal. So it doesn't make any sense to put that in there from the get-go.
01:36:48
Speaker
And also, that is something that we haven't talked about. But that is also, of course, an issue. Imagine a huge system of non-lethal weapons autonomously dealing out violence. Something like a social scoring system coupled to a drone swarm, where if you jaywalk, you immediately get tasered by a drone autonomously. This is a terrible, oppressive thing. And it is autonomy in weapon systems.
01:37:18
Speaker
But no one would be saying, well, that's OK because it's not lethal. Of course it's not OK. It's a terrible, terrible notion. So you don't need the lethal part. The AWS part of the LAWS moniker is fine, as long as you translate it as autonomy in weapon systems. And why don't people do anything about it? Because, as I was alluding to before,
01:37:42
Speaker
even before Russia attacked Ukraine in 2022, attacked it again, one should say, we were kind of stuck at the UN in Geneva, at the CCW. And, to be perfectly honest,
01:38:01
Speaker
I thought, well, all of a sudden the pandemic made it possible for us to watch it remotely via UN Web TV. And I was like, oh, that's an option now. Now I don't have to go there in person anymore. And also, as soon as you were removed from the room, you saw how dismal the process really is,
01:38:24
Speaker
even though it was very valuable in terms of creating this learning curve, and it's still a valuable process to have. I'm not saying we should no longer be talking about this, but it is not a process that you can expect to produce anything tangible. The CCW will not produce any new law on this. That is fairly certain. Wrapping up here, the final question is, where are we in the process of regulating autonomous weapon systems?
01:38:53
Speaker
What are the prospects of getting some form of norm or prohibitions established, maybe a treaty, maybe via soft law? What do you think? So it's not as bleak as it maybe sounded when I was speaking about the process at the CCW and how terribly stalled it is there, because there is stuff happening at the UN General Assembly First Committee in New York.
01:39:18
Speaker
There will hopefully be a resolution that will be not a huge but an important step, because while it's not necessarily opening up a new forum for this, it will task the Secretary-General to do additional things on this and just give civil society and, you know, other stakeholders in this process an additional lever to pull on and say, we need to be doing something about this. And if the Secretary-General, for instance, came back and were to say,
01:39:46
Speaker
I recommend the international community do something about this, open negotiations, that would, you know, make this lever even more powerful, and we could maybe, you know, get states to finally sit down and talk about the issue with a bit more substance. I mean, that's something that most people also, I think, don't understand. We have never negotiated this.
01:40:09
Speaker
It's just been talked about. So everything that has happened in the UN was just discussion. Nothing is really under negotiation in terms of we're negotiating
01:40:20
Speaker
rules or binding treaties or new international law. And so I'm not without any hope, but it is a marathon and not a sprint. I think what needs to be done is clear. We kind of put it together in our conversation quite nicely. I think there's just one thing that should be prohibited, and that would be weapons that select and engage targets without human intervention going after specific biometric signatures.
01:40:48
Speaker
That is just something that I would outlaw. And if we were to come up with binding rules, they should say, in a very abstract manner: no weapon systems that cannot be put under meaningful human control. That is it in its most abstract sense, which I think the international community should be agreeing upon. And as long as that is not possible, because
01:41:11
Speaker
specific states just do not want to move forward on this, even an inch. I think it's important to just have domestic processes on this. I've been active in Germany trying to tell people, hey, we can do something, we can have a doctrine, we can have an official document saying,
01:41:31
Speaker
The German armed forces, the Bundeswehr, is going to use this in this and that manner, but we will never be doing this, that, and the other. And then maybe compare this to what the French are doing and the Dutch are doing and the Brits are doing and the US is doing, and maybe come up with, you know, good best practices, also to help build this, what I was calling before, this proto-norm, so people know that you just don't rush into this.
01:41:56
Speaker
And to flesh out this proto-norm and give people a clearer and clearer idea of how we can be using this and the things that we should definitely avoid. And I think this way we could at least be coming to something like soft law before we get to anything binding, something like a catalog of best practices or a code of conduct or something,
01:42:21
Speaker
which is not great. I would hope for something that is way stronger than this. But, you know, as someone who's been doing this for a decade now, I'd be happy with at least something that we can then build upon, because, as I said, it's a marathon, not a sprint, unfortunately. Frank, thanks for coming on the podcast. It's been a pleasure talking to you.