
Why Ban Lethal Autonomous Weapons

Future of Life Institute Podcast
Why are we so concerned about lethal autonomous weapons? Ariel spoke to four experts –– one physician, one lawyer, and two human rights specialists –– all of whom offered their most powerful arguments on why the world needs to ensure that algorithms are never allowed to make the decision to take a life. It was even recorded from the United Nations Convention on Conventional Weapons, where a ban on lethal autonomous weapons was under discussion. We've compiled their arguments, along with many of our own, and now, we want to turn the discussion over to you. We’ve set up a comments section on the FLI podcast page (www.futureoflife.org/whyban), and we want to know: Which argument(s) do you find most compelling? Why?
Transcript

Introduction from the UN Convention

00:00:04
Speaker
Hey everyone, I am Ariel Conn with the Future of Life Institute and I am very excited to be putting this podcast together from inside the United Nations at Geneva.
00:00:15
Speaker
So if the audio sounds weird or echoey or if there's people talking in the background, my apologies. That's what it is. But I am here representing the Future of Life Institute at the United Nations Convention on Conventional Weapons Group of Governmental Experts meeting on lethal autonomous weapons. And that will get shortened down later to just the CCW.

Why Ban Lethal Autonomous Weapons?

00:00:41
Speaker
I wanted to bring together a bunch of people who have been working on these issues of lethal autonomous weapons to talk about why we're all concerned about lethal autonomous weapons being developed and why we want to see a ban and what the various arguments are in favor of a ban.
00:00:57
Speaker
This is also a very special podcast because later on we're going to try to get more audience involvement in this discussion. These are pre-recorded so we can't do anything live, but we'd like to try to get everyone listening to start giving feedback about which arguments you find most convincing for banning lethal autonomous weapons.

Medical Community's Role in Weapon Ban

00:01:20
Speaker
My first interviewee is Dr. Amelia Jaworski. She is a physician scientist and entrepreneur who cares deeply about the issue of lethal autonomous weapons. And she started the nonprofit, Scientists Against Inhumane Weapons, which is working on education and advocacy around this cause. So Amelia, thank you so much for joining us.
00:01:42
Speaker
Thank you so much for having me, Ariel. So I'm going to dive right in because I know you've done a lot of work just in the last couple of weeks. Your organization is very, very young and you've hit the ground running. So I want to talk about some of the work you've done so far, especially with the medical community and why you think the medical community is so important to involve. So maybe that's actually the first question to start with: why do you think it's so important for the medical community to be involved with the issue of lethal autonomous weapons?

Humanitarian Consequences Highlighted by Doctors

00:02:10
Speaker
I think that the medical community and also more broadly the global health community is really vital to this conversation, both in terms of the roles that have historically been played by these communities in weapons bans. When you look at the bans on chemical weapons, biological weapons, nuclear weapons, or even conventional weapons like land mines,
00:02:32
Speaker
The medical community was really instrumental in demonstrating and highlighting the humanitarian consequences of these weapons being used. In this particular class of weapons, because here we're talking about prevention, there isn't necessarily something to point to that's already happened to highlight the consequences of the weapons.
00:02:52
Speaker
But there certainly is a lot that we have learned from our previous advocacy work against weapons of mass destruction specifically, a class in which I would include lethal autonomous weapons, in highlighting why they should be banned.
00:03:07
Speaker
I think what is even more interesting in this particular conversation is the role that the global health community is playing right now in making really important moral and ethical decisions around how AI should be used when it comes to applications that affect human health and well-being.
00:03:24
Speaker
So AI applications in medicine are one of the most rapid areas of adoption and highest areas of interest, and this past year the AMA, the American Medical Association, actually came out with a policy on augmented intelligence.
00:03:39
Speaker
And so they purposefully coined the term augmented to talk about AI applications in medicine to really reflect that there is an irreplaceable role of human judgment when it comes to decision making that has profound consequences for the treatment and the prevention of harm in this case. So they talk a lot about how AI will work with physicians and not in replacement of clinician judgment, especially when it comes to life and death decisions.
00:04:08
Speaker
And so for me, it struck me as quite absurd that on one hand, the medical community has all of these policies around weapons bans, has done advocacy in banning weapons, and is also having this conversation about how AI must be used with a physician in the treatment and prevention of harm. And then on the other side, we have this conversation happening around lethal autonomous weapons, which is essentially removing human judgment in the decision to enact lethal harm.
00:04:37
Speaker
And so for me, the disparity between those two conversations was just something that was so marked, and it really highlighted why the medical community needs to get involved in this conversation urgently.

Raising Awareness Among Medical Professionals

00:04:50
Speaker
I think that's a really, really nice point about how you have the medical community saying, look at all these ways we can use AI to save lives. And up until now, being fairly silent on the idea of using AI to take lives. And so I don't know if you want to go any more deeply into that before I move on or if that covers it.
00:05:09
Speaker
Surely. I think that part of the reason that those two conversations haven't connected yet up until this point is when you talk to people from the medical community about the issue of lethal autonomous weapons, they get deeply concerned very, very quickly. And what I found just in my sort of nascent stage working in this area is that it's really an issue of awareness that this conversation is even happening around lethal autonomous weapons.
00:05:36
Speaker
In my conversations with people from the medical community, once you really kind of break down this conversation that's happening, they do feel a very strong desire and urge and willingness to get involved. But I think the barrier to the connection between those two conversations up until this point has been awareness.
00:05:55
Speaker
And I think that's natural from the perspective that AI is coming into medicine. So as a medical community, you really think of how to process that technology and how it shapes your field. You're a little bit less connected to other uses and applications of AI. And even though that's
00:06:10
Speaker
completely understandable, a lot of these moral and ethical dilemmas are conserved across different applications. And so here we're seeing one, which is this question about should AI be involved in the decision about the future of a human life? And on one hand, it's in saving lives. The other hand, it's taking them. But that central sort of ethical principle is conserved, whether you're talking about medicine or about weapons.
00:06:34
Speaker
So you've got this editorial published in the BMJ, and you have just launched an open letter for the global health community. And actually, before we get into any more details about the work you've done, you were sort of broadly talking about the medical community, the global health community. Can you just mention quickly what that encapsulates? Who is considered to be part of those communities?
00:06:57
Speaker
Yeah, so I think that it's a broad definition and that's kind of why I keep it loose. But the way I like to think of it is anyone that is operating under the principle do no harm. So if your first principle is do no harm, then to me, you're in the global health community.
00:07:12
Speaker
That includes physicians and nurses and healthcare workers, but it's also people who are public health professionals or people who are social workers and are working on determinants of health and wellness. And on the flip side, it also includes the scientists and technologists that are developing new healthcare technologies, so these could be biomedical researchers. So I see that as a fairly broad umbrella, but the thing that they all share is the commitment to improve health and to do no harm.
00:07:42
Speaker
Okay, excellent. Can you talk a bit about what you've done so far? What's in the editorial? How can people find it? We'll link to that in the podcast as well. And what's going on with the open letter? How can people get involved?
00:07:55
Speaker
So our editorial, which recently came out, is called "Lethal Autonomous Weapons: It's Not Too Late to Stop This New and Potentially Catastrophic Force." And we chose to publish this in a medical journal both because of the points we've discussed before about why it's so important for the medical community to get involved in this discussion, but also to try and bridge that gap of awareness between the conversations that the healthcare community has been struggling with on sort of
00:08:22
Speaker
AI and how we use it and their previous work on disarmament issues and weapons issues. And then this conversation that's happening around lethal autonomous weapons. So we thought that by engaging a general global health audience, it would be a great tool to help both increase awareness and also engage the community on
00:08:42
Speaker
action on this issue. In order to sort of bridge that education with action in parallel, we have launched an open letter from the global health community calling for a ban on lethal autonomous weapons.

Prevention of Autonomous Weapons Development

00:08:56
Speaker
This is something that if people read this article, learn about the issue in other ways and want to lend their voice to the cause, they can sign. We've already been
00:09:05
Speaker
really excitingly had some great signatories come on board, including, at the organizational level, Physicians for Social Responsibility, which is a U.S.-based organization and also the U.S. arm of International Physicians for the Prevention of Nuclear War, an organization that played a vital role in the work that was done on nuclear disarmament, ultimately leading to the nuclear ban treaty.
00:09:34
Speaker
I think another key area that I don't think we've talked as much about yet is this idea that with lethal autonomous weapons, there is a very, very narrow window here for true prevention. And this is another element of something that the medical community does well: obviously we want to prevent disease rather than treat it, because prevention, while it's really hard work to do, if you can do it, is a lot easier than trying to treat a disease.
00:10:00
Speaker
And so the same is true for lethal autonomous weapons. We are in this narrow window where the global community looks like it's shaping up to get into an arms race around this issue, but it hasn't happened yet. And so what we have here is basically
00:10:16
Speaker
maybe a few months to a year where we can really take a stand as a global community on stigmatizing this class of weapons and really preempt an arms race from getting underway, which is why it's so important, whether you're talking about the global health community or other communities, to get involved in this conversation as soon as possible, because this policy window to really start to gain some
00:10:41
Speaker
traction and engage in meaningful prevention on this issue is closing. And as we've seen in other classes of weapons, once you let the genie out of the bottle, it's really, really hard to put it back in. Nuclear weapons are a great illustration of that. Once you have people producing weapons, there's contracts contingent on that. There's a lobby around that. There's a national defense and military interests around it. It's very, very difficult to sort of press undo.

Ethical Concerns: AI Bias and Warfare

00:11:10
Speaker
That's why we believe so strongly we need to act quickly to engage in true prevention here, because in contrast to something like nuclear weapons, you can make these things really cheaply and at scale. And so if we enter in a world where these exist, there's really, at that point, not to sound pessimistic, but I don't see a world where we can make them unexist after we make that decision.
00:11:34
Speaker
And so this is an episode about arguments against lethal autonomous weapons. And I'm curious if you could pick just one argument that you found most convincing or something you heard or read that really pushed you over the edge into wanting to actively work on this to get a ban. What's that argument for you?
00:11:55
Speaker
For me, the argument is really this idea of being able to selectively target groups of people at scale. And that to me is what's terrifying: this idea that we could make essentially weapons of mass destruction that can pick and choose who to target.
00:12:14
Speaker
and just coming from a medical background and understanding how much bias is already ingrained into our profession that we're trying to work to get out, and that's unintentional. And then seeing this parallel conversation of weapons that are going to be able to actually selectively target people based on certain criteria, whether it be using facial recognition techniques or where they live, what communities they're in, that's absolutely terrifying.
00:12:44
Speaker
When I think of every human being on this planet, that could be any of us. What resonates with me the most in terms of why do I care so much about this issue is the idea that we would fundamentally dehumanize warfare.
00:13:01
Speaker
When looking at how AI has been used in medicine and how much caution has been used, how much diligence and thinking is happening right now around making sure that these systems are used ethically and fairly in treating disease and preventing disease and making sure that harm doesn't come to people.
00:13:21
Speaker
And then on the other side, there's this reckless conversation that we should take human decision making out of the loop in the decision to enact lethal harm, which is the most grave and important decision that anyone could ever make.
00:13:37
Speaker
So to me, it's both morally really disturbing that that conversation is even happening and that we're even sitting here and having to have this discussion. And then it also on a longer term scale makes me really concerned for what our future with AI is going to look like because it is arguably the most powerful technology we'll ever create.

Global Agreement for Positive AI Future

00:13:59
Speaker
And if we can't get together as a global community, as a healthcare community, as a community of citizens,
00:14:06
Speaker
and say that we should not let machines autonomously decide who lives and who dies, I'm not sure how optimistic I am about all the other uses of AI, whether it be in my field in medicine, whether it be in self-driving cars, whether it be the justice system. I just don't know how we get ourselves into a positive future with this technology if we can't all agree on this simple principle, which is we shouldn't cede the decision of who lives and who dies to algorithms.
00:14:35
Speaker
The urgency of this issue cannot be stressed enough. In many ways, it's demoralizing that we have to have this conversation, but it's optimistic in the sense that we do have an opportunity to prevent this from coming to pass. And in many humanitarian issues, and maybe in many issues in thinking about our future,
00:14:56
Speaker
we have to learn from mistakes. And here, we both can't afford to learn from mistakes, because once there's a mistake, there's kind of no going back. But I think that the global community has to get together and start to act on this in one way or another to ensure that these weapons never come to pass. And we have to not take this opportunity for true prevention for granted, because if we don't act now, there may never be another opportunity to act again.
00:15:25
Speaker
When you say now and when you say quickly, what sort of timeline are you hoping for? I'm speaking about something happening this year to start to generate stigma around this issue. We need to, by the end of this year, have agreed as a global community that this should never happen, that lethal autonomous weapons are immoral and that they should never come to pass.
00:15:47
Speaker
I think that if we don't do something by the end of this year, the opportunity for prevention may be over, because we are at this very fine moment where many countries could go either way on this issue. No one has yet said we're sort of laying the foundation here, and this is what the future of warfare looks like. But I don't think that will be the case if you and I are having this conversation next year at the same time. The pace of artificial intelligence
00:16:14
Speaker
development and robotics development is happening at, it's safe to say, an exponential scale. And so I really do think this is a unique but brief moment that we have to act here.
00:16:26
Speaker
All right, maybe we should just go ahead and talk a little bit about some of the arguments that we're looking at at FLI to help make it clear why we're opposed to lethal autonomous weapons.

Dystopian AI Weapon Scenarios

00:16:41
Speaker
And so, Amelia, you've just started doing some work with focus groups to understand what resonates with people in terms of recognizing why lethal autonomous weapons could be so devastating.
00:16:56
Speaker
And I want to go through some of those arguments now on the podcast. And then as I mentioned earlier, I also want to give people a chance to give their own feedback and their own perspectives, which again, I'll explain at the end of the podcast about how listeners can respond and be part of the conversation.
00:17:15
Speaker
So first up, there's quite a few arguments. And I'm not really quite sure the best way I want to get into this, short of just going through it. And I don't know that we need to go through all of them. But there's a couple here that I think are incredibly important.
00:17:31
Speaker
Now, I'm going to actually start with the dystopian uses. For me, that's one of the arguments that resonates most strongly on a personal level. And so we've described this as: lethal autonomous weapons could be sized and priced like smartphones, and lethal drones with GPS and face recognition would essentially democratize this idea of risk-free, anonymous assassination.
00:17:59
Speaker
That is, anyone can get their hands on these weapons, or they can get their hands on software and download it onto what was previously a non-lethal drone, and they can use it to kill anyone. No one can defend themselves against this. Everyone seems to think they can defend themselves against it, and you just can't. It's like a hacker. In order to successfully hack someone, you only have to be successful once, whereas in order to defend against a hacker, you have to defend against every single one.
00:18:28
Speaker
And then there's also this idea that the technology is constantly changing, and so you only have to be up to date on your attacking software and make modifications to that in order for it to be effective, whereas whoever is trying to defend against this has to be up to date on every possible way the technology could change, and it's just not possible.
00:18:50
Speaker
This is one of the ones that concerns me most. So we go on to say that it could enable political murder, ethnic cleansing, and acts that even loyal soldiers may refuse. That is, a human with judgment and values and morals might at some point say, no, what you're asking me to do is wrong, whereas a weapon that has been programmed will do whatever it is programmed to do, even if that's just something so awful no human could imagine doing it. That's the dystopian uses argument
00:19:20
Speaker
And that's one of the ones that has been most terrifying to me. So Amelia, if you want to jump in and either give feedback on that or if there's another one that you think is really important to highlight in this document. I agree that argument also resonates with me deeply as well. And I think it touches a little bit on another argument that we had that is listed here, which is this idea of being an arms race endpoint.
00:19:46
Speaker
So what we talk about is that lethal autonomous weapons will essentially trigger a global arms race where they'll become mass-produced, cheap,
00:19:56
Speaker
and ubiquitous since unlike nuclear weapons, they really don't require any raw materials that are hard to obtain or expensive or exotic. And because of this, they'll inevitably appear on the black market and they'll get into the hands of people that should not have them like criminals, terrorists, warlords, or even, you know, companies that may or may not have great intentions. And so this comes to the point that you are
00:20:22
Speaker
talking about Ariel, this concept of ubiquity that these could be everywhere and come into anyone's hands. The idea that because they are sort of small and cheap and easily mass-producible, that they will stay in the hands of the military and limited to very specific use cases is just unrealistic.
00:20:41
Speaker
Data has shown that once you bring a class of weapons into existence, and especially ones that are small and cheap, they pretty easily fall into the hands of people who were not the intended users of them, and so that is another point there that can be pretty terrifying.
00:20:58
Speaker
We have these really, really terrifying ways in which these weapons can be used.

Aligning Global AI Values

00:21:04
Speaker
And I think something that the AI community might especially be able to relate to is this idea that if we can't even agree that AI should not be weaponized, that it should not be used to kill people, how are we going to find global agreement on any of the other really challenging ethical and moral issues that is currently facing the AI community?
00:21:27
Speaker
We talk a lot about this issue of AI value alignment or the value alignment problem. How can we design artificial intelligence such that a program being used by someone in the US can be used equally by someone in another culture where their values are different?
00:21:45
Speaker
How can we design AI that reflects all these different cultures globally that no one's being biased or discriminated against? And this is a really, really hard problem in artificial intelligence. And it's something that most of the community agrees we need to address.
00:22:01
Speaker
We need to try to eliminate bias. We need to try to eliminate discrimination. We need to try to develop AI that behaves fairly and ethically. And if we can't even agree that AI shouldn't kill people, it seems unclear how we can ever reach agreement on all of these other issues.
00:22:21
Speaker
Yeah, completely, Ariel. And I think that that touches on the issue, too, of how do we steer our collective shift towards a future with AI that is a positive one? Because there is a tremendous amount of benefit we can have for society from AI, whether you're talking about the health care arena, lifting people out of poverty, curing diseases, making things more efficient, uncovering fundamental insights in science that before eluded us.
00:22:51
Speaker
And so if we really want to realize all of those benefits, we have to stigmatize the unacceptable uses of the technology, and if we don't do that, it'll create a backlash that is incredibly powerful and will pretty much prevent all of those wonderful things that we could realize from ever coming to pass.
00:23:11
Speaker
I agree completely. I think we've touched on some of the arguments that I feel most strongly about, and since I'm the host, I get that choice. But I do want to touch on some of these other arguments that are also extraordinarily concerning.
00:23:27
Speaker
Some of the things that we have listed here, there's the risk that with lethal autonomous weapons, because whoever launches an attack doesn't risk their own soldiers, it becomes much easier to have a war against some other country. It's much easier to attack another country or another group because the risk to yourself is much less. And so we run this risk of more wars and less diplomacy.
00:23:49
Speaker
There's a really big issue with loss of control, issues related to AI not always doing what we expect, the risk that any AI weapon could be hacked. But there's also this idea that humans simply can't keep up with the speed with which these weapons are used. It's physically impossible. We can't keep up with the speed with which the weapons process information and suggest options for the human to take. We can't keep up with attacks.
00:24:18
Speaker
We simply cannot keep up with the speed of technology. And I find especially concerning this idea of human-weapon teams, where you have a human and the weapon working together, which sounds nice in theory. But I think even in that situation,
00:24:33
Speaker
We need to be really, really careful that we're ensuring humans actually have meaningful control over the situation. As you mentioned, Amelia, there's a risk that the weapons could be proliferated more easily. These are essentially weapons of mass destruction. They can very easily be scaled up. As we've seen with technology, it does scale. In fact, I think, Amelia, you're the one who said that to me in the past.
00:24:55
Speaker
technology scales up, it gets smaller, and we have a lot more of it. There's no reason to expect that weapons wouldn't be the same.

Ethical Concerns on AI Targeting

00:25:02
Speaker
And then there's the issue that with things like facial recognition and other data analysis, it will be much easier for weapons to selectively target groups of people based on ideology, ethnicity, social media preferences, anything. So those are a few things. Did you want to respond or list some others?
00:25:22
Speaker
No, I think that was really well said. I think you've actually really covered it all. I mean, there's always more things to cover, but in terms of the things that are the core tenets to this conversation that I personally find really compelling and the people that we've spoken to so far find really compelling, you've covered those very eloquently. So thank you for that.
00:25:45
Speaker
Honestly, I think one of the most amazing things to me is there's so many arguments against these weapons that I find it a little bit mind boggling that people are still for them. When we started this initiative and going through figuring out what are all the different ways you could talk about why it's important to ban lethal autonomous weapons, we had over a hundred. So this is the distilled version of that. But as you say, there are many ways to describe this. By our accounts, there's over a hundred of them.

Call to Action Against Autonomous Weapons

00:26:13
Speaker
We're going to have links to the complete list of the arguments that we've put together against lethal autonomous weapons. We'll link to that from the podcast. Amelia, is there anything else that you wanted to add?
00:26:25
Speaker
No, I am all set. Thank you very much, Ariel. I really appreciate you having me on to discuss this pressing issue and looking forward to all working together to make some action happen. Well, thank you so much for joining us. We're excited about the work that you're doing and we're especially excited to have the medical community and the global health community joining. I think it's a really important initiative that you've taken on. So thank you.
00:26:51
Speaker
Thank you so much, Ariel. And it's an honor and privilege to work with all of you on this.

Conclusion and Audience Engagement

00:26:56
Speaker
I'm really excited for what's ahead and hopefully we can get the ball rolling on a ban on this class of weapons. All right. So I've got two more guests with me here at the UN. They're actually in person. I'm going to just go ahead and let them introduce themselves. And then we'll get into some more arguments about why lethal autonomous weapons are a terrible idea.
00:27:19
Speaker
I'm Ray Acheson, Director of the Disarmament Program of the Women's International League for Peace and Freedom. Hi, I'm Rasha Abdul Rahim. I'm Deputy Director of Amnesty Tech, which is part of Amnesty International. All right. Well, thank you both so much for joining us. I wanted to bring both of you on because you two especially focus on issues that are a little bit different than the standard arguments that at least I hear at FLI about why lethal autonomous weapons are an issue.
00:27:49
Speaker
So Ray, since we had you introduce yourself first, we'll start with you. You look at it from an equality issue, from a gender issue, an ethnicity issue, if you could sort of just talk about the way that you've been looking at this.
00:28:03
Speaker
So first of all, to say, of course, that as a member of the Campaign to Stop Killer Robots, WILPF is also in agreement with all of the other moral, ethical, technical, legal arguments that you've probably discussed in opposition to autonomous weapons. But in addition to some of those arguments, WILPF has also been looking at the problems that are likely to arise in
00:28:24
Speaker
programming of autonomous weapons systems from the perspective of bias. So some autonomous weapons systems, if they are developed, will likely be programmed with algorithms that help in the finding and fixing and selection of targets. And what we're concerned about is the ways in which these programs are likely to be embedded with human biases from the get-go. So this could include things like targeting people on the basis of their skin color.
00:28:54
Speaker
or on the basis of religious dress, or on the basis of sex. For example, right now drone strikes are often targeted against military-aged men, which is actually a form of gender-based violence because it's categorizing all men as militants.
00:29:10
Speaker
We're also worried about targeting on the basis of sexual or gender identity. One of the things that we've seen arise since 9-11 in surveillance has been surveillance of the trans community and misidentification of trans people as terrorists from this notion that a man dressed as a woman, whatever that means.
00:29:30
Speaker
will signify some sort of terrorist trying to obscure their identity. We're also concerned about disabilities being targeted as well. So there's been some concern from people on the autism spectrum that current and predicted facial recognition software will not be able to understand their facial expressions, or that they won't react to situations in the same way out on the street.
00:29:53
Speaker
So we have a lot of concerns about these weapons being used deliberately or by accident to target certain communities along various identities and abilities. All right, excellent. Well, not excellent, but thank you. That was good. I actually personally had not heard the issue with the trans targeting before and that was upsetting.
00:30:16
Speaker
But Rasha, we'll move to you because we've only got a little bit of time for you. I want to make sure you guys can get back to the meeting. So all of the discussion that we hear is military based, but you're actually looking at this from sort of a police perspective and how it can be used against civilians. So I'll let you explain better. Yeah, for sure. And I'll also preface this by saying, similar to what Ray was saying, that Amnesty is part of the Campaign to Stop Killer Robots. And we agree with all of the other issues that have been raised by the campaign.
00:30:42
Speaker
Ethical, security, and IHL issues. But yeah, I mean, I think the fact that human rights isn't really spoken about is related to the fact that the CCW itself, its mandate covers only conflict situations. However, I think it would be disingenuous to think that these systems would only be used in conflict situations. And in fact, we've been
00:31:03
Speaker
looking at semi-autonomous systems that already exist. So they're not fully autonomous weapons systems, but they are semi-autonomous, remote-controlled air and ground vehicles, in some cases sea-based vehicles, that have been developed for specific use in law enforcement or domestic situations. And we see that there's a danger that these could become fully autonomous in the future and that they pose certain human rights challenges.
00:31:28
Speaker
And I can run through some of the main human rights implications that we see. And first, it's related to the right to life. So this is a fundamental human right protected under Article 6 of the International Covenant on Civil and Political Rights, or ICCPR. And basically under human rights law, killing is sanctioned only if there's an imminent threat to life or serious injury. And so that's a much higher threshold than under international humanitarian law.
00:31:54
Speaker
And so these systems, if they don't have any meaningful human control, could accidentally or deliberately kill people unlawfully. And so that's why we believe that fully autonomous weapons systems must be banned and that meaningful human control must be retained over weapons systems and the use of force. So that's one. The second is the right to freedom of assembly. Basically people going out to protest in the streets.
00:32:17
Speaker
We've seen systems that have been specifically developed for crowd control and for crowd dispersal. Again, semi-autonomous systems. There's a system that's been manufactured by a Spanish company called Riotbot, which is designed specifically to go out and police protesters. There's another one called Skunk Riotbot, which is a drone which is equipped with less lethal projectiles like pepper balls. And again, that's designed for crowd control.
00:32:45
Speaker
There are reports that this was exported to India a few years ago and that it has been used in India. I say reports because this isn't something that Amnesty has independently verified. But, you know, another example is last year during the Gaza return protests, the Israeli forces equipped drones with tear gas, which were then used to go over the Gaza border fence and shoot at protesters.
00:33:08
Speaker
And so, you know, that's something where they didn't use a specific law enforcement drone, they just equipped an existing drone that they have with tear gas. So it shows you, one, how easy it is to do, and two, how we're seeing a kind of trend in these kinds of uses. And so yes, these are semi-autonomous, but there's nothing to say that fully autonomous systems wouldn't be developed or used in such situations. And then I just wanted to touch on what Ray was saying as well. Another danger we see is in the right to privacy.
00:33:32
Speaker
Because obviously, in order to power these systems, in order to train the algorithms, a big collection of data would need to happen. And so that's going to have an implication on people's right to privacy. And it's also going to have an implication on people's right to equality and non-discrimination. So what Ray was talking about to do with algorithmic bias, with discriminatory decision-making,
00:33:51
Speaker
as we've seen with predictive policing systems and facial recognition technologies. And all these components will comprise fully autonomous weapons systems. You'll have the data collection, you'll have the algorithm, you'll have the facial recognition. And so there are massive implications there on people's data being hoovered up and how decisions are going to be made by potentially untransparent and unaccountable algorithms.
00:34:13
Speaker
And finally, just to touch on international policing standards, there are also international policing standards that apply during law enforcement situations. So the use of force has to be proportionate, it has to be legitimate, so it has to have a legal basis, it has to be accountable. So there must be somebody who is held accountable for misuse of systems or unlawful application of force, and it needs to be necessary. And without getting into the details of whether or not the technology would or would not be able to comply with these rules, which we believe they wouldn't,
00:34:42
Speaker
Just because, for example, under law enforcement situations, the use of force has to be graduated. So you can't resort to lethal force as a first resort. It always has to be a last resort. So what does that mean? That means, as a police officer, I need to use different techniques to try and manage the situation or to neutralize the threat. That means I need to negotiate with the person who's posing the threat. I need to use de-escalation methods. I need to use different levels of force. So not necessarily shoot live ammunition at somebody,
00:35:11
Speaker
you know, maybe use a baton, maybe use tear gas or something, depending on what the situation is. And that kind of graduated escalation or graduated use of force is very difficult for a machine to be able to carry out, because machines work better in binaries. And so even if, by a miracle, there were a system that was able to make those kinds of judgments, we still believe that there's this fundamental right to human dignity that would still be undercut through the use of these systems. And the
00:35:38
Speaker
right to human dignity is a fundamental right in human rights law, and it basically means that humans appreciate the value of life. If fully autonomous weapon systems are used, machines wouldn't be able to fully appreciate the value of life but still would be able to make a decision over life and death, and that's unacceptable, ethically but also in relation to the right to human dignity. And I'll leave it there.
00:35:59
Speaker
Thanks, Rasha. I was just thinking when you were speaking too about some of the other examples of repression of activism in situations where we can imagine, based on the use of surveillance technology, the use of drones in non-conflict situations, where we could see lethal autonomous weapons systems, fully autonomous weapons systems being deployed. So if we think about, say, the water protectors at Standing Rock in those situations, these types of weapons
00:36:23
Speaker
targeting Indigenous First Nations communities that are working there or if we think of Black Lives Matter in the United States as well and targeting of young Black men in particular and other communities of color, of course. If we think of in Central and South America with environmental and human rights defenders and repression by police and by paramilitary, we can imagine these situations where there really is this intersection, I think, of race, class,
00:36:51
Speaker
and activism that really comes into play where we can see a lot of damage being done if these weapons systems are developed and deployed. So that's definitely something WILPF was also concerned about and trying to articulate an intersectional feminist approach to these arguments in the campaign's work. Excellent. I don't know if this is quite a direct connection. One of the things that I've seen that's worried me is with the US response to the caravan of people trying to escape Central America.
00:37:19
Speaker
you know, a lot of that is at least loosely connected to climate issues as well. And as we see climate change continue to destroy locations, you're going to have a lot more refugees and I really worry about what the impact on border control would be. Yeah, I mean, I think that that's another area that we're concerned about is the increasing use of technologies to manage borders. It's not necessarily bad in and of itself, and technology can play a useful role. But for example, in the EU, there's a process through which AI is being used
00:37:49
Speaker
to interview people trying to cross borders and trying to assess, through their responses to questions and their facial expressions, whether or not they're telling the truth. That's very different to a fully autonomous weapon system. But you can see how these technologies are slowly being introduced into these kinds of situations. And like, could we have a scenario in which fully autonomous systems can go around and pick up people's faces using facial recognition and
00:38:17
Speaker
apply force against them if they are trying to cross the border. I mean, we have sentry weapons on the North Korea-South Korea border, and we have the Guardium, which is an Israeli system used to monitor the border there. But, you know, could we see the proliferation of the use of these systems outside of very specific contexts like North Korea-South Korea, Israel-Palestine? And also, I'd say that if we didn't manage to get a ban, but I firmly believe that we will, because that's the only effective option to deal with this problem.
00:38:45
Speaker
But if we didn't and they were used in conflict situations, they will be used in policing situations. And I know that some people have said, I just can't believe that these will be used in policing because they have a very specific role to play in conflict. But in policing, I just don't see it. I don't buy it. But if you look at drones, for example, the primary mission when they were first developed was surveillance and reconnaissance. And now they've been armed and they've been used in conflict situations, but also outside of conflict situations. There's nothing to say that the same wouldn't happen with fully autonomous systems.
00:39:15
Speaker
Absolutely, especially when you have major US cities that purchase military equipment as well. I mean, there's a direct military to law enforcement line for a lot of equipment in the United States and probably in other countries as well.
00:39:28
Speaker
All right, so if you guys have a couple more minutes, I want to ask you one more question. You've both gotten into some of the issues that you've been looking into. There's tons of other arguments about why lethal autonomous weapons are a terrible idea. And I was just sort of curious, I don't know if this applies most to when you were first getting into this issue or arguments that you've heard since, but what arguments against lethal autonomous weapons most personally resonates with you?
00:39:51
Speaker
That's a good question. I think what I've just talked about is probably one of the things that most resonates with me. But in addition to that, I would say it's this argument about human dignity. It's the idea that a machine taking a "decision," in quotes, to kill a human being
00:40:08
Speaker
is morally disturbing. It crosses a line, I think, and it really sends us down the worst possible path as a human species that we would go so far as to completely automate or mechanize violence. I mean, we already do so much violence to each other. There's already so much war, so much repression. And I just think this technology takes us almost past the point of no return.
00:40:34
Speaker
I really feel like we're at a crossroads with fully autonomous weapons systems: that if we do go down that path, we're really, really going to a dark place as humanity, and we should avoid that at all costs. Yeah, I mean, at the risk of sounding lazy, I completely agree with that. I think the human dignity and the ethical considerations are super important. And often they're thought of as a kind of addendum or an afterthought, but I think actually they're absolutely essential to the opposition
00:41:02
Speaker
that we have to these systems and in addition to that I'd add that there are also really serious security concerns related to the development of these systems, the use, the proliferation, the potential for these systems to be spoofed, to be jammed, to be manipulated, to be hacked in ways that we can't control and in ways that would
00:41:21
Speaker
be counterproductive to their objective. So a state can deploy them, but then they can just as easily be deployed against the state that deployed them. So I think that's also a really serious issue and a risk, not just to us, but internationally to everyone in terms of state use, but also of these systems being used by non-state actors. All right. Well, so those are my questions. Is there anything else you wanted to add? No, just thank you. Yeah. Thanks very much. Well, thank you so much for joining us. I really appreciated the different perspectives. It was good and helpful.
00:41:54
Speaker
I'm here with my final guest, Bonnie, and I'm going to pass this over to Bonnie and let her introduce herself. My name is Bonnie Docherty. I'm a senior researcher at Human Rights Watch, as well as an associate director of armed conflict and civilian protection at Harvard Law School's Human Rights Clinic.
00:42:09
Speaker
Excellent. Well, thank you for joining us. We've been talking with others about arguments against lethal autonomous weapons and why we need to ban lethal autonomous weapons. So I wanted you to come on and talk a bit because you have a lot of experience with the legal aspect of this. We were listening today and they were talking about international humanitarian law and the Martens Clause and the Geneva Conventions and lots of other fun legalese terms.
00:42:38
Speaker
I have a specific question that we'll get to here in a minute, but first maybe you could just give a quick overview of what the legal situation surrounding lethal autonomous weapons is. So the fully autonomous weapons, or lethal autonomous weapons, whatever term you prefer to use, raise a number of concerns under international law, particularly with international humanitarian law, the law of war.
00:43:00
Speaker
There are serious questions about whether they could distinguish between soldiers and civilians, or whether they could balance and ensure that civilian harm does not outweigh military advantage. There's also a provision in humanitarian law called the Martens Clause, which sets a moral baseline under the law. And we're concerned that fully autonomous weapons would not pass that moral test. And there are also concerns about accountability: if a lethal autonomous weapon did commit a war crime or unlawful act, no human would be held responsible.
00:43:27
Speaker
Okay, great. Thank you. One of the concerns that we've heard talking to people, and this is sort of from talking to the lay public that's being introduced to the issue of lethal autonomous weapons. One of the concerns that's been expressed is this idea that if we're worried that lethal autonomous weapons are already in violation of existing laws, why do we also need a ban in addition to that?
00:43:50
Speaker
That's a very good question. We've heard over the past decades the repeated refrain that existing IHL is adequate to deal with whatever weapon we're dealing with. And we're hearing that again in the fully autonomous weapons context. And we feel strongly that there needs to be new international law. Clearly, these weapons raise concerns under international humanitarian law and international human rights law.
00:44:10
Speaker
But there are still matters of interpretation to a degree. Law is always interpreted, and if you had a new treaty specifically dealing with this issue, then it would increase the clarity and strengthen the protections. It would also expand the scope. For example, international humanitarian law generally focuses on use, and we'd like to see development, production, and use of these weapons banned before the genie gets out of the bottle.
00:44:31
Speaker
And there's also the question of once you get that clarity, making sure it sets a stronger norm. It's binding on the countries that join the treaty, but it also influences countries that do not join the treaty because it stigmatizes the weapon more by having it specifically prohibited under the law. And so the last question that I want to ask you is.
00:44:49
Speaker
You've been working on the issue of lethal autonomous weapons for a while. I believe you've been researching a lot of other weapons issues. When it comes to lethal autonomous weapons, not necessarily taking into account the legal aspect, but everything about them that you've learned, what has resonated with you most as an argument against developing them? What worries you the most?
00:45:10
Speaker
Well, as you know, the lethal autonomous weapons raise a whole host of concerns, legal, moral, security, technological, et cetera. And it's different than the many other weapons I've dealt with, which in some ways usually have been focused more on legal and humanitarian concerns. And these have a much broader range of concerns, in part because they don't exist yet. So they are a little more abstract. And I think while in conversations that I've found people latch on to different ones of these concerns, for me, what's the most compelling is the cumulative effect.
00:45:36
Speaker
that even if a technological fix could resolve the legal concerns, you still have moral problems with the threat to human dignity. Even if you can resolve that, there's still accountability problems and so forth. So to me, it's the package that I find particularly compelling rather than one particular issue.
00:45:51
Speaker
Yeah, I like that a lot because there's definitely arguments for me that I find much more compelling than others. But one of the most disconcerting things I see is just how long the list is for why we don't want these weapons to be developed. I think it's important to remember because fortunately, these weapons don't exist. So we don't have human victims yet. And that's what we want to prevent. When you have human victims, it's more straightforward why you need to ban a certain weapon. In this case, it's less straightforward. But
00:46:18
Speaker
Because of the host of problems, it shows why we need to take a precautionary approach and act now rather than wait and see what the technology will bring. All right. Is there anything else you'd like to add?
00:46:29
Speaker
I think one of the things I'd mention that comes up on the legal side, but as well as in all these other issues, is the importance of human control as sort of a foundational element of this whole debate. Whether you're banning weapons without human control or requiring human control over the use of force, they amount to the same thing. But that element of meaningful human control would address basically all of the concerns we have: the legal, the moral, the accountability, and so forth. So I think it's really important for listeners to keep that in mind, that that is sort of at the core of this whole issue.
00:46:56
Speaker
meaningful human control? Yes. Do we have a definition for that? Meaningful human control isn't a term that's been defined yet. And there's a general convergence of views that human control over the use of force is crucial. States call it different things: meaningful control, effective control, judgment. They have different terms. And I think that's something that can be sorted out at the negotiating table. Right now, I think there's no strict legal definition, but criteria that would be important include giving an operator enough information on which to base a decision about whether to use a weapon,
00:47:25
Speaker
predictability and reliability, and having temporal constraints to ensure that the machine is not operating too far away in time or space from the human that is deploying it. Certain elements are starting to emerge, but that's why we have negotiations and why we need to move on from these general conversations to specific discussions of texts. All right. Excellent. Well, thank you so much. Thank you for having me.
00:47:53
Speaker
So we have now heard quite a few arguments from many people about why they think lethal autonomous weapons are a bad idea and should be banned. And now it's your turn. We want to hear from you. What arguments against lethal autonomous weapons have you heard that resonate with you the most? Why do you think lethal autonomous weapons should be banned?
00:48:14
Speaker
We'll have a comment section on the podcast page, which you can find at futureoflife.org slash whyban. And there you can also find a list of many of the arguments, links to more arguments, and a link to the transcript of this podcast. So you'll have plenty of resources if you need reminders about what we talked about today. You can also leave us comments on Facebook, Instagram, or Twitter using the hashtag Lethal Autonomous Weapons.
00:48:40
Speaker
This is a conversation that needs to happen, and we need lots of voices, so please take a few minutes to let us know what you think. I hope you enjoyed this episode, and as always, if you did enjoy it, please take a moment to like it, share it, and maybe even leave a good review. And I'll be back again next month with more conversations with experts.