Introduction to AI's Legal and Ethical Issues
00:00:03
Speaker
from the FLI audio files. I'm Ariel Conn with the Future of Life Institute. Today, I'm joined by Matt Scherer and Ryan Jenkins to discuss some of the legal and ethical issues facing AI, especially regarding things like autonomous weapons and self-driving cars. Matt is an attorney and legal scholar based in Portland, Oregon, whose scholarship focuses on the intersection between law and artificial intelligence.
00:00:32
Speaker
He maintains an occasional blog on the subject at lawandai.com, and he also practices employment law at Buchanan Angeli Altschul & Sullivan LLP. Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and Emerging Sciences Group at California Polytechnic State University in San Luis Obispo, California. He studies the ethics of technologies like cyberwar, autonomous weapons, driverless cars, and algorithms. Matt and Ryan, thank you both for joining me today.
00:01:01
Speaker
Yeah, thank you. Glad to be here. Great.
The Relationship Between Ethics and Law in AI
00:01:05
Speaker
So I typically think of ethics as the driving force behind law, which may or may not be true. But as such, I wanted to start with you, Ryan. I was hoping you could talk a little bit about some of the big ethical issues you think are facing us today when it comes to artificial intelligence. Yeah, I think that the relationship between ethics and law is complicated. And I think that
00:01:27
Speaker
The missions of the two of them run in parallel. And I think very broadly speaking, the mission of both ethics and law might be to discover how to best structure life within a community and to see to it that that community does flourish once we know those certain truths. So ethics does some of the sort of investigation about what kinds of things matter morally, what kinds of lives are valuable.
00:01:54
Speaker
how should we treat other people, and how should we ourselves live? And law does an excellent job of codifying those things and enforcing those things. But there's really an interplay between the two. I think that we see law replying to or responding to ethical arguments, and we see ethicists certainly prodded on in their mission by some of the things that lawyers and legal scholars say too. So it's a sort of give and take. It's a reciprocal relationship.
AI Decision-Making and Bias in Algorithms
00:02:21
Speaker
In terms of artificial intelligence,
00:02:24
Speaker
Well, I think that we're undergoing a pretty significant shift in a lot of different areas of society, and we're already deploying artificial intelligence in a lot of spheres of human activity that are morally important, spheres that are morally laden, which is to say they have consequences for people's lives that really significantly matter. One of the easiest ways of telling whether a decision is a moral decision
00:02:49
Speaker
is whether it concerns the distribution of benefits and burdens or whether it stands to make some people better off and some people worse off. And we're seeing that taking place right now with artificial intelligence. That adds a lot of new wrinkles to these decisions because oftentimes the decisions of AI are inscrutable. They're opaque to us. They're difficult to understand. They might be totally mysterious. And while we're fascinated by what AI can do,
00:03:17
Speaker
I think oftentimes the developers of AI have gotten out ahead of their skis to borrow a phrase from former vice president Joe Biden and have implemented some of these technologies before we fully understand what they're capable of and what they're actually doing and how they're making decisions. And that seems problematic. That's just one of the reasons why ethicists have been concerned about the development and the deployment of artificial intelligence. And can you give some examples of that? Yeah, absolutely. So.
00:03:46
Speaker
There was an excellent piece by ProPublica, an investigative piece I think last year, about bias in the criminal justice system, where they use so-called risk assessment algorithms to judge, for example, a person's probability of recommitting a crime after they're released from prison. So a couple of companies produce algorithms that take in several data points, over 100 data points.
00:04:11
Speaker
and then spit out an estimate. They literally predict this person's future behavior. It's like something out of Minority Report. And they try to guess, say, after a defendant has been convicted of a crime, how likely that defendant is to commit another crime. And then they turn this information over to the judge, and the judge can incorporate this kind of verdict along with other things, other pieces of information, and their own human judgment and intuition
00:04:39
Speaker
to make a judgment, for example, about how long this person should serve in prison, what kind of prison they should serve in, what their bail should be set at, or what kind of parole they should be subject to, that kind of thing. And ProPublica did an audit of this software and they found that it's actually pretty troublingly unreliable. And not only does it make mistakes about half the time, they said it was slightly better than a coin flip at predicting whether someone would recommit a crime.
00:05:08
Speaker
slightly better than a coin flip, but interestingly and most troublingly, it made different kinds of mistakes when it came to white defendants and black defendants. So it was
00:05:20
Speaker
systematically underestimating the threat from white defendants and systematically overestimating the threat to society from black defendants. Now, what this means is that white defendants were being given more lenient sentences or being let off early. Black defendants as a group were being given harsher sentences or longer sentences. And this is really tremendously worrisome. And when they were asked about this, when the company that produced the algorithm was asked about this, they said, well, look,
00:05:48
Speaker
It takes in something like 137 factors, but race is not one of them. Now, if we just had the artificial intelligence check a box that said, oh, by the way, what's the race of this defendant? That would clearly raise some pretty significant red flags, and that would raise some clear constitutional issues too about equal protection. And Matt, I would defer to you on that kind of stuff, because that's your expertise. But as a matter of fact, the AI didn't ask that kind of question. So it was
00:06:16
Speaker
making mistakes that were systematically biased in a way that was race-based, and it was difficult to explain why it was happening. These are the kinds of problems. This is the kind of opaque decision making that's taking place by artificial intelligence in a lot of different contexts. And when it comes to things like distributing benefits and burdens, when it comes to deciding prison sentences, this is something that we should be taking a really close and careful look at.
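To make the kind of audit described above concrete, here is a minimal illustrative sketch of how one might tally error rates by group for a risk-assessment tool. The data, field names, and numbers below are invented for illustration; this is not ProPublica's actual data or methodology.

```python
# Hypothetical sketch of a group-wise error-rate audit for a risk-assessment tool.
# Each record: (group, predicted_high_risk, actually_reoffended) -- invented toy data.
from collections import defaultdict

records = [
    ("white", False, True),   # false negative: labeled low risk but did reoffend
    ("white", True,  True),
    ("white", False, False),
    ("black", True,  False),  # false positive: labeled high risk but did not reoffend
    ("black", True,  True),
    ("black", True,  False),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, predicted_high, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        if not predicted_high:
            c["fn"] += 1          # threat underestimated
    else:
        c["neg"] += 1
        if predicted_high:
            c["fp"] += 1          # threat overestimated

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else 0.0
    fnr = c["fn"] / c["pos"] if c["pos"] else 0.0
    print(f"{group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

On this toy data, the tool never falsely flags the white group but misses half of its reoffenders, while it falsely flags the black group's non-reoffenders and misses none of its reoffenders, which is the shape of the asymmetry described above: different kinds of mistakes for different groups, even without race as an explicit input.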
Future Ethical Issues of AI
00:06:43
Speaker
Okay, so I want to stick with you for just a minute or two longer. As AI advances and as we're seeing it become even more capable in coming years, what are some of the ethical issues that you anticipate cropping up? Besides the question of transparency versus opacity, the question of whether we can understand and scrutinize and interrogate the way that artificial intelligence is making these decisions, there are some other concerns, I think, with AI.
00:07:12
Speaker
One of these is about the way the benefits of AI will be distributed. So there's been a lot of ink spilled, especially recently, just in the last couple of years about automation and the threat that automation poses to unemployment. And some of the numbers that are being reported here, even by studies coming out of places like Oxford, some of the numbers being reported are quite alarming. They say, for example, as many as 50% of American jobs could be eliminated by automation just in the next couple of decades.
00:07:40
Speaker
Now, even if those estimates are off by an order of magnitude, even if it's merely 5% of jobs, we're still talking about several million people or tens of millions of people being automated out of a job in a very short span. And that's a kind of economic shock that we're not always used to responding to. So it'll be an open question about how society, how the government, how the economy is able to respond to that. And to get to the ethical point,
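To put rough numbers on that, a back-of-the-envelope calculation, assuming a US labor force of roughly 160 million workers (an assumed figure, not one from the conversation):

$$0.05 \times 160\ \text{million} \approx 8\ \text{million jobs}, \qquad 0.50 \times 160\ \text{million} \approx 80\ \text{million jobs}.$$

Either way, the estimate lands in the range of millions to tens of millions of displaced workers.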
00:08:07
Speaker
besides the obvious fact that being unemployed is bad for individuals and that having a lot of unemployed people is bad for society in several ways, it raises more foundational questions, I think. Questions I've been thinking about a bit recently concern, for example, the way that we think about work, the way that we think about wages, the way that we think about people having to, quote unquote, earn a living or, quote unquote, contribute to society.
00:08:32
Speaker
These are moral claims, claims about when someone should be kept alive, basically, the idea that someone needs to work in order to be kept alive. And many of us or most of us walk around with some kind of moral claim like this in our back pocket without fully considering it, I think, and considering the implications. And I think that automation, just to give one more example, is really going to pose some challenges to that.
00:08:56
Speaker
So I think that those are some pretty clear concerns. There are other concerns with specific examples of artificial intelligence in different contexts. So I suppose later today we'll talk about driverless cars or autonomous weapons or quote unquote killer robots. And those raise their own interesting ethical problems.
00:09:15
Speaker
Even farther down the line, if we want to get really wild and outlandish, there are questions about whether artificial intelligences could ever become artificially conscious. And if that's the case, would robots be entitled to the same kinds of legal rights or moral rights that you and I have? That question is a bit more far-fetched and a bit more science fiction. But many people think that that kind of artificial consciousness is something that we might see sometime this century.
AI as a Legal Blank Slate: Challenges and Integration
00:09:44
Speaker
And Matt, going back, yes, I do want to get into autonomous vehicles and weapons and all of that stuff here soon. But first, I wanted to ask you very similar questions. What are some of the big legal issues facing us today when it comes to artificial intelligence? One interesting thing about artificial intelligence in the legal sphere is that it's still largely a blank slate.
00:10:10
Speaker
And we are just now starting to see kind of the first sets of what you might call hard law coming down that relates to artificial intelligence. And that's specifically in the area of autonomous vehicles. Up until now, really, there have been lots of artificial intelligence systems that have been operating, particularly on the internet. But there's really been no government regulation that
00:10:39
Speaker
treats artificial intelligence as in some way different from any other product or technology that has been developed in the past. The law has basically been able to operate under the old assumptions and principles of our legal system. And I think that eventually that's going to become very difficult. And the reason for that is several fold. The first I'd actually say is that
00:11:09
Speaker
simply that machines aren't people. And the way that legal systems across the entire world work is by assigning legal rights and legal responsibilities to people. The assumption is that any sort of decision that has an impact on the life of another person is going to be made by a person.
00:11:33
Speaker
So when you have a machine that is making the decisions rather than humans, one of the fundamental assumptions of our legal system goes away. Right now, it's not that big a deal because in most spheres, we are not delegating very important decisions to AI systems. That's starting to change a little bit, but we seem to be content right now with kind of taking a wait and see approach.
00:12:00
Speaker
I think eventually that's just going to become very difficult because certainly there seems to be the promise of AI disrupting and displacing human decision makers out of a wide variety of sectors and industries. And as that happens, it's going to be much more complicated to come up with lines of legal responsibility and liability.
The Importance of AI Transparency
00:12:22
Speaker
Another major issue that I think is going to come up is one
00:12:26
Speaker
that Ryan not only touched on, but very much highlighted. And that is transparency. And that is already becoming, I think, kind of a critical focus, perhaps the issue on which people who have concerns about or an interest in the safety, ethics, and law related to AI have focused. And transparency is, I think,
00:12:54
Speaker
something that is a natural response. You want transparency for things that you don't understand. That's one reason why I think a lot of people who are interested in this space have kind of focused on transparency as a
00:13:13
Speaker
kind of control method or a way of ensuring the safety of AI without directly regulating it. They're hoping to convince manufacturers and designers of these systems to make them more transparent. And I think that that's a great idea. And I really do think that kind of in the modern information age, transparency is perhaps the best guarantor of safety that we have for technologies. But I don't think that it's a cure-all.
00:13:42
Speaker
One of the issues that we're going to run into is that I don't even think that we can really comprehend at this point what our society is going to be like 50 years from now if a huge number of industries, ranging from medicine to law to financial services to you name it, are in large part being run by the decisions of machines.
00:14:08
Speaker
We just don't know what that society will look like. And at some point, even if the humans understand how these systems work and have a decent understanding of their methods of operation, once we live in a society where critical decisions are routinely made by machines, the question is how much control can humans really say that they still have in that circumstance? And that's going to create all sorts of ethical and legal challenges down the road.
00:14:38
Speaker
So even just with this example that Ryan gave, what happens legally with, say, the defendants who get harsher sentences or something?
AI in Autonomous Vehicles and Weapons
00:14:48
Speaker
Can they sue someone? Do you know what's happened with that? I have not heard specifically about whether there's been legal action taken against the manufacturers, say, of the systems that were involved in that. Obviously, the defendants have the option of individually appealing their sentences and pointing out that
00:15:07
Speaker
These systems are subject to systematic biases and errors. This is one reason why I'm glad to have the opportunity to speak to other people who are working in this space, because this is actually an issue that hadn't been brought to my attention. And right now, no, I don't think that there is any kind of clear line of accountability for the people who designed and operated the AI system.
00:15:34
Speaker
I think that our legal system probably is operating under the assumption that the existing remedies for criminal defendants are sufficient: appeal and, if that fails, habeas corpus and other forms of post-conviction relief that use Latin terms that I won't bore you with. But I don't think that there's any system in place that really addresses, well, we need to not just redress these sorts of biases and errors after the fact for the individual cases, we need to
00:16:04
Speaker
somehow hold someone accountable for allowing these mistakes to happen in the first place. I don't think that we're there right now. Okay, and then I want to go back to what you were talking about earlier: decision making with autonomous technologies. And one of the areas where we see this starting to happen, and likely happening more, is of course with both self-driving cars and autonomous weapons. And
00:16:29
Speaker
Ryan, I know that's an area of focus for you and Matt, I think these are areas where you've also done some research. So I was hoping you could both sort of talk a little bit about what some of the ethical and legal implications are in those two specific spheres. I actually would like to back up a quick second and just comment on something that Ryan said at the very beginning of our conversation. And that's that I completely agree that there's an interplay between law and ethics and that they
00:16:58
Speaker
kind of work in parallel towards the same goal. And I actually would bring in yet another field to explain what the goal of both law and ethics is, and that's to maximize society-wide utility. The idea, I think, behind both law and ethics is how do we make a better society and how do we structure people's behavior in the way that makes everybody most satisfied with being in that society? One reason why I think autonomous vehicles are
00:17:28
Speaker
such a hot topic and such a huge issue is that for the past century, motor vehicles have been by far the dominant way in which people in the industrialized world move around. And with the advent of autonomous systems taking over those functions, you're basically taking the entire transportation system of the world and putting a large amount of the control of it in the hands of
00:17:58
Speaker
autonomous machines. It's fascinating in a way that that's one of the first areas where it seems like AI is going to have its breakthrough moment in the public consciousness. It's going to be in an area that arguably is one of the most high visibility industries in the world.
00:18:20
Speaker
Right now, we are just starting to see regulations that kind of get rolled out in the space of autonomous vehicles. California released just, I think, about two weeks ago their final set of draft regulations for autonomous vehicles. And I actually was starting to read through them this morning. And it's a very fast moving field. And part of the problem with relying on law to kind of set standards of behavior
00:18:47
Speaker
is that law does not move as fast as technology does. And I think that it's going to be a long time still before our legal systems and the various legal regimes governing automobiles are changed in the really critical ways that allow for the widespread deployment of autonomous vehicles. Now, what happens in the meantime,
00:19:12
Speaker
is that human drivers will continue operating motor vehicles for probably the next decade in the vast majority of cases. But I think that we're going to see a lot of specific driving functions being taken over by automated systems. And I think that one thing that I could certainly envision happening in the next 10 years is that pretty much all new vehicles, while they're on an expressway, are controlled by an autonomous system. And it's only when
00:19:42
Speaker
they get off an expressway and onto a surface street that they switch to having a human driver in control of the vehicle. So, you know, kind of little by little, we're going to see this sector of our economy get changed radically. And because I don't think that the law is well-equipped to move fast enough to manage the risks associated with it, I think that it's important to talk about the ethical issues involved because
Moral Decisions in Autonomous Vehicles
00:20:09
Speaker
In many ways, I think the designers of these systems are going to need to be proactive in ensuring that their products work to the benefit of the consumers who purchase them and the public around them that is interacting with that vehicle. And so on that note, Ryan.
00:20:28
Speaker
Yeah, I think that's supremely well put. And there's something that you said, Matt, that I want to highlight and reiterate, which is that technology moves faster than the law does. And I'll give my own mea culpa here on behalf of the field of ethics and of moral philosophy, because certainly technology often moves faster than moral philosophy moves, too. And we run into this problem again and again.
00:20:51
Speaker
One of my favorite philosophers of technology is Langdon Winner, a professor in New York. And his famous view is that we are sleepwalking into the future of technology. We're continually rewriting and recreating these structures that affect human life and how we'll live, how we'll interact with each other, what our relationships with each other are like, what we're able to do, what we're encouraged to do, what we're discouraged from doing. We continually recreate these kinds of
00:21:21
Speaker
constraints on our world, and we do it oftentimes without thinking very carefully about it. Although I might try to even heighten what he said by saying that we're not just sleepwalking into the future, but sometimes it seems like we're trying to sleep run into the future. If such a thing is possible, just because technology seems to move so fast, technology, to paraphrase or to steal a line from Winston Churchill, technology seems to get halfway around the world before moral philosophy can put its pants on.
00:21:49
Speaker
And we're seeing that happening here, I think, with autonomous vehicles. I think that there's a lot of serious ethical issues that the creation and the deployment of autonomous vehicles raise. And the tragedy to my mind is that manufacturers are still being very glib about these.
00:22:10
Speaker
For example, they find it hard to believe that the decision of how and when to brake or accelerate or steer is a morally loaded decision.
00:22:20
Speaker
But to reiterate something that I said earlier in this interview, any decision that has an effect on another person is a moral decision. And Matt, you said something similar about the law. What kinds of decisions is the law worried about? Well, any kind of decision that a human being makes that affects another person, that's something about which the law might have something to say. And the same is true for moral philosophy. Any kind of decision that has an impact on someone else's well-being, especially when it's something like trying to avoid a crash, you're talking about
00:22:47
Speaker
causing or preventing serious injury to someone or maybe even death. We know that tens of thousands of people die on US roads every year and oftentimes those crashes involve choices about who's going to be harmed and who's not, even if that's, for example, a trade-off between someone outside the car and a passenger or a driver inside the car. These are clearly morally important decisions and it seems to me that manufacturers are
00:23:16
Speaker
still trying to brush these aside. They're either saying that these are not morally important decisions, or they're saying that the answers to them are obvious, to which the hundreds of moral philosophers in the country would protest. They're certainly not always questions with obvious answers. Or if they're difficult answers, if the manufacturers admit that they're difficult answers, then they think, well, the decisions are rare enough that to agonize over them,
00:23:47
Speaker
might postpone other advancements in the technology. And that would be a legitimate concern if it were true that these decisions were rare, but there are tens of thousands of people killed on US roads and hundreds of thousands who are injured every year. So these are not rare occurrences that involve moral trade-offs between people.
00:24:05
Speaker
Okay. So I'd like to also look at autonomous weapons, which pose their own interesting ethical and legal dilemmas, I'm sure.
The Complexity of Autonomous Weapons
00:24:14
Speaker
Ryan, can you start off on that a little bit, talking about what your take on some of the ethical issues are? Sure. Autonomous weapons are
00:24:22
Speaker
interesting and fascinating, and they have perhaps an unmatched ability to captivate the public interest and the public imagination, or at least the public's nightmares. I think that's because pretty much all of the portrayals of autonomous weapons with which we're familiar are things like Terminator or 2001, if you consider HAL to be a killer robot.
00:24:44
Speaker
These are cases in which autonomous weapons are being portrayed as sort of harbingers of doom, or as these unstoppable, cold, unthinking killing machines. And so the public has a great deal of anxiety and trepidation about autonomous weapons, and I think that a lot of that is merited. So I begin with an open mind, and I begin by
00:25:04
Speaker
assuming that the public could very well be right here. There could very well be something that's uniquely troubling, uniquely morally problematic about delegating the task of deciding who should live and who should die to a machine. But once we dig into these arguments, my colleagues and I, the people that I co-author with, find it hard to pinpoint. It's extremely difficult to pinpoint exactly what's problematic about killer robots. And once again, we find ourselves
00:25:34
Speaker
plumbing the depths of our deepest moral commitments and our deepest moral beliefs, beliefs about what kinds of things are valuable and how we should treat other people and what the value of human life is and what makes war humane or inhumane. These are the questions that autonomous weapons raise. So there are some very obvious sort of practical concerns.
00:25:56
Speaker
We might think, for example, we'd be right to think today that machines probably aren't reliable enough to make decisions in the heat of battle, to make discernments in the heat of battle about which people are legitimate combatants, which people are legitimate targets, and which people are not. What kinds of people are civilians or non-combatants who should be spared?
00:26:16
Speaker
But if we imagine a far off future, if we imagine a future where robots don't make those kinds of mistakes, those kinds of empirical mistakes where they're trying to determine the state of affairs around them, where they're trying to determine not just whether someone's wearing a uniform, but for example, whether they're actively contributing to hostilities. This is the kind of language that international law uses.
00:26:39
Speaker
If we imagine a situation where robots are actually pretty good at making those kinds of decisions, where they're perhaps even better behaved than human soldiers, where they don't get confused, they don't get angry or vengeful, they don't see their comrade killed right next to them and then go on a killing spree or go into some sort of like berserker rage. And we imagine a situation where they're not racist or they don't have the kinds of biases that humans are often vulnerable to. In short, if we imagine
00:27:07
Speaker
a scenario where we can greatly reduce the number of innocent people killed in war, the number of people killed by collateral damage. This starts to exert a lot of pressure on that widely held public intuition that autonomous weapons are bad in themselves because it puts us in the position then of
00:27:28
Speaker
insisting that we continue to use human war fighters to wage war, even when we know that will contribute to many more people dying from collateral damage. And to put it simply, that's a very uncomfortable position for someone to be in. That's an uncomfortable position to defend. So those are the kinds of questions that we investigate when we think about the morality of autonomous weapons. And of course,
00:27:52
Speaker
If you're interested, I could go over lots of the moral arguments on either side, but that's a very broad bird's-eye view of where the conversation stands now. So actually, one question that comes to mind when you're talking about this: even if you can make an argument that there are ethical reasons for using autonomous weapons, I would be worried that you're going to get situations that are very similar to what we have now,
00:28:18
Speaker
where the richer, more powerful countries have these advanced technologies, and the poorer countries that don't have them are the ones that are getting attacked. I think you're absolutely right about that. That is a very real concern. It's what we might call a secondary concern about autonomous weapons, because if you had a position like that, you might say something like this. Well, there's nothing wrong intrinsically with using a robot or using a machine to decide who should live and who should die.
00:28:48
Speaker
We'll put that question to the side, but we still prefer to live in a world where nobody has autonomous weapons rather than a world in which they are unequally distributed and where this leads to problematic differentials in power or domination and where it cements those kinds of asymmetries on the international stage. If you had that position, I think you'd be quite reasonable. That could very well be my position. That's a position that I'm very, very sympathetic to, but you'll notice that
00:29:17
Speaker
That's a position that sidesteps the more fundamental question, the more fundamental moral question of what's wrong with using killer robots in warfare. Although I wholeheartedly agree that a world where no one has autonomous weapons might very much be better than a world in which some people have them and some people don't.
00:29:37
Speaker
And so one of my other questions, Matt, I think is going to be more directed towards you, and that is, especially as we're transitioning into autonomous weapons that would be more advanced:
Accountability in Autonomous Warfare
00:29:47
Speaker
How do we deal with accountability? Well, first, if you don't mind, I'd like to talk about a few of the points that Ryan made. First off, Ryan, there was almost nothing that you said that I disagree with. In fact, there was nothing that you said that I noticed that I disagree with. One thing that I want to kind of highlight is this:
00:30:04
Speaker
It really seems to me that many of the arguments against autonomous weapons are arguments that could be applied equally to almost any other type of military technology: the potential for misuse, the fact that wealthier countries are going to have easier access to them than poorer countries. The only argument I hear that is unique to autonomous weapons is that
00:30:34
Speaker
it's just morally wrong to delegate decisions about who lives and dies to a machine. But of course, that's going to be an issue with autonomous vehicles too. Autonomous vehicles will have to make split-second decisions in all likelihood about whether to take a course of action that will result in the death of a passenger in their car or passengers in another car.
00:30:56
Speaker
There are all sorts of moral trade-offs. So I don't think that we necessarily have an inherent issue in letting a machine decide whether a human life should end. And of course, Ryan almost took the words right out of my mouth when he described how there are plenty of reasons to think that, in a lot of ways, autonomous weapons would be superior at making military decisions, in terms of avoiding rash decisions that result in the loss of human life. That being said, one
00:31:25
Speaker
very real fear that I have about the rise of autonomous weapons is that they are inherently going to be capable of reacting on timescales far shorter than those on which humans can react. I can very easily imagine it reaching the point very quickly where the only way that you can counteract an attack by an autonomous weapon is with another autonomous weapon.
00:31:55
Speaker
And eventually having humans involved in the military conflict will kind of be the equivalent of, you know, bringing bows and arrows to a battle in World War II. Just there's no way that humans with their slow reaction times will be able to effectively participate in warfare. And that is a very scary scenario, because at that point,
00:32:21
Speaker
you start to wonder where human decision makers can enter into the military decision-making process that goes on in warfare. And that, I think, goes back to the accountability issue that you brought up. The issue with accountability right now is that there are very clear, well-established laws in place about who is responsible for specific military decisions,
00:32:51
Speaker
under what circumstances a soldier is held accountable, under what circumstances their commander is held accountable, under what circumstances the nation is held accountable. That's going to become much blurrier when the decisions are not being made by human soldiers at the ground level, but rather by autonomous systems. And it's going to get even more complicated as machine learning technology is incorporated into these systems, where they
00:33:20
Speaker
learn from their observations and experiences, if you want to call it that, in the field on the best way to react to different sorts of military situations. At some point, it would seem to me to be almost unfair to hold the original manufacturer of an autonomous system responsible for the things that that system learned after it was outside the creator's control.
00:33:50
Speaker
That issue is just especially palpable in the autonomous weapons sphere, because there's obviously no more stark and disturbing consequence of a bad decision than the death of a human being. And so I think that as with all other areas of autonomous system decision-making, it isn't clear where the lines of accountability will lead. And we are going to need to think about
00:34:20
Speaker
how we want to develop very specific rules, assuming that there isn't a complete ban on autonomous weapons, come up with very specific rules that make it clear where the lines of responsibility lie. And I suspect that that is going to be a very, very vigorously disputed conversation between the different interest groups that are involved in establishing the rules of warfare.
00:34:46
Speaker
And so I want to sort of keep in line with this, but also sort of change direction.
Corporate Self-Regulation and Government Use of AI
00:34:51
Speaker
Matt, in recent talks, you've mentioned that you're less concerned now about regulations for corporations because it seems like at the moment, corporations are making some effort to essentially self-regulate.
00:35:04
Speaker
I was hoping you could talk a little bit about that. But I'm also really interested in how that compares to, say, concerns about government misusing AI and whether self-regulation is possible with government or whether we need to worry about that at all. Right. This is actually a subject that's very much at the forefront of my mind at the moment because
00:35:27
Speaker
the next paper that I'm planning to write is kind of a follow-up to a paper I wrote a couple of years ago on regulating artificial intelligence systems. It's not so much that I have great faith in corporations and businesses to act in the best interest of society. There are serious problems with self-regulation that are difficult to overcome. And one of those is what I call the fox-guarding-the-henhouse problem: if
00:35:55
Speaker
an industry is going to come up with rules that govern it, well, they're going to come up with rules that benefit the industry rather than rules that benefit the broader public, or at least that's where the incentives for them lie. And that's proven to be basically an insurmountable obstacle for self-regulation in the vast majority of sectors really over the past couple of centuries. But that timescale
00:36:22
Speaker
the past couple of centuries is very important because the past couple of centuries is when the industrial revolution happened. And that was really a sea change moment, not just in our economy and society, but also in the law. Basically every regulatory institution that you think of today, whether it is a government agency, whether it is kind of the modern system of products liability,
00:36:51
Speaker
in the United States, whether it's legislative oversight of particular industries, that is all essentially a creation of the post-industrial world. Governments realize that as large companies started to increasingly dominate sectors of the economy, the only way to effectively regulate an increasingly centralized economy is to have a centralized form of regulation.
00:37:21
Speaker
Now, we are living, I think, in an age with the advent of the internet that is an inherently decentralizing force. And so I wouldn't say that it is so much that I'm confident that companies are going to be able to self-regulate. But in a decentralizing world, we're going to have to think of new paradigms on how we want to regulate and govern the behavior of economic actors. And some of the old
00:37:50
Speaker
systems of regulation or risk management that existed before the Industrial Revolution might make more sense now that we're having a trend towards decentralization rather than centralization. It might make sense to re-examine some of those decentralized forms of regulation, and one of those is industry standards and self-regulation. And I think that one reason why I am particularly hopeful in the sphere of AI
00:38:20
Speaker
is that there really does seem to be kind of a broad interest among the largest players in the AI world to proactively come up with rules of ethics and transparency in many ways that we generally just haven't seen in the age since the Industrial Revolution.
00:38:44
Speaker
One of the reasons that self-regulation hasn't traditionally worked isn't just the fox-guarding-the-henhouse problem. It's also that companies are inherently skeptical of letting on anything that might let other companies know what they're planning to do. There obviously is certainly a good deal of that; people are not making all AI code open source right now. But there is a much higher tolerance for transparency, it seems, in AI
00:39:14
Speaker
than there is in previous generations of technology. And I think that's good because, again, in an increasingly decentralized world, we're going to need to come up with decentralized forms of risk management. And the only way to effectively kind of have decentralized risk management
00:39:33
Speaker
is to make sure that the key safety critical aspects of the technology are understandable and known by the individuals or groups that are tasked with regulating it.
00:39:48
Speaker
And so does that also translate to concerns about say government misusing autonomous weapons or misusing AI to spy on their citizens or something? How can we make sure that governments aren't also causing more problems than they're helping? Well, that is a question that humans have been asking themselves, I think for the past 5,000 plus years. And I don't think we're going to have a much easier time with it
00:40:18
Speaker
at least in the early days of the age of intelligent machines than we have in the past couple of centuries.
Need for International AI Agreements
00:40:26
Speaker
Governments have a very strong interest in basically looking out for the interests of their countries. And one kind of macro trend, unfortunately, on the world stage today is an increase in nationalist tendencies.
00:40:45
Speaker
And that kind of leads me to be more concerned than I would have been, say, 10 years ago, that these technologies are going to be co-opted by governments, and kind of ironically, that it's going to be the governments rather than the companies that are the greatest obstacle to transparency, because they will want to establish some sort of national monopoly on the technologies within their borders.
00:41:15
Speaker
It's kind of an interesting dynamic. I feel like Google and Google DeepMind and Microsoft and Facebook and Amazon and a lot of these companies are much higher on the idea of encouraging transparency and multinational cooperation than governments are. And that's exactly the opposite trend of kind of what we have come to expect over at least the last several decades. And Ryan, did you want to weigh in on some of that?
00:41:45
Speaker
Yeah, I think that it is an interesting reversal in the trend and it makes me wonder how much of that is due to the diffusion of the internet and its ability to make that kind of decentralized management or decentralized cooperation possible. There's one thing that I would like to add and it's sort of double-edged. I think that international norms of cooperation can be valuable. They're not totally toothless. We see this, for example, when it comes to the Ottawa Treaty, the treaty that banned
00:42:15
Speaker
landmines. So the United States is not a signatory to the Ottawa Treaty that banned anti-personnel landmines, but because so many other countries are
00:42:25
Speaker
there exists a very strong norm, a sort of informal stigma attached to it, even for the United States, such that if we used something like anti-personnel landmines in battle, we'd face the kind of backlash or criticism that's probably equivalent, or roughly equivalent, to what we'd face if we had been signatories of that treaty. So international norms of cooperation and these sorts of international agreements, they're good for something. But we often find that they're also fragile at the same time.
00:42:56
Speaker
So for example, in the Western world, there has existed and there still exists a kind of informal agreement that we're not going to experiment on human embryos, or that we're not going to experiment by modifying the genetics of human embryos, say, for example, with the CRISPR enzyme that we know can be used to modify the genetic sequence of embryos. So it was a bit of a shock a year or two ago when some Chinese scientists announced that they were doing just that.
00:43:25
Speaker
And I think it was a bit of a wake-up call to the West to realize, oh, well, we have this sort of shared understanding of moral values and this shared understanding of things like we'd call it the dignity of human life or something like that. And it's deeply rooted probably in the Judeo-Christian tradition and it goes back several thousands of years and it unites these different nation states because it's part of our shared cultural heritage.
00:43:52
Speaker
But those understandings aren't universal, and those norms aren't universal. And I think it was a valuable reminder that when it comes to things that are as significant as, say, modifying the human genome with enzymes like CRISPR, or probably with autonomous weapons and artificial intelligence more generally, those kinds of inventions, they're
00:44:15
Speaker
so significant and they have such profound possibilities for reshaping human life that I think we should be working very stridently to try to arrive at some international agreements that are not just toothless and not just informal.
00:44:30
Speaker
So I want to, well, I sort of want to go in a different direction and ask about fake news, which seems like it should have been a trivial issue and yet it's being credited with impacting things like presidential elections.
AI's Role in Fake News and Public Discourse
00:44:44
Speaker
And so obviously there are things like
00:44:46
Speaker
libel and slander laws, but I don't really know how those apply to issues like fake news. And I'm interested in, and fearful about, the idea that as AI technologies improve, we're going to be able to do things like take videos of people speaking and really change what they're saying, so it sounds like someone said something completely different on video, which will exacerbate the fake news problem. I was really interested in what you both think of this from a legal standpoint and an ethical standpoint.
00:45:16
Speaker
Fake news almost distills to its essence the issue of decentralization, the problems of the kind of democratization of society that the internet has brought about, and the risks associated with that. And the reason is that not that long ago, in my parents' generation, the way that you got the news was from the newspaper.
00:45:43
Speaker
That was the kind of sole trusted source of news for many people. And in many ways that was a great system because everybody kind of had this shared set of facts, this shared set of ideas about what was going on in the world. And over time that got diluted somewhat with the advent of TV and with, you know, increasing reliance on radio news as well.
00:46:09
Speaker
But by and large, there was still a fairly limited number of outlets, all governed by similar journalistic standards and all adhering to broadly shared norms of conduct. The rise of the internet has kind of opened up an opportunity for that to get completely tossed by the wayside. And I think that the fake news debacle that really happened in the presidential election last year is a perfect example of that.
00:46:36
Speaker
because there are now so many different sources for news. There are so many news outlets available on the internet. There are so many different sources of information that people can access that don't even purport to be news, and those that do purport to be news but really are just opinions or, in some cases, completely made-up stories. In that sort of environment, it becomes increasingly difficult to
00:47:04
Speaker
decide what is real and what is not and what is true and what is not. And there is a loss that we are starting to see in our society of kind of that shared knowledge of facts. We're not starting to see it. We've already lost a good bit of that. There are literally different sets of not just worldviews, but of worlds that people see around them. And
00:47:30
Speaker
The fake news problem is really that a lot of people are going to believe any story that they read that fits with their preexisting conception of what the world is. And I don't know that the law has a good answer to that. And part of the reason is that a lot of these fake news websites aren't commercial in nature. They're not intentionally trying to make large amounts of money on this. So even if a fake news story does monumental damage,
00:47:58
Speaker
The person who created that content is probably not going to be a source of accountability. You're not going to be able to recoup the damages to your reputation from that person or that entity. And it seems kind of unfair to blame platforms like Facebook, frankly, because it would almost be useless to have a Facebook where Facebook had to vet every link that somebody sent out before it could get sent out over their servers.
00:48:28
Speaker
That would eliminate kind of the real-time community building that Facebook tries to encourage. So it's really an insurmountable problem. And it's really, I think, an area where it's just difficult for me to envision how the law can manage that, at least unless we come up with kind of these new regulatory paradigms that reflect the fact that our world is going to be increasingly less centralized than it has been during the industrial age. And Ryan?
00:48:57
Speaker
Yeah, I can say a little bit about that, and I'll try to keep it at arm's length because, at least for me, it's a very frustrating topic. Going back to the 2008 election is where I can pinpoint at least my awareness of this and my worry about it, when I think The New York Times ran a story about John McCain that was not very favorable. I think it was a story that alleged that he had a secret mistress.
00:49:23
Speaker
And I remember John McCain's campaign manager going on TV and saying that the New York Times was hardly a journalistic organization. And I thought, this is not a good sign. This is not a good sign if we're no longer agreeing on the facts of the situation.
00:49:40
Speaker
and then disagreeing about what is to be done. Say, disagreeing about how we make trade-offs between personal initiative versus redistribution to help the less fortunate or something. I mean, the sort of classic conversations that America's been having with itself for hundreds of years. We're not even having those kinds of conversations because we can't even agree on what the world looks like out there, and we can't even agree about who's a trusted messenger. The paper of record, The New York Times, The Washington Post, they've been just relentlessly
00:50:09
Speaker
assailed as unreliable sources and that's really a troubling development. I think that it crested or it came to a head, although that implies that it's now
00:50:21
Speaker
entering a trough, that it's now on the downswing, and I'm not sure that that's true. But at least there was a kind of climax of this in the 2016 election, where fake news stories were some of the most widely shared articles on Facebook, for example. And I think that this plays into some human weaknesses like confirmation bias and the other cognitive biases that we've all heard of. And personally, I think it's just an unmitigated catastrophe for the public discourse in the country.
00:50:48
Speaker
I think we might have our first official disagreement between me and Matt in the time that we've been speaking because I'm slightly less sympathetic to Facebook and their defense. Mark Zuckerberg has said, for example, that he wants Facebook to be everyone's quote, primary news experience, and they have
00:51:05
Speaker
the possibility to control not which news stories appear on their site, but which news stories are promoted on their site, and they've exercised that capacity. They exercised that capacity last year in the early days of the campaign, and they attracted a great deal of controversy, and they backed off from that. They removed their human moderators from the trending news section, and three days later, we find fake news stories in the top trending news sections when they're being moderated by algorithms.
00:51:33
Speaker
I'm a little less sympathetic to Facebook because I don't think that they can play the role that would have traditionally been filled by a newspaper editor, and profit off it, and declare it their intention to fill that role in society, and then totally wash their hands of any kind of obligation to shape the public discourse responsibly. So I wish that they were doing more. I wish that they were at least accepting the responsibility for that.
00:51:59
Speaker
I should hurry to add that they have accepted responsibility recently, and they've implemented some new features to try to curtail this. But as some of my other colleagues have pointed out, it's not obvious how optimistic we should be about those features. So, for example, I laugh, but it's a grim, ironic laughter, because one of Facebook's features to try to combat this is to allow users to flag stories as suspicious. But of course, if users were reliable detectors of which stories were
00:52:29
Speaker
accurate and which were fake, we wouldn't be in this predicament. So it's not clear how much that can really do to solve the problem. So I think that this is
00:52:38
Speaker
a pretty significant tragedy for American political discourse, especially when the stakes are so high and they're only getting higher with things like climate change, for example, or income inequality or the kinds of things that we've been talking about today. It's more important than ever that Americans are able to have mature, intelligent, informed, careful conversations about the matters that affect them and affect several billion other people that we share this planet with.
00:53:04
Speaker
I don't, however, have a quick and easy solution. I'm afraid that for now I'm just left sort of wringing my hands with worry and there's not much else I can think to do about it, at least for the time being. And Matt? I think that kind of what Facebook is going to have a problem with is the same issue that the operators of any internet site have with hacker bots. You know, those internet captchas that
00:53:33
Speaker
you see when you try to log into a website that tries to test if you're human. There's a reason that they change every few months, it seemed like, and that's because people have figured out how to get past the previous ways of filtering out these bots that were not actual people trying to log in. And I think that you're gonna see, unfortunately, a similar phenomenon with Facebook. They could make their best possible efforts to root out fake news
00:54:00
Speaker
And I could easily see that not being enough. And it's because their user base is so filled with people who are only really looking to see affirmation of their own world. And if that's kind of the mindset that we have in society, I don't know that Facebook is really going to be able to design their way around that. And in many ways, I think that kind of the loss of a shared world
00:54:29
Speaker
is going to raise even more thorny and difficult to resolve legal and ethical questions than the rise of artificial intelligence. Okay, is there anything else that you think is important for people to know about that you're thinking about or something along those lines?
Conclusion: Ongoing Ethical and Legal Discussions About AI
00:54:48
Speaker
Ryan? Yeah, I think if I could just leave people with a sort of generic injunction or just a sort of generic
00:54:56
Speaker
piece of advice. One of the contributions that we in moral philosophy see ourselves making to these conversations is not always offering clear-cut answers to things. You'll find that philosophers, surprise, surprise, often disagree among themselves about the right answers to these questions. But there is still a great deal of value in appreciating when we're running roughshod over questions that we didn't even know existed.
00:55:21
Speaker
That, I think, is one of the valuable contributions that we can make here: to think carefully about the way that we behave, the way that we design our machines to interact with one another, and the kinds of effects that they'll have on society. And I would just caution people to be on the lookout for moral questions and moral assumptions that are being made, that are lurking in places where we didn't expect them to be hiding.
00:55:49
Speaker
And it's been a continual frustration that pops up every now and then to hear people sort of wave their hand at these things or to try to wave them away when the moral philosophers are busy pounding their fists.
00:56:01
Speaker
Something that we've been trying to do is to get out and engage with the public, and to engage more with manufacturers and creators of artificial intelligence, to help them realize that these are very serious questions. They're not easy to answer. They're controversial, and they raise some of the deepest questions
00:56:19
Speaker
that we've been dedicating ourselves to and that our profession has been focused on for thousands of years. And they're worth taking seriously and they're worth thinking about. And I will say that it's endearing and reassuring that people are taking these questions very seriously when it comes to artificial intelligence. And I think that the advances that we've seen in artificial intelligence in the last couple of years have been the impetus for that, the impetus for the sort of turn towards the ethical implications of the things that we create.
00:56:49
Speaker
Thank you. And Matt? I'm also heartened by the amount of interest that not just people in the legal world, but in many different disciplines have taken in the legal, ethical and policy implications of AI. And I think that it's important to have kind of these open dialogues where the issues that are created by not just artificial intelligence, but
00:57:15
Speaker
by the kind of parallel changes in society that are occurring, how that impacts people's lives, and what we can do to make sure that it doesn't do us more harm than good. And I'm very glad that I got to hear Ryan's point of view on this. I think that on a lot of these issues, lawyers and legal scholars could very much stand to think about the broader ethical questions behind them.
00:57:42
Speaker
for no other reason than because I think that the law is becoming a less effective tool for kind of managing the societal changes that are happening. And I don't think that that will change unless we think through the ethical questions and the moral dilemmas that are going to be presented by a world in which decisions and actions are increasingly undertaken by machines rather than people.
00:58:10
Speaker
Excellent. Well, thank you very much. I really enjoyed talking with both of you. Yeah, thank you. Thanks. To learn more, visit futureoflife.org.