
AI Ethics, the Trolley Problem, and a Twitter Ghost Story with Joshua Greene And Iyad Rahwan

Future of Life Institute Podcast
As technically challenging as it may be to develop safe and beneficial AI, this challenge also raises some thorny questions regarding ethics and morality, which are just as important to address before AI is too advanced. How do we teach machines to be moral when people can't even agree on what moral behavior is? And how do we help people deal with and benefit from the tremendous disruptive change that we anticipate from AI? To help consider these questions, Joshua Greene and Iyad Rahwan kindly agreed to join the podcast. Josh is a professor of psychology and a member of the Center for Brain Science faculty at Harvard University. Iyad is the AT&T Career Development Professor and an associate professor of Media Arts and Sciences at the MIT Media Lab.
Transcript

Ethical Challenges in AI

00:00:07
Speaker
I'm Ariel Conn with the Future of Life Institute. As most of our listeners will know, we are especially concerned with ensuring AI is developed safely and beneficially. But as technically challenging as that may be, it also raises some thorny questions regarding ethics and morality, which are just as important to address before AI is too advanced. Two of the biggest questions we face are:
00:00:29
Speaker
How do we teach machines to be moral when people can't even agree on what moral behavior is? And how do we help people deal with and benefit from the tremendous disruptive change that we anticipate from AI?

Moral Judgment in AI

00:00:42
Speaker
To help consider these questions, Joshua Greene and Iyad Rahwan kindly agreed to join the show today. Josh is a professor of psychology and member of the Center for Brain Science faculty at Harvard University. For over a decade, his lab has used behavioral and neuroscientific methods to study moral judgment, focusing on the interplay between emotion and reason in moral dilemmas.
00:01:04
Speaker
His more recent work examines how the brain combines concepts to form thoughts and how thoughts are manipulated into reasoning and imagination. Other interests include conflict resolution and the social implications of advancing artificial intelligence.

Scalable Cooperation in AI

00:01:18
Speaker
He's the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.
00:01:24
Speaker
Iyad is the AT&T Career Development Professor and an associate professor of Media Arts and Sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. A native of Aleppo, Syria, Iyad holds a PhD from the University of Melbourne, Australia, and is affiliate faculty at the MIT Institute for Data, Systems, and Society. Josh and Iyad, thank you so much for being here. Thanks for having us. Thanks for having us.

AI and Decision-Making

00:01:52
Speaker
So the first thing I want to start with, very broadly and somewhat quickly: how do we anticipate that AI and automation will impact society, especially in the next few years?
00:02:07
Speaker
I think that obviously there are long-term implications of artificial intelligence technology, which are very difficult to anticipate at the moment. But if we think more in the short term, I think AI basically has the potential to extract better value from the data we already have and that we're collecting from all the gadgets and devices and sensors around us. And the idea is that we could use this data to make better decisions.
00:02:35
Speaker
whether it's micro decisions in an autonomous car that takes us from A to B, safer and faster, or whether it's medical decision making that enables us to diagnose diseases better, or whether it's even scientific discovery allowing us to do science more effectively and efficiently and more intelligently. So I think AI is basically a decision technology. It's a technology that will enable us to make
00:03:05
Speaker
better use of the data we have and make more informed decisions. Josh, did you want to add something to that?
00:03:14
Speaker
Yeah, so I agree with what Iyad said. Putting it a different way, you can think of artificial intelligence as adding value by, as he said, enabling us to extract more information and make better use of that information in building things and making decisions.

Impact on Jobs and Economy

00:03:29
Speaker
It also has the capacity to displace human value. You know, to take one of the most widely discussed examples these days of
00:03:37
Speaker
using artificial intelligence to promote medicine, to diagnose disease. On the one hand, it's wonderful if you have a system that has taken in all of the medical knowledge we have in a way that no human could and uses it to make better decisions. That's a wonderful thing.
00:03:52
Speaker
But at the same time, that also means that lots of doctors might be out of a job or have a lot less to do than they otherwise might. So this is the double-edged sword of artificial intelligence, the value it creates and the human value that it displaces. So I'm going to want to come back to that because I think there's a lot of interesting questions that I have surrounding

The Trolley Problem in AI

00:04:14
Speaker
that. But first, I want to dive into what I consider something of a sticky subject, and that is the trolley problem.
00:04:22
Speaker
and how it relates to autonomous vehicles. And so a little bit of background for me first, for listeners who aren't familiar with this, we anticipate autonomous vehicles will be on the road more and more. And one of the big questions is what do they do? How do they decide who gets injured if they know an accident is coming up?
00:04:42
Speaker
And I would love for one of you to explain what the trolley problem is and how that connects to this question of what do autonomous vehicles do in situations where there is no real good option.
00:04:57
Speaker
So the trolley problem is a set of moral dilemmas that philosophers have been thinking about, arguing about, for many decades. And it has also served as a kind of platform for thinking about moral decision making in psychology and neuroscience. So one of the original versions of the trolley problem goes like this. We'll call it the switch case.
00:05:15
Speaker
The trolley is headed towards five people, and if you don't do anything, they're going to be killed. But you can hit a switch that will turn the trolley away from the five and onto a sidetrack. However, on that sidetrack, there's one unsuspecting person, and if you do that, that person will be killed. And so the question is, is it okay to hit the switch to save those five people's lives, but at the cost of one life? And in this case, most people tend to say, yes, it's okay to hit the switch. Some people would say, you must hit the switch.
00:05:43
Speaker
And then we can vary it a little bit. So in one of the best known variations, which we'll call the footbridge case, the situation is different as follows. The trolley is now headed towards five people on a single track. Over that track is a footbridge and on that footbridge is a large person, or if we don't want to talk about large people, say a person wearing a very large backpack.
00:06:01
Speaker
And you're also on the bridge. And the only way that you can save those five people from being hit by the trolley is to push that big person off of the footbridge and onto the tracks below. And of course, you may think, why can't I jump myself? Well, the answer is you're not big enough to stop a train because you're not wearing a big backpack like that.
00:06:21
Speaker
How do I know this will work? And the answer is, this is the movies. Let's say we know you can suspend disbelief, assume that it will work. Do you think it's okay, even making those unrealistic assumptions, to push the guy off the footbridge in order to save five lives? Here, most people say no. And so we have this interesting paradox that if we accept our assumptions, in both cases, you're trading one life for five, yet in one case, it seems like it's the right thing to do. In the other case, it seems like it's the wrong thing to do.
00:06:48
Speaker
at least to most people. So philosophers have gone back and forth on these cases and tried to use them as a way to articulate a moral theory that would get the right answer, or in particular try to come up with a justification or an explanation for why it is wrong to push the guy off the footbridge, but not wrong to hit the switch. And so this has been a kind of moral paradox.
00:07:12
Speaker
I and other researchers in psychology and neuroscience have said, hmm, well, independent of what's actually right or wrong in these cases, there's an interesting bit of psychology here. What's going on in people's heads that makes them say that it's wrong to push the guy off the footbridge? And that's different from the switch case, where people are willing to go with the utilitarian answer, that is, the answer that produces the best overall consequences, in this case, saving five lives, albeit at the cost of one.
00:07:37
Speaker
So we've learned a lot about moral thinking by studying how people respond to variations on these dilemmas. One of the classic objections to these dilemmas, to using them for philosophical or psychological purposes, is that they're somewhat or even very unrealistic. My view as someone who's been doing this for a long time is that the point is not that they're realistic, but instead that they
00:08:00
Speaker
function like high contrast stimuli. Like if you're a vision researcher and you're using something like flashing black and white checkerboards to study the visual system, you're not using that because that's a typical thing that you look at. You're using it because it's something that drives the visual system in a way that reveals its structure and dispositions.

Ethical Programming in Autonomous Vehicles

00:08:16
Speaker
And in the same way, I think that these high contrast extreme moral dilemmas can be useful to sort of sharpen our understanding of the more ordinary processes that we bring to moral thinking.
00:08:26
Speaker
Now, fast forward from those early days when I first started doing this research, and now actually, trolley cases are, at least according to some of us, a bit closer to being realistic. That is to say, autonomous vehicles have to make decisions that are in some ways similar to these trolley cases, although in other ways, I would say that they're quite different. And this is where Iyad's lovely work comes in.
00:08:51
Speaker
I can, I guess, take things from this point. Thank you, Josh, for a very eloquent and concise explanation of the trolley problem. Now, when it comes to autonomous vehicles, this is, I think, a very new kind of product, which has two kinds of features. One is that autonomous vehicles are, you know, at least promised to be intelligent, adaptive,
00:09:16
Speaker
entities that have a mind of their own, so they have some sort of agency. And they also make decisions that have life or death consequences on people, whether it's people in the car or people on the road. And I think as a result, people are really concerned that product safety standards and old ways of regulating products that we have
00:09:40
Speaker
may not work in this situation, in part because the behavior of the vehicle may eventually become different from what the person who programmed it intended, or because the programming just has to deal with such a large number of possibilities that it's really difficult to trust a programmer with making these kinds of ethical judgments or morally consequential judgments, at least without supervision or input from other people.
00:10:04
Speaker
So in the case of autonomous cars, obviously the trolley problem can translate in a cartoonish way to a scenario in which an autonomous car is faced with only two options, you know, the car may
00:10:19
Speaker
be, let's say, going at the speed limit or below the speed limit down the street, and for some reason, due to mechanical failure or something like that, is unable to stop and is going to hit a group of pedestrians, let's say five pedestrians. The car can swerve and hit a bystander. Should the car swerve, or should it just plow through the five pedestrians? This has a structure that is very similar to the trolley problem,
00:10:46
Speaker
because you're making similar trade-offs between one and five people. But the decision is not being taken on the spot; it's actually happening at the time of the programming of the car, which I think can make things a little bit different.
00:11:00
Speaker
There is another complication, which is that we can imagine situations in which the person being sacrificed to save a greater number of people is the person in the car. For instance, suppose the car can swerve to avoid the five pedestrians, but as a result falls off a cliff or crashes into a wall harming the person in the car.
00:11:20
Speaker
So that, I think, also adds another complication, especially since programmers are going to have to program these cars to appeal to customers. And if the customers don't feel safe in those cars because of some hypothetical situation that may take place in which they're sacrificed, that pits, I think, the financial incentives against the potentially socially desirable outcome, which can create problems. And I think these are some of the reasons why people are concerned about these scenarios.
00:11:49
Speaker
Now, obviously a question that arises is, is this a good idea? Is this ever going to happen? One can argue it's going to be extremely unlikely. How many times do we face these kinds of situations as we drive today?
00:12:05
Speaker
So the argument goes that these situations are going to be so rare that they are irrelevant, and that autonomous cars promise to be so much safer than the human-driven cars we have today that the benefits significantly outweigh the costs. And I think there's obviously truth to this argument if you take the trolley problem scenario literally.
00:12:29
Speaker
But I think what the trolley problem is doing, or the autonomous car version of the trolley problem is doing, is it's abstracting the trade-offs that are taking place every microsecond, even now. So imagine at the moment you're driving on the road and then there is a large truck on the lane to your left.
00:12:49
Speaker
And as a result, you choose to stick a little bit further to the right, just to minimize risk in case this truck gets out of its lane, for instance. And we do these things without even noticing. We do it kind of instinctively.
00:13:05
Speaker
Now, suppose that there is a cyclist on the right-hand side, or there could be a cyclist later on on the right-hand side. What you're effectively doing in this small maneuver is slightly reducing risk to yourself, but slightly increasing risk to the cyclist. And we do this all the time, and you can imagine a whole bunch of situations in which, you know, these sorts of decisions are being made millions and millions of times every day. For example, do you stay closer to the car in front of you or closer to the car
00:13:34
Speaker
behind you as you are on the highway. If you're faced with a difficult maneuver, do you break the law by moving across the other lane? Or do you kind of stick it out and just smash into the car in front of you, if the car in front of you stops all of a sudden? We just use instinct nowadays to deal with these situations, which is why we don't really think about it a lot.
00:13:55
Speaker
And it's also in part because we can't reasonably expect humans to make reasoned, well thought out judgments in these split second situations. But now we have the luxury of deliberation about these problems. And with the luxury of deliberation comes the responsibility of deliberation. And I think this is the situation that we're facing ourselves at the moment.
00:14:16
Speaker
So one of the issues that I have with applying the trolley problem to self-driving cars, at least from what I've heard of other people talking about it, is that so often it seems to be forcing the vehicle, and thus the programmer of the vehicle, to make a judgment call about whose life is more valuable.

Societal Implications of AI in Vehicles

00:14:35
Speaker
And I'm wondering, are those the parameters that we actually have to use? Can we not come up with some other parameters that people would agree are
00:14:43
Speaker
moral and ethical, and that don't necessarily have to say that one person's life is more valuable than someone else's. I don't think that there is any way to avoid doing that. I think the question is just how directly and explicitly you are going to take on the question, or are you just going to set things in motion and hope that they turn out well. But I think the example that Iyad gave was lovely,
00:15:07
Speaker
that you're in that situation and the question is, do you pass or get closer to the cyclist versus get closer to the truck that might hurt you? There is no way to avoid answering that question. Another way of putting it is, if you're a driver, there's no way to avoid answering the question, how cautious or how aggressive am I going to be?
00:15:24
Speaker
how worried about my own safety, or about the safety of other people, or about my own convenience and getting to where I want to go, am I going to be? You can't not answer the question. You can choose not to answer it explicitly. You can say, I don't want to think about that, I just want to drive and see what happens. But you are going to be implicitly answering that question through your behavior. And in the same way, autonomous vehicles can't avoid the question. Either the people who are designing the machines,
00:15:51
Speaker
training the machines, or explicitly programming them to behave in certain ways are going to do things that are going to affect the outcome, either with a specific outcome in mind, or without a specific outcome in mind but knowing that outcomes will follow from the choices that they're making. So I think that we may not often face situations that are
00:16:11
Speaker
very starkly, cartoonishly trolley-like, but as Iyad said, the cars will constantly be making decisions that are not just about navigation and control of the vehicle, but that inevitably involve value judgments of some kind.
00:16:27
Speaker
And to what extent have we actually asked customers what it is that they want from the car? I mean, I personally, I would swerve a little bit away from the truck if it's a narrow lane. I grip the steering wheel tighter when there is a cyclist on the other side of me and just try to get past as quickly as possible. I guess in a completely ethical world, my personal position is I would like the car to protect the person who's more vulnerable, who would be the cyclist.
00:16:57
Speaker
In practice, what I would actually do if I were in that situation, I have a bad feeling I'd probably protect myself. But I personally would prefer that the car protect whoever is most vulnerable. Have other people been asked this question, what sort of results are we getting?
00:17:11
Speaker
Well, I think that your response actually very much makes the point that it's not really obvious what the right answer is, because on one hand we could say we want to treat everyone equally. On the other hand, you have this self-protective instinct, which presumably, as a consumer, is what you want to buy for yourself and your family. And at the same time, you also care for vulnerable people.
00:17:36
Speaker
And different reasonable and moral people can disagree on what the more important factors and considerations should be. And I think this is precisely why we have to think about this problem explicitly, rather than leave it purely to, you know, whether it's programmers or car companies or anyone else, any particular single group of people to decide.
00:17:59
Speaker
I think when we think about problems like this, we have a tendency to binarize it and say, you know, so Ariel, you said, well, I think I would want to protect the most vulnerable person. But it's not a binary choice between protecting that person or not.
00:18:16
Speaker
it's really going to be a matter of degrees, right? So imagine there's a cyclist in front of you going at cyclist speed, and you either have to wait behind this person for another five minutes, creeping along much slower than you would ordinarily go, or you have to swerve into the other lane where there's oncoming traffic at various distances.
00:18:35
Speaker
Very few people might say, I will sit behind this cyclist for ten minutes before I would go into the other lane and risk damage to myself or another car. But very few people would, we'd hope, just blow by the cyclist in a way that really puts that person's life in peril. So the point here is that it's a very hard question to answer because
00:18:58
Speaker
the answers that we have to give either implicitly or explicitly don't come in the form of something that you can write out in a sentence like give priority to the cyclist. You have to say exactly how much priority in contrast to the other factors that will be in play for this decision, right? And that's what makes this problem so interesting and also devilishly hard to think about.
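To make that point concrete, here is a purely illustrative sketch, not taken from any actual autonomous-vehicle system, of how "exactly how much priority" can end up being encoded: as numeric weights in a trajectory cost function. Every name and number below is hypothetical.

```python
# Illustrative sketch only: encoding "how much priority" as explicit weights
# in a trajectory cost function. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Trajectory:
    risk_to_occupant: float   # estimated probability of harm to the passenger
    risk_to_cyclist: float    # estimated probability of harm to the cyclist
    risk_to_others: float     # estimated probability of harm to anyone else
    delay_seconds: float      # inconvenience of the maneuver

# The weights are where the value judgment lives: how much the cyclist's
# safety counts relative to the occupant's safety and to convenience.
WEIGHTS = {"occupant": 1.0, "cyclist": 1.5, "others": 1.0, "delay": 0.01}

def cost(t: Trajectory) -> float:
    """Lower cost = preferred maneuver under these explicit value weights."""
    return (WEIGHTS["occupant"] * t.risk_to_occupant
            + WEIGHTS["cyclist"] * t.risk_to_cyclist
            + WEIGHTS["others"] * t.risk_to_others
            + WEIGHTS["delay"] * t.delay_seconds)

# Example: hug the truck (slightly more risk to the occupant) versus drift
# toward the cyclist (slightly more risk to the cyclist).
hug_truck = Trajectory(0.002, 0.0005, 0.001, 0.0)
drift_right = Trajectory(0.001, 0.002, 0.001, 0.0)
chosen = min([hug_truck, drift_right], key=cost)
```

Whatever numbers sit in a table like WEIGHTS, they are an answer to "how much priority in contrast to the other factors," whether or not anyone wrote them down deliberately.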
00:19:23
Speaker
And I guess maybe this is something of psychological interest to me, but why do you think this is something that we have to deal with when we're programming something in advance, and not something that we as a society should be addressing when it's people driving?
00:19:38
Speaker
I think that we basically have an unfortunate situation today in our society in which we very much value the convenience of getting from A to B, and we know that there are many, you know, very frequent situations that put our own lives and other people's lives at risk as we conduct this activity.
00:20:01
Speaker
In fact, our lifetime odds of dying from a car accident are more than 1%, which I find an extremely scary number, given all of the other things that can kill us. Yet somehow we've decided to put up with this because of the convenience, and at the same time we cannot really blame people for not making a considered judgment as they make all of these maneuvers. So we sort of learn to live with it as long as people don't
00:20:30
Speaker
you know, run through a red light or drive drunk. We don't really blame them for fatal accidents; we just call them accidents. But now, thanks to autonomous vehicles that can make decisions and re-evaluate situations, you know, hundreds or thousands of times per second and adjust their plan and so on, we potentially have the luxury to make those decisions a bit better. And I think this is why things are different now.
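As a rough sanity check on that lifetime figure, here is a back-of-the-envelope calculation. The inputs are approximate US numbers assumed for illustration, not figures cited in the conversation.

```python
# Back-of-the-envelope check of the "about 1%" lifetime figure.
# Inputs are approximate US values assumed for illustration only.
annual_road_deaths = 37_000        # roughly the mid-2010s US annual toll
population = 323_000_000           # approximate US population
life_expectancy_years = 79         # approximate US life expectancy

annual_risk = annual_road_deaths / population          # ~1 in 8,700 per year
lifetime_risk = annual_risk * life_expectancy_years    # ignores age structure and trends

print(f"annual risk   ~ {annual_risk:.5f}")
print(f"lifetime risk ~ {lifetime_risk:.3f}")          # ~0.009, on the order of 1%
```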
00:20:57
Speaker
I also think we have a behavioral option with humans that we don't have with self-driving cars, at least for a very long time, which is with a human, we can say, look, you're driving, you're responsible, and if you make a mistake and hurt somebody, you're going to be in trouble and you're going to pay the cost. You can't say that to a car, even a car that's very smart by 2017 standards.
00:21:22
Speaker
the car isn't going to be incentivized to behave better; it has to have the capacity, and the motivation has to be explicitly trained or programmed in.

AI's Economic Impact

00:21:37
Speaker
So you don't have to
00:21:39
Speaker
do that with humans. You just tell them what the outcome expectation is, and humans, after millions of years of biological evolution and thousands of years of cultural evolution, are able to do a not great but good enough job at that. But the next generation of self-driving cars, while they'll be very intelligent as cars go, are not going to be as intellectually developed as humans. We have to make things explicit in a way that we don't for people.
00:22:08
Speaker
And to follow up on that, I've spoken to some economists about this, and they think in terms of liability. So they say, well, you can incentivize the people who make the cars to program them appropriately by fining them and engineering product liability law in such a way that it will hold them accountable and responsible for damages in case something goes wrong.
00:22:35
Speaker
But I think that could very well work, and this may be the way in which we implement this feedback loop. But I think the question remains, what should the standards be against which we hold those cars accountable?
00:22:48
Speaker
That's an important question. And also, let's say somebody says, okay, I make self-driving cars and I want to be accountable. I want to make them safe because I know I'm accountable. They still have to program or train the car. So there's no avoiding that step, whether it's done through traditional legalistic incentives or other kinds of incentives. Okay. So I would love to keep asking questions about this, but there are other areas of research that you both cover that I want to get into.
00:23:23
Speaker
We know that, well, we have very good reason to expect AI to significantly impact everything about life in the future. And it looks like, Iyad, based on some of your research, that how AI and automation impact us could be influenced by where we live, and more specifically, whether we live in smaller towns or larger cities. And I was hoping you could talk a little bit about your work there and what you've found.
00:23:52
Speaker
Sure. So obviously a lot of people are talking about the potential impact of AI on the labor market and on employment. And I think this is a very complex topic that requires both labor economists and people from other fields to chip in on. But I think one of the challenges that I find interesting is, is the impact going to be equal? Is it going to be equally distributed across, for example, the entire United States or the entire world?
00:24:23
Speaker
Clearly, there are areas that may potentially benefit from AI because it improves productivity and it may lead to greater wealth, but it can also in the process lead to labor displacement. It could in the extreme case cause unemployment, of course, if people aren't able to retool and improve their skills so that they can work with these new AI tools and find employment opportunities.
00:24:49
Speaker
So it's a complex question that I think needs to take the whole economy into account. But if you try to just quantify, well, how large is this adjustment? Regardless of how this adjustment takes place, are we expected to experience it at a greater or smaller magnitude in smaller versus bigger cities? And I thought that the answer wasn't really obvious a priori. And here's why. So on one hand, we could say that, you know, big cities are
00:25:17
Speaker
the hub in which the creative class lives, and this is where a lot of creative work happens and where a lot of science and innovation and media production takes place and so on. So there are lots of creative jobs in big cities, and because creativity is so hard to automate, that should make big cities
00:25:39
Speaker
more resilient to these shocks. On the other hand, if you go all the way back to Adam Smith and the idea of the division of labor, the whole idea of the division of labor is that individuals become really good at one thing. So this is precisely what spurred urbanization in the first industrial revolution, because instead of having to train extremely well-skilled artisans,
00:26:03
Speaker
you bring a whole bunch of people and one person just makes pins and the other person just makes threads and the other person just puts the thread in the pin and so on. So even though the system is collectively more productive, individuals may be more automatable in terms of their tasks because they have very narrowly defined tasks. So it could be that on average there are more of these people in big cities and that this outweighs the
00:26:30
Speaker
role of the, you know, small creative class in big cities. So this is why the answer was not obvious to us, but when we did the analysis, we found that indeed larger cities are more resilient in relative terms. And we are now trying to understand why that is and what the composition of skills and jobs in larger cities is that makes them different. And do you have ideas on what that is right now, or are you still
00:26:58
Speaker
Yes, this research is still ongoing, but the preliminary findings are that basically in bigger cities, there is more production that requires social interaction and very advanced skills like scientific and engineering skills. So the idea is that in larger cities, people are better able to complement the machines because they have the technical knowledge.
00:27:22
Speaker
So they're able to use the new intelligent tools that are becoming available, but they also work in larger teams on more complex products and services. And as a result, social interaction and managing people are skills that are more difficult to automate as well. And I think this continues a process that we're already seeing now.
00:27:47
Speaker
Jobs in bigger cities and urban areas are better paid and they involve, you know, greater collaborative work than in smaller or rural areas.

AI and Social Divide

00:27:58
Speaker
So Josh, you've done a lot of work with the idea of us versus them. And especially as we're looking, in this country and others, at the political situation, it's increasingly polarized.
00:28:14
Speaker
And it's also increasingly polarized along this line of city versus smaller town. Do you anticipate some of what Iyad is talking about making the situation worse? Do you think there's any chance AI could actually help improve that?
00:28:27
Speaker
I certainly think we should be prepared for the possibility that it will make the situation worse. I think that's the most natural extrapolation, although I don't consider myself enough of an expert to have a definitive opinion about this. But the essential idea is that as technology advances, you can produce more and more value with less and less human input, although the human input that you need is more and more highly skilled. So if you look at something like
00:28:55
Speaker
TurboTax and other automated tax preparation systems. Before, you had lots and lots of accountants, and many of those accountants are being replaced by a smaller number of programmers, super-expert accountants, and people on the business side of these enterprises. And if that continues, then yes, you have more and more wealth
00:29:22
Speaker
being concentrated in the hands of the people whose high skill levels complement the technology, and there's less and less for people with lower skill levels to do. Not everybody agrees with that argument, but I think it's one that we ignore at our peril; it's at least plausible enough that we should be taking seriously the possibility that increased technology is going to drive up inequality economically and continue to create an even more stark contrast between the
00:29:52
Speaker
centers of innovation and technology-based business and the rest of the country and the world.
00:30:00
Speaker
And as we continue to develop AI, still looking at this sort of idea of us versus them, do you anticipate that AI itself would become a them? Or do you think it would be people working with AI versus people who don't have access to AI? Or do you envision other divisions basically forming? How do you see that playing out?
00:30:24
Speaker
Well, I think that the idea of the AI itself becoming the them, I mean, that is really a sort of science fiction kind of scenario. That's the Terminator sort of scenario. Perhaps there are more plausible versions of that. I am agnostic as to whether or not that could happen eventually, but this would involve advances in artificial intelligence that are beyond anything we understand right now.
00:30:49
Speaker
Whereas the problem that we were talking about earlier, that is to say, humans being divided into a technological, educated, and highly paid elite as one group, and then the larger group of people who are not doing as well financially, that us-them divide, you don't need to look into the future to see. You can see it right now.
00:31:12
Speaker
I would follow up on that by saying that, on us versus them, I don't think that the robots will be the them on their own, but I think the machines and the people who are very good at using the machines to their advantage,
00:31:28
Speaker
whether it's economic or otherwise, will collectively be a them. It's the people who are extremely tech savvy, who are using those machines to be more productive or to win wars and things like that. I think that is a more real possibility.
00:31:47
Speaker
So it doesn't really matter if the machines have so much agency in this regard, but there would be, I think, some sort of evolutionary race between human-machine collectives. I wonder if Josh agrees with this.
00:32:02
Speaker
I certainly think that that's possible, right? I mean, that is a tighter integration. In the distant, but maybe not as distant as some people think, future, if humans can enhance themselves in a very direct way, and we're talking about things like brain-machine interfaces and cognitive prostheses of various kinds, I think it's possible that people who are technologically enhanced could
00:32:25
Speaker
have a competitive advantage and set off a kind of economic arms race or perhaps even literal arms race of a kind that we haven't seen. Um, you know, I hesitate to say, Oh, that's definitely going to happen. I'm saying it's a possibility that makes a certain kind of sense. And do either of you have ideas on how we can continue to advance AI and address these issues, the sort of divisive issues that we're talking about

AI Regulation and Public Interest

00:32:51
Speaker
here? Um, or do you think they're just sort of bound to occur?
00:32:56
Speaker
I think there are two new tools at our disposal. I think one of them is experimentation and the other one is some kind of augmented regulation, so machine augmented regulation. So with experimentation, I think we need more openness to trying things out so that we know and understand the kinds of trade-offs that are being made by machines.
00:33:19
Speaker
Let's take the case of autonomous cars just as an example. If all cars have one single algorithm, and a certain number of pedestrians die and a certain number of cyclists die and a certain number of passengers die, we won't really understand whether there are trade-offs that are caused by particular features of the algorithms running those cars. Contrast this today with,
00:33:45
Speaker
you know, SUVs versus regular cars, or, for instance, cars with a bull bar in front of them. These bull bars are basically metallic bars in front of the car that increase safety for the passenger, especially, you know, in the case of a collision, but they have a disproportionate impact on other cars, and also on pedestrians and cyclists.
00:34:10
Speaker
And they're much more likely to kill them in the case of an accident. And as a result, by making this comparison, by identifying that cars with this physical feature, the bull bar, are actually worse for certain groups, it became clear that this
00:34:24
Speaker
trade-off was not acceptable, and many countries have banned them. For example, the UK, Australia, and many European countries have banned them, but the US hasn't, as far as I know. So if there were a similar trade-off being caused by a software feature, then we wouldn't even know unless we allowed for experimentation as well as monitoring. So if we looked at the data,
00:34:47
Speaker
to identify whether a particular algorithm is making cars very safe for customers but at the expense of a particular group, we could catch that. And the other tool was the idea of machine-augmented regulation: I think in some cases these systems are going to be so sophisticated, and the data is going to be so abundant, that we won't really be able to observe them and regulate them in time.
00:35:16
Speaker
Think of algorithmic trading programs that are causing flash crashes because they trade at sub-millisecond speeds, doing arbitrage against each other. Now, no human being is able to observe these things fast enough to intervene, but you could potentially insert another algorithm, a regulatory algorithm, or what some people have called an oversight algorithm, that will observe other AI systems in real time
00:35:44
Speaker
on our behalf to make sure that they behave. And Josh, did you have anything you wanted to add? Yeah, well, I mean, I think that there are sort of two general categories of strategies for making things go well. I mean, there are technical solutions to things. And then there's the broader social problem of having a system of governance that is generally designed and can be counted on to
00:36:11
Speaker
most of the time produce outcomes that are good for the public in general. Iyad gave some nice examples of technical solutions that may end up playing a very important role as things develop. Right now, I guess the thing that I'm most worried about is that if we don't get our politics in order, especially in the United States, we're not going to have a system in place that's going to be able to put the public's interest first. Ultimately, it's going to come down to
00:36:41
Speaker
the quality of the government that we have in place. Quality means having a government that distributes benefits to people in what we would consider a fair way and takes care to make sure that things don't go terribly wrong in unexpected ways and generally represents the interests of the people. I worry that instead of getting closer to that ideal, we're getting farther away.
00:37:05
Speaker
You know, I think we should be working on both of these paths in parallel. We should be developing technical solutions to more localized problems where you need an AI solution to solve a problem created by AI. But I also think we have to get back to basics when it comes to the fundamental principles of our democracy and preserving them. All right. So then the last question for both of you is as we move towards smarter and more ubiquitous AI,

Balancing AI's Impact on Society

00:37:35
Speaker
What worries you most, and what are you most excited about? I think the thing that I'm most concerned about is the effects on labor, and then the broader political and social ramifications of that. I think that there are even bigger things that you can worry about, the kinds of, you know,
00:37:55
Speaker
existential risks of machines taking over and things like that. And I take those more seriously than some people, but I also think that there's just a huge amount of uncertainty about whether we're, in our lifetimes at least, going to be dealing with those kinds of problems. But I'm pretty confident that a lot of labor is going to be displaced by artificial intelligence
00:38:16
Speaker
And I think it is going to be enormously politically and socially disruptive. And I think we need to plan now and start thinking, not just have everybody relying on their prejudices to say, this is what I think is most likely, and only preparing for that. Instead, we need to consider a range of possibilities and be prepared for the worst of them.
00:38:39
Speaker
But I think that the displacement of labor that's coming with self-driving cars, especially in the trucking industry, I think that's just going to be the first and most obvious place where millions of people are going to be out of work and it's not going to be clear what's going to replace it for them. And what are you excited about?
00:38:58
Speaker
Oh, and what am I excited about? I forgot about that part. I'm excited about the possibility of AI producing value for people in a way that has not been possible before on a large scale. We talked about medicine. Imagine if anywhere in the world that's connected to the internet, you could get the best possible medical diagnosis for whatever is ailing you.
00:39:23
Speaker
That would be an incredible life-saving thing. And I think that that's something that we could hope to see in our lifetimes. When I think about education, part of why education is so costly is because you can only get so much just from reading. You really need someone to interact with you and train you and guide you and answer your questions and say, well, you've got this part right, but what about that?
00:39:45
Speaker
And as AI teaching and learning systems get more sophisticated, I think it's possible that people could actually get very high quality educations with minimal human involvement. And that means that people all over the world could unlock their potential. And I think that that would be a wonderful transformative thing. So I'm very optimistic about the value that AI can produce, but I am also very concerned about the
00:40:11
Speaker
human value, and therefore human potential for making one's own livelihood, that it can displace. And Iyad, what do you think? What are you worried about and what are you excited about?

AI in Warfare and Resource Management

00:40:22
Speaker
So one of the things that I'm worried about is the way in which AI and specifically autonomous weapons are going to alter the calculus of war.
00:40:34
Speaker
So I think at the moment, in order to mobilize troops to war, to aggress on another nation for whatever it is, be it to acquire resources or to spread influence and so on, today you have to mobilize humans. You have to get political support from the electorate, you have to handle the very difficult
00:40:59
Speaker
you know, process of bringing back people in coffins and, you know, the impact that this has on the electorate. So I think this creates a big check on power, and it makes people think very hard about making these kinds of decisions. I think with AI, when you're able to wage wars with very little loss of life, especially if you're a very advanced nation that is at the forefront of this technology,
00:41:27
Speaker
I think you have disproportionate power. It's kind of like a nuclear weapon, but maybe more so, because it's much more customizable. It's not all or nothing, total annihilation or not. You could go in and start all sorts of wars everywhere, and all you have to do is just provide more resources,
00:41:48
Speaker
but you're acquiring more resources. So I think there's going to be a very interesting shift in the way that superpowers think about wars, and I worry that this might make them trigger-happy, and this may cause a new arms race and other problems. So I think a new social contract needs to be written so that this power is kept in check and there's a bit more thought that goes into this.
00:42:16
Speaker
On the other hand, I'm very excited about the abundance that will be created by AI technologies, because we're just going to optimize the use of our resources in many ways, in health and in transportation and energy consumption and so on. There are so many examples in recent years in which AI systems are able to discover ways in which even the smartest humans haven't been able to optimize
00:42:46
Speaker
energy consumption maximally in, you know, server farms, for example. But now, you know, recently DeepMind has done this for Google. And I think this is just the beginning. And so I think we'll have great abundance. We just need to learn how to share it as well. All right. So one final thought before I let you both go, this podcast is going live on

Introducing Shelly: AI for Horror Stories

00:43:09
Speaker
Halloween. So I want to end on a spooky note and quite conveniently,
00:43:14
Speaker
Iyad's group has created Shelley, which, if I'm understanding it correctly, is a Twitter chatbot that will help you craft scary ghost stories. And Shelley, I'm assuming, is a nod to Mary Shelley, who wrote Frankenstein, which is, of course, the most famous horror story about technology. So, Iyad, I was hoping you could tell us a bit about how Shelley works.
00:43:36
Speaker
Yes, well, I mean, this is our second attempt at doing something spooky for Halloween. Last year we launched the Nightmare Machine, which used deep neural networks and style-transfer algorithms to take ordinary photos and convert them into haunted houses
00:43:55
Speaker
and, you know, zombie-infested places. And this was quite interesting. It was a lot of fun. More recently, we've launched Shelley, which, you know, people can visit at shelley.ai, and it is named after Mary Shelley, who authored Frankenstein.
00:44:14
Speaker
And this is a neural network that generates text and it's been trained on a very large data set of over 100,000 short horror stories from a subreddit called NoSleep. So it's basically got a lot of human knowledge about what makes things spooky and scary.
00:44:35
Speaker
And the nice thing is that it generates parts of the story, and people can tweet back at it the continuation of the story, and then basically take turns with the AI to craft stories. And we feature those stories on the website afterwards. So I think, if I'm correct, this is the first collaborative human-AI horror-writing exercise ever.
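As a purely illustrative sketch of that turn-taking structure, and not Shelley's actual code, the loop below alternates machine-generated fragments with human-written continuations. The generate function is a hypothetical stand-in for the trained language model and the Twitter integration.

```python
# Illustrative sketch of collaborative, turn-taking story writing.
# generate() is a hypothetical placeholder for a model trained on horror stories.
import random

def generate(prompt: str) -> str:
    """Stand-in for a neural text generator; returns a canned spooky fragment."""
    fragments = [
        "The knocking started again, softer this time, from inside the wall.",
        "I realized the voice on the phone had been my own all along.",
        "When I looked back, the footprints led only toward me.",
    ]
    return random.choice(fragments)

def collaborative_story(human_turns, opening="It was past midnight when"):
    """Alternate machine fragments with human continuations, like replies to a bot."""
    story = [opening, generate(opening)]
    for human_text in human_turns:                 # e.g. continuations tweeted back
        story.append(human_text)
        story.append(generate(" ".join(story)))    # the bot picks up from the story so far
    return " ".join(story)

print(collaborative_story(["I told myself it was just the heating pipes."]))
```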
00:44:59
Speaker
Well, I think that's great. We will link to that on the site, and we'll also link to Moral Machine, which is your autonomous vehicles trolley problem platform. And Josh, if you don't mind, I'd love to link to your book as well. Is there anything else? That sounds great. All right. Well, thank you both so much for being here. This was a lot of fun. Oh, thanks for having us. Thank you. It was great. To learn more, visit futureoflife.org.