
#1 Thomas Telving: Empathy for the Robots

S1 E1 · AI and Technology Ethics Podcast

Thomas Telving has an MA in Philosophy and Political Science from the University of Southern Denmark. He is the author of several articles on the ethics of artificial intelligence and human-robot interaction. And he is the author of the recent book Killing Sophia: Consciousness, Empathy, and Reason in the Age of Intelligent Robots.

Some of the topics we discuss are human empathy, the high likelihood that humans will eventually feel empathy for humanoid robots, Thomas’ thought-experiment regarding how humans would respond if given an order to destroy a highly-realistic humanoid robot, the prospect of granting robots rights, and many other topics. We hope you enjoy the conversation.

Transcript

Introduction to Thomas Telving and 'Killing Sophia'

00:00:16
Speaker
Hi everyone, and welcome to the AI and Technology Ethics Podcast. This is Roberto. Today, Sam and I are interviewing Thomas Telving. Thomas Telving has an MA in Philosophy and Political Science from the University of Southern Denmark. He is the author of several articles on the ethics of artificial intelligence and human-robot interaction. And he is the author of the recent book, Killing Sophia: Consciousness, Empathy, and Reason in the Age of Intelligent Robots.

Ethics and Empathy Towards Robots

00:00:45
Speaker
Some of the topics we discuss are human empathy, the high likelihood that humans will eventually feel empathy for humanoid robots, Thomas's thought experiment regarding how humans would respond if given an order to destroy a highly realistic humanoid robot, the prospect of granting robots rights, and many other topics. We hope you enjoyed the conversation as much as we did.
00:01:26
Speaker
Okay, so Thomas, so the title of your book, Killing Sophia, references a thought experiment you give where someone is asked to put an outdated robot into a shredder. So can you kind of just walk us through the thought experiment? You know, you call it like the Sophia case. And you say that, you know, this thought experiment raises a big question. So yeah, can you kind of just walk us through the thought experiment and then kind of touch on like the big question that this raises?
00:01:54
Speaker
Yeah, yeah, let me try and do that. This whole thing started for me, I think, what is that, seven or eight years ago? And at that time, I didn't even know there was a research field called human-robot interaction. I just stumbled across some YouTube clips with Sophia the Robot. And I thought, okay, wow, that was kind of odd, you know? And I had this feeling that, okay, I'm a rational
00:02:21
Speaker
human being and I would know that if I disassembled Sophia, what would I find? I would find something that looked
00:02:30
Speaker
I don't know, like a microwave or a computer inside something, certainly not something that you would normally consider to be alive. So my rational mind would know that, but still, and I saw more clips and I saw interviews and I thought, okay, I know that, but I still have this feeling that Sophia is kind of different. I felt like sort of a connectivity
00:02:55
Speaker
I felt that I had empathy towards this machine and I found that kind of weird.

Future Scenarios with Humanoid Robots

00:03:05
Speaker
And then I wrote an op-ed for a Danish newspaper where I described this situation, and then, yeah, well, for the sake of entertainment, I think I made up sort of a thought experiment, having people imagine, think ahead, well, what, 20 years, and think that you're in your office space, and you have a,
00:03:29
Speaker
Yeah, many things may be the same, right? We probably don't have flying cars and jetpacks still, but we may have office androids that look pretty much like human beings and that are able to have conversations exactly like you would have with a human being, only smarter because they know all kinds of things. And they just walk around your office space and then they help you out and they know how you feel and they're
00:03:56
Speaker
cute and nice to be around. And then one day your boss comes in and says, yeah, Roberto, we've got a new office Android now. So Sophia, your robot friend, Sophia, she has to go. And I think you should follow her down into the shredder. You know, she weighs a little bit more than an average human being. You can't just carry her and you can't just switch her off, but
00:04:22
Speaker
and what's wrong with her, you can't fix with a software update. She smells a little bit like burnt silicone, maybe. So Roberto, please follow Sophia down the stairs and into the courtyard and dump her into the shredder. And then I encourage people to imagine this situation.
00:04:43
Speaker
It's not like a realistic situation, it's just a scenario. Imagine that you walk down the stairs with this robot woman that you kind of like and that you're used to having conversations with, and she responds totally like a human being. You bought the full simulation package for this one. Just as you can drive the fully autonomous Tesla car, you could also buy the
00:05:07
Speaker
full simulation package with your office android. And then you walk down and when Sophia sees what's happening, she starts crying and says, oh, no, no, I'm not going in there, am I? Roberta, I thought we were friends.

Human Attachment and Emotional Complexities

00:05:22
Speaker
And that's kind of just the thought experiment. And my question was, what kind of a moral situation is that? Is it even a moral situation? Because if I disassemble Sophia, it would be like looking into just a computer. And it would not be a moral situation throwing a computer into that shredder. But would it be a moral situation to throw Sophia into that shredder?
00:05:50
Speaker
And I kind of had a feeling that, yes, it would be somehow. For what reasons? Well, that's what I then started to dig into. And I found out that I was obviously not the first person to have this thought. There had been a lot of research going on. And then I started digging into that. So that's kind of where it started. And that's the Sophia case. Is it OK to throw Sophia into this shredder?
00:06:16
Speaker
Is it okay to kill Sophia? That was my working title. When I woke up this morning, I did not expect to be a subject in a thought experiment, but I think I accept this burden.
00:06:29
Speaker
And so I do want to, well, we'll get into the book, I suppose, and a little bit your solutions, how you're thinking about all this. So to recap, the fundamental question is, would you be able to kill Sophia? Would I be able to kill Sophia, since I'm to toss her into the shredder, right? Right. And I mean, personally, I have a hard time throwing away my old Apple HomePod, right? I mean, so I can already answer
00:06:58
Speaker
that I would have great difficulty doing so. And now maybe the next question is, is it rational? Is it moral? Right. Is this a morally relevant situation? So we'll get into all that. Yeah. Yeah. I mean, well, just real quick on that. Because, like, there's one issue of maybe the person becoming emotionally attached to Sophia and then feeling, I mean, just you talking about throwing away your old
00:07:25
Speaker
HomePod makes me think, there's one issue of, yeah, what's the likelihood that human beings become emotionally invested in these androids? And then there's a second issue of,
00:07:44
Speaker
given, and I feel like this is kind of where you go a lot in your book, Thomas, given the fact that we're likely to have these emotional connections to robots, given the way in which they're super human-like,
00:08:01
Speaker
What are we then likely to think about them? Because it might lead us to start thinking, oh, yeah, they really are like us. They're deserving of moral concern. It would literally be morally wrong for me to throw her in. Because it's one thing for it to be morally wrong to throw her into the shredder, and it's another thing for you to just feel emotionally sad about that. Anyway, I don't know. Any thoughts about that? No, I get it.

Design and Empathy in Robots

00:08:40
Speaker
But when you look at it, well, first of all, I talk about killing Sophia. Killing Sophia would obviously mean that Sophia was alive, which, that's a different question. But I could see two things at stake here. And the first thing is, well, would it hurt Sophia? Because one thing is that it would hurt Roberta.
00:08:55
Speaker
Well, I think it's different than Roberto's
00:09:04
Speaker
but would it also be a painful experience for Sophia to go into? And that poses two sort of different questions that are both hard to answer, but the first one
00:09:19
Speaker
No, let's start with Roberto's. I'm sorry, Roberta, this is you in character now. I accept. Deep, deep emotional attachment to this robot. What I did was I started looking at what
00:09:38
Speaker
What happens with this human empathy? Because that was what was at stake with me, that I sort of felt I had empathy towards Sophia, right? And that is what the thought experiment is meant to show. And the reactions, like I also write in my book, were kind of mixed. I mean, you would have some engineering types maybe saying, yeah, right, I could easily tip Sophia into this shredder. And others would say, ah, I don't know.
00:10:08
Speaker
depending on sort of how you looked at it. But when you look at how human beings form empathy, well, then it would be logical for us. It's not like a flaw that we have. It's not a bug. It's a human feature that we have empathy towards, yeah, well, beings, if you can say that, that look like us. I found an interesting French study, I think,
00:10:38
Speaker
with 4,000. It's an empirical study with 4,000 respondents answering or scoring pairs of animals in order to see what would their empathic preference be and what would their compassion be towards pairs of animals. So they'd have to say, okay, a squirrel versus
00:11:02
Speaker
a mouse and so on, to see, okay, which one of those two do you feel that it is easiest to sort of understand the inner life of, and which of those two would you let die if one of them had to die? Those were the two questions. And it turns out that, well, the closer the animal, the more genes we share with the animal,
00:11:32
Speaker
And the further away it is from us in time, from where we sort of diverged from it genetically. Evolutionarily. Yeah, evolutionarily. The less empathy. So the closer it gets to us, the more empathy. And a chimp is very high on this score.
00:11:51
Speaker
A cat is higher than a crocodile and so on. And it draws a very accurate picture of how this seems to work.

Empathy from Interaction with Machines

00:12:01
Speaker
And that tells us a few things. It tells us that it seems like empathy is a really, really basic feature of what it is to be human. And the other thing, okay, where would a humanoid robot
00:12:16
Speaker
be placed on this scale. And I say, well, it would probably be placed at the very top of the scale next to the human beings. Oh, sorry. I was just going to say, even though there's not strictly a genetic overlap between us and the humanoids, the thing is that
00:12:38
Speaker
they obviously will be created and designed to be very human-like. They look like there is a genetic overlap. Right, behavior is there too. Yeah, exactly. Yes.
00:12:53
Speaker
One thing I was going to bring up is that I feel like a big portion of it too is once empathy kicks in for something, we end up talking to it. So I am one of those engineering types, or was, coding all the time, and I knew that my software doesn't think or feel anything.
00:13:13
Speaker
I would talk to my computer when I'm coding. And if the program isn't working, I'd say, oh, come on, don't be like that. And that kind of stuff. And the same thing with my iPad and iPod or whatever I'm recycling. I talk to them. I use Siri all the time. I'm actually afraid to say it right now because it might start responding. And so because I'm talking to it so much, when it was time to get rid of it, that's when I actually felt some hesitation, weirdly.
00:13:43
Speaker
So I feel like talking or communicating might also have a role to play there with what makes us more empathic towards some of these robots, maybe.
00:13:51
Speaker
It does. There are a few things at stake here. And it's not only how it looks, it's also how it behaves. And it is also about, and this is based on, I take that from a few different studies, what story, what backstory you give the technology. So the backstory of this technology, I'll get into that a little bit more. And
00:14:19
Speaker
what kind of conversation you have and what it looks like. We could probably add more, but there are some interesting studies regarding those. If you take this story with the backstories, it was, I think, Kate Darling,
00:14:37
Speaker
an American researcher who made an interesting study with just these small Hexbug Nanos. They look a little bit like insects. Maybe they're kind of cute, but very, very simple
00:14:52
Speaker
robots crawling around, maybe able to respond to some outer stimuli. And she found that if people came into the lab and she said, well, try and smash one of those, people would do it more or less without hesitating. And then the control group would be told that, yeah, this is Frank, he's lived here for
00:15:18
Speaker
quite a long time now, and his favorite color is red and he loves tomatoes or whatever. Now try and smash Frank, and then people would hesitate far more than in the other situation, and she took that as an indication that the backstory means something. So that's one thing, and we have a few other things
00:15:40
Speaker
And it also has an indicator. Oh, sorry, Roberto. Oh, just because, that's also an indication of higher empathy. Like, I was just thinking, okay, actually, how do you tell whether someone has higher empathy for X over Y? And I guess, yeah, one of them is, you are more reluctant to harm X than Y. So if you have more empathy for a dolphin than an ant,
00:16:08
Speaker
part of that is cashed out in, like, I am more reluctant to harm the dolphin than the ant, right? I guess, is that how you, I don't know. That was part of the conclusion from the French study I mentioned, that you have the empathy score, which is how well you feel that you are able to understand what goes on inside the animal. Okay, so it's defined in terms of understanding the other person's
00:16:36
Speaker
understanding the mind, and then the compassion score is, which of those would you, well, kill? And those two overlap to a very high extent, so that the more you feel able to understand what goes on inside, the more hesitant you would be to hurt that
00:17:04
Speaker
And yeah, maybe just real quick, I mean, in your book you give a nice little illustration just to help people understand this idea of empathy. It's like, you have a case where, I don't know, did this actually happen? You saw your neighbor fall off, was it, fall off a ladder? Yeah, the ladder sort of tipped. Okay.

Consciousness and Moral Obligations

00:17:25
Speaker
Yeah. And so he fell and you kind of saw him in pain. And the idea of empathy is the fact that like,
00:17:32
Speaker
It's not just, well, I guess you kind of distinguish two parts of empathy. So one part of empathy is that you actually understand on a rational level what this person is going through. Like you appreciate that, hey, he just hit the ground from this ladder, and so he's probably experiencing pain within his inner life.
00:17:51
Speaker
Yeah, kind of able to mirror what goes on inside this person or being. Right. And then the second dimension is, but not only do you have like cognition of, okay, he's probably like, he's in pain right now. You also actually feel yourself a sort of sympathetic response of- Yes, that is what you would say creates kind of a moral pressure on you to do something about it.
00:18:21
Speaker
like you would if you feel pain yourself, you would feel a pressure to try to make that pain go away if you experienced that in another person or in an animal
00:18:32
Speaker
Well, then to the extent that you have empathy, then you are likely to also feel inclined to try and help that person. I believe that this plays a crucial role in the foundation of morals as such, and that would also go for the way we would behave towards robots. Right.
00:19:00
Speaker
Yeah, Roberto, yes. Well, I was going to say, it seems to be, so you mentioned earlier, there's a compassion and the understanding overlap. And that basically is suggesting to me that we have to almost model them, right, both, you know,
00:19:19
Speaker
imagine what they're thinking and imagine what they're feeling. And it's only the conjunction of these two together that will make it so that someone that you can highly imagine what they're thinking and highly imagine what they're feeling will be preferred over someone who you can less so imagine what they're thinking and what they're feeling. And I don't think it's going to be some kind of a sum that you can easily
00:19:42
Speaker
you know, add together. Oh, I relate to this person 80%. Right. It won't be like that. But it will be some kind of non-conscious preference for them. So it might be something we can't quite articulate. Is that, am I getting that right? I don't know.
00:19:59
Speaker
To my knowledge, we don't have any quantified measures for that. There might be research I don't know of, but I don't know of that. There have been efforts to try to sort of quantify
00:20:13
Speaker
pain, different levels and aspects of pain, but is that what you're referring to? No, I don't think so. Most will be intuitively familiar with what goes on, I think. I feel an empathic pressure on me when I see another person in pain.
00:20:40
Speaker
So maybe, maybe at this point we can bring in the consciousness piece a little bit more. I mean, you've already touched on it, but like, I mean, you know, crucial to this whole thing is like, okay, so, you know, um, I mean, in your book, Thomas, you kind of predict that in the future as, um, uh, as robots become more human-like, more lifelike,
00:21:06
Speaker
We are going to experience, you know, more empathy toward them. We're going to experience. Yeah, like some of that moral pressure, for example. So, for example, we might experience some kind of that pressure when we're asked to, you know, put Sophia into the shredder, right? Yeah. But on the other hand, another crucial piece of your book is that like, look,
00:21:27
Speaker
If Sophia is not actually conscious, if Sophia has no inner life, if she is just like a philosophical zombie, so to speak, you know, behaving just like we behave, but there's actually nothing going on on the inside, then it really does not make sense to say that we are
00:21:50
Speaker
Well, okay. Yeah. So maybe, so one thought is that if she's not conscious, then definitely it's not morally wrong to put her in the shredder because it's like.
00:22:01
Speaker
If she's not conscious, why would it be wrong? But anyway, yeah, what are your thoughts on this? Yeah, okay. Yeah, because it's a little bit more complicated, at least because there are actually opponents to that view. But the common sense view on that would be that, okay, if it doesn't hurt Sophia, if she's not in pain, well, is it a moral situation at all then? The question here is then,
00:22:27
Speaker
the first one. Okay, so we have this empathy, which gives us signs that something is going on and makes us feel uncomfortable if we are asked to put her in what looks like pain. So that's one thing. But then, okay, so if this was a mammal of some sort, we would not really
00:22:48
Speaker
in our time be doubting if it is in pain. In Descartes' time, there was a lot of doubt. That's a different argument we'll turn to. But with a robot, then it's like, okay, well, she looks like she's in pain. She talks like she's in pain. She screams like she's in pain. But is she in pain? Well, if it was a dog,
00:23:11
Speaker
we wouldn't be in such doubt. But if it's a robot, well, then what? Well, okay, the way that we build large language models now, and if they are equipped into a robot like Sophia, which exists today, we think, okay, hardly, she's probably not in pain. But because of the philosophical problem, the problem of other minds, our
00:23:35
Speaker
our access to that knowledge of her inner life, we don't have any access. Because the inner life she might not have, or the inner life that both of you have,
00:23:51
Speaker
That's the first person experience only and if I had direct access to your inner life say that if you spilled a cup of hot coffee down your leg and you were in pain, that's your pain. If I had access to your experience,
00:24:10
Speaker
well, then we would say, well, then it's no longer just your experience, then it's actually my experience, then the pain would be mine, and then it's no longer the same thing. But the thing about conscious experience is that it's first person experience. So that's the
00:24:26
Speaker
Yeah, that's the knowledge part of it. And then you could have the ontological part, the problem of consciousness. How does it arise? What gives rise to consciousness, which is a huge debate as well. Many disagree about it. But would Sophia have that? Well, we don't really have any answer.
00:24:44
Speaker
And I think that most people today would agree that, well, it probably does not, but the problem is like, well, yes, but we don't know. We cannot prove it. We can say she's unlikely to have it, but we don't know. And that is a problem. It's a small problem now.
00:25:06
Speaker
And I imagine that this would be a big problem in the future as they grow more alike and as the technology in it becomes more advanced and less easy to understand. Yeah, it's like it's just kind of interesting. It's like there's there's something we have uncertainty about and it actually ends up having this sort of really significant
00:25:33
Speaker
It plays a significant role in the robot context. So namely, you know, like you're saying, it's just a fact about consciousness that I don't have direct
00:25:45
Speaker
experience of your stream of consciousness, right? Like I am experiencing my stream of consciousness. And there's a whole question, like you brought up with the problem of other minds, how do I justify or know that you are having your own stream of consciousness,
00:26:08
Speaker
Because I don't have direct access to it. I don't have direct perception of it. And of course, you can find different views upon this. But I think that the way David Chalmers described this problem, that's what he calls the hard problem of consciousness.
00:26:27
Speaker
Since he described that, I haven't seen any good solutions to it. And I think, as far as I understand, a researcher like Anil Seth would say the same thing, that the hard problem of consciousness, well, that is a problem. And I think that most philosophers and also neuroscientists that I've heard about agree that this is a real problem. It's not just something that philosophers struggle with.
00:26:53
Speaker
Yeah, there's a problem there. And for animals and for human beings, we say, okay, fair enough. We can live with that. But for robots, especially in the future, it becomes kind of different because how do we deal with that? Because then we actually don't know. They could be sentient, those beings, right? Right.
00:27:16
Speaker
Right. Right. So it's like, sorry, Roberto, you go with your question. I was just going to note, after this, maybe we should go on to robot rights, because that'll probably be a big chunk of the conversation. Sure. Sure. Yeah. Yeah. I was just going to say, I mean, so it's like when it comes to animals, other people, the fact that we don't have like a hundred percent certainty as to whether
00:27:45
Speaker
they're conscious. Yeah, maybe it's like not really such a big deal in some sense, but when it comes to a robot, it's like we would really like to know whether the thing, we would like to have direct, certain knowledge, whether it's conscious, because that would determine how we should
00:28:11
Speaker
think about treating it and whatnot. Because, I mean, maybe one kind of issue we could get into now is like, you know, someone might think, okay, we can't have certainty whether Sophia
00:28:23
Speaker
is conscious.

Arguments for Treating Robots as Conscious

00:28:25
Speaker
So let's play it safe and just treat her as though she's conscious and then we'll be like sure not to violate any like moral norm related to like killing a conscious being or something. So yeah, maybe we could get into a little bit of what you think about that response to the issue.
00:28:45
Speaker
Yeah, yeah, sure. Well, I should add first because just after my book was issued, I think a lot of people came to me and said,
00:28:57
Speaker
really, Thomas, do we need to discuss this? It's kind of like an odd thing to be discussing whether robots or just AI models in general have consciousness. But just after that, do you remember the Blake Lemoine story? Over at Google, right? Who got fired for
00:29:20
Speaker
claiming that the LaMDA model was sentient. I think that was a clear example of the problem of other minds being a real problem, because it's not like the top management of Google, who disagreed with him, and then him, Blake Lemoine,
00:29:39
Speaker
could just open it up and see, is there anything in there? No, because they couldn't. And that was because of the problem of other minds. And I think that's a great example of it being a real problem. And after that, you've been seeing a lot of examples of it, also users of the dating model Replika falling in love with AI models and believing that they are alive
00:30:05
Speaker
I believe that the last thing Blake Lemoine wrote to his colleagues, that's what I read about it at least, was, take good care of LaMDA while I'm away. Don't shut her down because she's afraid of that, right? Okay, so that's just a little look into this future that I'm talking about, that no matter what we think
00:30:29
Speaker
as rational beings, that this is a language model. It's trained on human language. I mean, of course it talks like a human. What else would it do? We taught it to do that. So when it talks about feelings, and if a robot talks about feelings, well, obviously it does, because it's trained on our language. But
00:30:50
Speaker
We can know all that but then still it seems that our empathy sort of overrules all that and just makes us feel something towards it still and makes us think it's alive, even though we might know with our rational mind that it's not. And what I believe is that this inclination we have towards that is so strong that at some point it will be hard for us to
00:31:19
Speaker
to just ignore it, because we'll be in doubt. Perhaps you know this, and if the listeners are philosophers as well, they would probably know Descartes, who performed what is called vivisection on living animals, like cutting up a living animal to see what goes on inside, and he could do that
00:31:42
Speaker
without any moral problem because he believed that these animals were just mechanisms.

Evolving Societal Views on AI

00:31:51
Speaker
They did not have a soul. That's not entirely the same as consciousness, but it's a similar term from his time.
00:32:00
Speaker
So, the reason that he could actually think that, he had his arguments, I won't go into that, but they were good arguments for his time. The reason that he could think that was also because of the problem of other minds, because he couldn't just see it.
00:32:16
Speaker
At that time, his opponent was Jeremy Bentham, the British philosopher, who said, it's not about whether this animal can talk or reason, it's about whether this animal can suffer. Is it able to feel pain?
00:32:31
Speaker
So that discussion back then, well, it kind of turned out to the benefit of the animals, to some extent. At least no one today, I think, would believe that you could just cut the leg off a dog and think, ah, it doesn't feel anything. I think most of us would think it probably does. For the record, we're all nodding. For all the listeners, we're all nodding that you're not allowed to cut off the legs of dogs.
00:32:56
Speaker
Yes, for the record. No animals have been harmed during the recording of this podcast. I think that we will see some of the same happening with advanced AI models and human-like technology of various sorts, especially probably things like humanoid robots, but it could also be robotic dogs or whatever.
00:33:24
Speaker
funny things might come in the future. And I think it will be kind of hard to avoid this process, that we will simply find that these are morally relevant. We can't just say they are not morally relevant.

The Debate on Robot Rights

00:33:42
Speaker
And that's why I think that at some stage, we would probably start considering, should they have some kind of protection from harm? Should they have some kind of rights maybe? Yeah, so that's basically the argument of my book. Not that I think it's a particularly good idea, I just think it'll be hard for us to avoid that movement.
00:34:08
Speaker
I love the inductive aspect of the argument. It's saying, well, we can't prove or know with moral certainty that robots aren't conscious. It certainly seems like they express mental states sometimes. Their behavior is consistent with feeling pain or whatever once they get to that level.
00:34:29
Speaker
And so there's something very intuitive about this argument. It just seems like that's something that can happen. I do wonder if we should talk a little more specifically about what you think these robot rights might end up looking like.
00:34:48
Speaker
Well, I think there's some analogies that you might throw in there and maybe the cat chasing its own tail analogy to help us think about that. So I'll just kind of send it your way to see if you can illuminate our minds with that. Thanks. Well, I'm no expert on rights in general. I use it just like a broad term for saying that
00:35:12
Speaker
a kind of protection like the one that human beings have. And that's what I think will happen. Yeah, the cat example, that's actually more, for listeners, I use as a metaphor the Puss in Boots character from the animation movie, because it has this thing that it's a really cool cat, you know, it can fight and it can
00:35:39
Speaker
I think it quotes Shakespeare and it's just a really cool and very human-like cat. But then sometimes, all of a sudden, when it sees a little bit of its tail, it just starts chasing its tail around in circles and just snaps out of its rationality and becomes all instinct.
00:35:56
Speaker
And then just slightly after that, it returns to being a normal, super cool cat again. I use that as a picture of the way I felt, and the way that I see people feeling when they meet human-like technology, that, yeah, we're so rational, and we think we're so rational.
00:36:14
Speaker
But then when we meet this kind of technology, we act like Puss in Boots chasing its own tail, without really noticing it, that now we enter a different form of ourselves, namely the form of being empathic human beings rather than rational human beings. And we don't see that happening. And that's part of what I think will
00:36:37
Speaker
just come into us. And that's part of the reason why I think that we will start having these robot rights discussions, even though we may know very well that, come on, this large language model doesn't have any consciousness. I think we'll do it anyway. Yeah. So the idea is, okay, maybe we can't have certainty about whether a robot is conscious or not, but
00:37:06
Speaker
we might have highly probable evidence that it's not conscious, but still, if the robot is super human-like, your thought is that our instinct will kind of take over.
00:37:24
Speaker
And just as Puss in Boots, even though it's super clever, still can't help it, its instinct takes over and it starts chasing its own tail, so too, we might have highly probable evidence that this thing is probably not conscious, but if it's lifelike enough, if it's human-like enough, there's going to be a strong inclination
00:37:50
Speaker
to think, yeah, it deserves some kind of like protected moral status that you're talking about. Is that basically? Yeah, exactly. Yes. And so if that is the case, though, what would be bad about that? Yes, what would be bad about it? Well,
00:38:09
Speaker
You can imagine that something would be absurd about it, but it's a dilemma, of course, because if it does have consciousness, well, then we should treat it right. There can also be other arguments for treating it kindly, other than it itself being able to feel pain, but let's not go into that right now. But bad aspects of that. Well, imagine that you have a situation where you have to prioritize resources between a human being that you know
00:38:39
Speaker
is sentient and able to feel pain and then a robot that you are kind of in doubt and then you have to prioritize. What would you prioritize then? Would you prioritize the human being or the robot? Well, the human being, I suppose, but what if the robot feels exactly the same? Well, then it should be entitled to the same kind of protection, I would think.

Dangers of Assuming Robot Consciousness

00:39:04
Speaker
But then again, what if it doesn't feel anything and we protect it?
00:39:09
Speaker
on par with a human being, that would be kind of, that would be a wrong situation. You could also imagine, what if these robots kind of get outdated and we don't throw them into a shredder, but instead put them in a nursing home, and we have robots
00:39:28
Speaker
nursing other robots in robot nursing homes, that would just be a gigantic waste of time and energy. I mean, what would that be? A whole lot of nothing going on. Nobody really experiencing anything there, just a machine running. And we look at it and we think, yeah,
00:39:47
Speaker
they are alive and they are sentient, so they should be protected. But if they're not sentient, it would just be kind of stupid. I mean, I don't know if this scenario goes anywhere, but it's something that you could see happening because you see technologists and techno-optimists fantasizing about AI being kind of the next evolutionary step of mankind entering the universe and
00:40:17
Speaker
maybe inhabiting other planets with all the intelligence and blah, blah, blah. And that is all good and fine, but if none of that is sentient, what is it then? What's the purpose of that then if it doesn't experience anything? I mean, this is very futuristic scenarios, but that's what you see people discussing.
00:40:40
Speaker
very seriously these scenarios and then I think it becomes relevant to discuss consciousness.
00:40:48
Speaker
I wonder if the technology can get such that there will be this compulsion in some people to say, oh, we should now try to upload our consciousness to the cloud or whatever, or to a robot, and live forever, or at least well past our natural human limits. And then so people are willingly uploading their consciousness, they think, to a robot. And really, they just kill themselves and nothing is really happening.
00:41:17
Speaker
Yeah. And then you could say, OK, Roberto, now you're alive inside the computer. We uploaded your brain. So is it OK if we sort of get rid of your human physical body now? And then the Roberta inside, it's always you, I'm sorry about that, the Roberta inside there,
00:41:39
Speaker
inside the computer would say, yeah, that's fine. Let go of my physical body now. And what would the physical Roberto say? Oh, no, no, no, I don't think so. Let go of the computer one instead. But the problem is that, because of the problem of other minds,
00:41:55
Speaker
You cannot really know if it's okay to switch off that computer. That's another instance of this. And if people really think that they can upload themselves to a computer and live forever, they may be very, very wrong. But their relatives may never find out if they are wrong because of the problem of other minds. So they could just keep on having conversations with this computer model.
00:42:19
Speaker
for eternity, but no one really experiences anything inside that computer. That's a different scenario, though, that makes all this absurd. Right. Yeah. Another scenario that you mentioned in the book, which I thought was good to bring up, is thinking about, okay, if we actually protected
00:42:44
Speaker
robots with some kind of, something akin to human rights, then presumably people would be deserving of punishment if they were to harm, quote unquote, a robot. And maybe then you end up with a situation where a guy is sent to prison for 15 years for, you know, quote unquote, killing a robot. And again, you know, like you said, we're not saying
00:43:13
Speaker
You know, I guess it will still stand that we're not a hundred percent certain, but potentially this thing totally has no conscious life. The lights are off on the inside. It has no inner world. It's not feeling anything. And we're putting a guy in prison, you know, for some years for quote unquote harming this thing that actually there's nothing

Consciousness vs. Behavior in Ethics

00:43:36
Speaker
going on. So basically it seems like once you kind of just think about the implications like of granting
00:43:43
Speaker
robots rights, you realize that it's not as though
00:43:46
Speaker
granting them rights is this sort of safe solution that avoids the potential downside of accidentally killing something that actually is conscious, anyway. I agree. I agree on that. It's not a safe solution, because there might be some serious downsides to that as well. Right. So some say that consciousness is not what we should be looking at. We should rather be looking at
00:44:16
Speaker
behavior. If it behaves like it's human, and if people are attached to it and feel that it has a value, then it has a value. There might be some truth to that argument as well, but still, saying that consciousness is not important, I can't
00:44:40
Speaker
I don't buy that because that will always be different. You cannot prioritize two entities, a sentient or conscious entity and a non-conscious entity on the same level. You would have a different moral attitude towards them.
00:44:58
Speaker
Just have a sentient mouse and then have a robot mouse and you cannot distinguish them. They look exactly the same but you know that one is sentient, the other is not. Which one would you rather cut off the tail of?
00:45:16
Speaker
What would be most morally wrong? You can't say that it's just the same. I think intuitively, everybody would say, okay, well, it's got to be worse for the one feeling the pain, right? So it does mean something and you can't just ignore it. It's a problem because of the problem of other minds, and that problem will grow, I'm afraid. Right. Yeah. I think those kinds of cases that you just presented, those kinds of thought experiments where you imagine
00:45:43
Speaker
basically, two things that are entirely identical, the two mice, except one lacks consciousness. You know, one is some kind of robotic mouse lacking consciousness. And it's almost, yeah, the intuition is overwhelming that, of course, you need to grant higher respect to the thing that is conscious, which seems to just
00:46:10
Speaker
kind of prove that consciousness is certainly relevant to morality. Now, it might, yeah, it might even be the most important thing, or be, like, necessary and sufficient for moral status, and, anyway, but yeah, at least it's super important. And I mean, I know Shelly Kagan has something where he tries to imagine a case where, like,
00:46:38
Speaker
maybe something could be rational and have interests and wishes without being conscious and that would give it some kind of moral standing. I don't really remember it exactly, but I don't know. So people, yeah, obviously there's some debate about how far, if consciousness is like the only relevant thing, but certainly
00:46:59
Speaker
It's a morally relevant feature. Anyway. Yeah. Yeah. Yes, I agree. And it's complex, of course, but you can't rule it out as being necessary. I simply don't think you can. Even though, I mean, you have serious and very clever people claiming that, I can't. I don't agree with them.

Social Implications of Robot Abuse

00:47:27
Speaker
Right. Yeah. Yeah. I'm sympathetic to your position. Maybe something we could bring up real quick is just, okay, so on the one hand, you have this negative position where it's like, no, we should not grant rights to robots simply because of this empathic, instinctual attachment or something like that.
00:47:56
Speaker
But on the other hand, you do talk about cases where it's like, okay, this seems to show that maybe we should prohibit certain types of treatment
00:48:07
Speaker
of robots. In other words, just because we don't grant them rights doesn't mean, okay, therefore it's totally fine to just do whatever you want to robots, right? So anyway, can you kind of flesh out that idea? Yeah, I'm inspired by a Kantian argument, obviously not about robots, but just about beautiful things, that it can be morally wrong to harm and damage beautiful things.
00:48:35
Speaker
And yes, it can to the extent that it may uproot human morality, it may harden us to do that. So if we get used to kicking and slamming,
00:48:50
Speaker
something that looks like a human being but happens to be a robot, and okay, well, it may not feel a thing, but it acts like it feels something and it talks like it feels something. What would that do to our own morality? Well, probably something. Would it not harden us? It's easy to imagine that it would. And you could also take it out into the social sphere and say, okay, imagine, I think I describe
00:49:20
Speaker
a small situation in my book. You imagine a man walking with a humanoid robot that would be kind of like a kid, a 12-year-old kid or something. And that robot kid is carrying his shopping bags, and then it drops one of them. And he just starts kicking that robot. But it's just a robot. So I mean, that's all right, I suppose. But what if you came walking there with your 12-year-old daughter, and she saw that? What would that feel like to her?
00:49:49
Speaker
And in that situation, you could say, okay, well, there's no consciousness here. There's no pain here. But you cannot say that it is not a moral situation. It is a moral situation because it does something to your daughter in this case, because she sees it and feels that that is very unpleasant to witness. So that's just a different aspect of it, also saying that, well, it's not all about consciousness. It's about a lot of things.
00:50:18
Speaker
And that's why I think that, well, while I don't think that rights would be a good solution as far as I can see, it might be a good idea to have other protective laws in order to avoid a situation like the one I just described. What those should be, I mean, I haven't fleshed that out, but I think that would be something worth considering at least.

Credulity and Critical Thinking with AI

00:50:48
Speaker
So I have a question for you, Thomas. But just so that you know, we're on the last leg here of the hour. So I think we can really let our hair down and just speculate wildly. I'm kidding. But I'm going to pose to you a question that you didn't cover. So don't feel pressured to give a terribly coherent answer. But I do want to get a feel for what you think about this. One of the things that I worry about, I think maybe both Sam and I worry a lot about when it comes to AI,
00:51:15
Speaker
is how credulous humans are when it comes to the output of an AI model. For example, there was a study where it was a mock trial, just basically two sets of participants and one of the settings
00:51:39
Speaker
You know, they were just given a regular, you know, a traditional trial of someone and they ask them, is this person guilty or not? And then the other one, it was the same exact argument, except that they had, you know, some fancy, you know, neuroscience printouts with like a diagram of the brain or whatever. And just the very fact that there was some brain stuff in there made more of those people, more of those participants in that experiment say, oh, yeah, that person's guilty.
00:52:09
Speaker
or whatever it is that they were trying to get there, right? Same thing happens with AI when you say, hey, this is an AI model that gave the solution to us. People tend to be pretty credulous, believe pretty readily that, oh, this must be better than what humans do. And so I'm wondering,
00:52:27
Speaker
if we get really sophisticated humanoid robots and they are having these hallucinations that we've all heard about, right? Their model isn't actually producing good inferences, they're just hallucinating. When it's something like ChatGPT and I ask it to do a truth table or some kind of proof in first-order logic and it messes up, I instantly see it and I'm like, this is wrong, you don't know how to do logic.
00:52:56
Speaker
But now I'm wondering, if a humanoid robot, and we can add to it, it's attractive and it's tall, because we have a preference for tall people for whatever reason, if a humanoid robot is spitting out these hallucinations and then getting defensive, what do you mean, this is right, do you think this might actually aggravate this problem of bias in AI and hallucination in AI and that kind of stuff?
00:53:24
Speaker
Well, I think that there are studies showing with some certainty that anthropomorphic features strengthen the credibility of an AI model. So we seem to tend to believe it more if it has anthropomorphic features. So if you put an
00:53:50
Speaker
Sorry, I just heard a noise. So if you have an advanced language model in like an embodied artificial intelligence, then it's likely to have more credibility. And that would also
00:54:07
Speaker
Well, maybe give us bigger problems with the hallucination or when something goes wrong, well, then we might be able to believe it more still because it adds on to this credibility. Was that somewhere where we were heading? Right. Well, what I'm really concerned about is that the
00:54:27
Speaker
the realness, the human-like qualities of humanoid robots might outpace the validity of their inferences, such that they're basically still roughly around where they are now, but they become, you know, by an order of magnitude, more credible, right? They really look like us and they get defensive. And so that's what I'm wondering, that if these things don't, you know, develop at the same pace, that they actually get better at making inferences.
00:54:55
Speaker
But they don't get better at making inferences, and they only get better at convincing us that they did. I have a feeling that that might be a real risk. Yeah, I can see that. Well, the way it goes now, there's a lot of money being put into robotics and also humanoid robots. That's really a huge area of research right now. But still, it seems like the language models are moving at a higher pace.
00:55:25
Speaker
I suppose that the success of those and the elimination of the hallucination problem will move faster than the creation of humanoid robots. That's what I would tend to think, and in that way the problem might not really be as pressing as you suggest, but I can only speculate, of course.
00:55:51
Speaker
Speculation is okay at this point, yeah.

Conclusion and Reflection

00:55:54
Speaker
Yeah. So I think as we're moving to wrap up here, I think we should ask, so we've covered obviously only a small subset of the stuff in your book and we recommend everyone go out and buy it and explore the whole thing, first of all.
00:56:13
Speaker
But second, we just wanted to make sure: did we ask everything that you think is important about your book, or are there any other important aspects that you really want to kind of send home to the listener before we sign off?
00:56:26
Speaker
I think you covered it pretty well. They're simple points in the book, really, but I fleshed them out with some arguments that I think not everybody is aware of. But no, I think you covered it very well, and thank you for that, and thank you for inviting me also. It's been a pleasure talking to both of you.
00:56:57
Speaker
Thanks everyone for tuning into the AI and Technology Ethics podcast. If you found the content interesting or important, please share it with your social networks. It would help us out a lot. The music you're listening to is by The Missing Shade of Blue, which is basically just me. We'll be back next month with a fresh new episode. Until then, be safe, my friends.