#25 Pilar López-Cantero: The Ethics of Breakup Chatbots

AITEC Podcast

What if your ex never really left—because you trained a chatbot to be them? In this episode of the AITEC Podcast, we’re joined by philosopher Pilar López-Cantero to explore her provocative article, The Ethics of Breakup Chatbots. From the haunting potential of AI relationships to the dangers of narrative stagnation, we dive into what it means to love, let go, and maybe linger too long—with a machine. Are these bots helping us heal, or are they shaping a lonelier, more controllable kind of intimacy?

For more info, visit ethicscircle.org

Transcript

Introducing Pilar Lopez-Cantero and Her Work

00:00:17
Speaker
Hi everyone, and welcome back to the AITEC Podcast. Today we're talking with philosopher Pilar Lopez-Cantero about her new article, The Ethics of Breakup Chatbots.
00:00:29
Speaker
Pilar is a Marie Skłodowska-Curie fellow at the University of Antwerp working in practical philosophy. She's written extensively on love, grief, narrative identity, and the ethics of emerging technologies. Pilar, thanks for coming on the show.
00:00:47
Speaker
Thank you for inviting me.

Okay, let's start with some questions just to get the ball rolling about yourself. Can you tell us where you grew up, and how you got into philosophy in general, and then this kind of philosophy in particular?
00:01:07
Speaker
Right, so actually those last two questions have the same answer, because funnily enough, I got into philosophy through breakups. I grew up in the south of Spain, in a region called Andalusia. I was a journalist for a long time, but I was really disenchanted with journalism.
00:01:25
Speaker
And I love telling this story: I was in the middle of a really painful breakup, and then I got an ad, an online advertisement, for a popular philosophy book. I read the book and I thought, hmm,
00:01:36
Speaker
I could think like these guys (it was a book only about men), I could apply these ideas to breakups and understand why this breakup I was going through hurt so much. So from there, I started reading more philosophy, I did my master's, I did my PhD.
00:01:53
Speaker
So from the beginning, I was very much interested in the philosophy of love and the philosophy of the end of love.

The Philosophy of Love: History and Evolution

00:02:01
Speaker
And this is basically ancient in a way, right? People have been philosophizing about love since the very start, with Plato and all that. So, for those of us who aren't too deep into this literature: is the literature very contemporary, or is it also very ancient? Where do you read?

So, as you said, people have been talking about love forever. There's a fantastic book called Love: A History by philosopher Simon May that tells the whole history of philosophers who have been talking about love. And as you say, in the Western tradition that goes back to Plato, back to Aristotle. Then we have all the Christian thinkers like Aquinas talking about love as well, and Kierkegaard, who I have to say is my personal enemy, but we can talk about that another time. Then in the 20th century, obviously in feminist philosophy, we have Simone de Beauvoir talking about relationships, and Sartre a little bit as well.
00:03:03
Speaker
But in analytic philosophy, which is where I am more placed, there was a resurgence of the philosophy of love from the perspective of thinking about reasons: what reasons do we have to love the people we love? That was a big topic in the 20th century. And now we have a more diverse picture: we talk about falling in love, about asexuality, about polyamory, about breakups, about divorce, about the limits between friendship and love. The philosophy of love has changed a lot in the last 10 years, I would say.
00:03:37
Speaker
Interesting. It's really cool. Yeah, I've come across that May book. I've never pulled the trigger, though, but that's

What Are Breakup Chatbots?

00:03:44
Speaker
cool. So, the work we're talking about today, The Ethics of Breakup Chatbots, provides a normative analysis of breakup chatbots. So can you tell us: what's a normative analysis? And then, more importantly, what are breakup chatbots?
00:04:03
Speaker
Right. So, to put it briefly: you have a descriptive analysis, which would tell us what these things are, and a normative analysis, which goes into the morally salient features of the use, design, and sheer existence of these kinds of technologies.
00:04:24
Speaker
So it's looking into what matters morally, and what also matters politically, for justice. "Normative analysis" is a shorthand for the ethics and politics of breakup chatbots. And I kind of made up this term, breakup chatbot, so to speak; it's not something you can find out there in the wild being discussed as such. As I say in the paper, there is a general way in which people are using LLM-powered technologies during breakups, for example to rehearse breakups.
00:04:58
Speaker
Some people are also asking for advice, "should I break up with my partner?", which is rather worrying, I would say. There are different ways of using it, and I go into them in the paper, but what I choose to call breakup chatbots are the kinds of technologies that allow you to create a digital duplicate of your ex-partner, so you can continue talking to the chatbot as if it were your ex-partner.
00:05:28
Speaker
Interesting. And so that might include giving the chatbot a lot of data about your relationship and then asking the chatbot to pretend to be your ex-partner, because that could...
00:05:48
Speaker
Yeah, it could. The amount of data you input into the chatbot can vary. You could just say: listen, I broke up with my partner; they're a bit grumpy and they love painting. You give a generic description and say, talk to me like you're them.
00:06:06
Speaker
And you can just use that very coarse-grained description. But you could go to the point of actually uploading more detailed data, like message history and emails, and then you can also refine the answers. When the chatbot says, "no, I don't think you should do that," you can say, "well, my partner wouldn't talk like that; can you talk in this other tone?" So you can refine it that way and, quote unquote, train the chatbot to be more similar to your ex.
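For readers who want to picture the setup Pilar describes, here is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the persona text, model name, and refinement lines are illustrative stand-ins, not any real product's interface. The coarse-grained persona goes in a system message, and "training" the duplicate is just layering corrections into the conversation history.

```python
# Minimal sketch of a persona chatbot, assuming the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Coarse-grained persona: a generic description plus an instruction to
# stay in character -- the "talk to me like you're them" pattern.
persona = (
    "Role-play as my ex-partner. They are a bit grumpy, love painting, "
    "and write in short, dry sentences. Stay in character."
)

history = [{"role": "system", "content": persona}]

def say(user_text: str) -> str:
    """Send one user turn and keep the running conversation history."""
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(say("I still don't understand why we broke up."))
# "Refining" the duplicate is just more instructions in the same thread:
print(say("My ex wouldn't say it like that. Be blunter and less formal."))
```

Because every correction simply accumulates in the same conversation, the "duplicate" is shaped entirely on the user's terms, which is the design feature the discussion below returns to.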
00:06:38
Speaker
Okay,

Ethical Concerns with Breakup Chatbots

00:06:39
Speaker
great. And you've already touched on this a little bit, but are breakup chatbots popular today?

Well, in all honesty, I only have anecdotal evidence coming from online forums, because there is not really a study, at least not one that I have found, saying: well, this is how people are using these chatbots after breakups. Maybe some people are doing these studies as we speak, because this is a very new technology, but I couldn't see anything on how prevalent they are. But I thought, since I'm a philosopher, I don't need it to already be a prevalent or important technology. I can just think: well, this is described in this way by these anecdotal reports, and it corresponds to how the technology works right now; you can refine a chatbot to act or talk in a specific way.
00:07:33
Speaker
So, what would it be like if it was actually used after breakups to continue the relationship?

So there's no company that has developed an AI where it's like, hey, we'll walk you through how to use this chatbot for the sake of maybe working through a breakup, or...
00:07:58
Speaker
you know, creating a virtual continuation of your now-ended relationship? There's no such company? Does that make sense? You could imagine, I guess, a company that did something like that, right?
00:08:11
Speaker
No. When I started writing this, I could find online some kind of service with a waitlist that was specifically designed to talk to your ex, but it has never gone public. There are existing companies, existing chatbots, that are not marketed as "recreate your ex" but are directed at "create a character." And however you want to create that character, it could be your ex, it could be Taylor Swift, it could be Plato, right? People are doing these things.
00:08:43
Speaker
So it's in the realm of possibility to do it with your ex, but nobody's marketing it. Yet. For breakups.

Yeah. I think I agree. I can provide my own anecdotes: students talk to me about this, and I feel like most students just use generative AI to
00:09:05
Speaker
cheat, but some students at least use it for essentially everything, right? So if it turns out that a recent breakup is in their history, they might try it. And sometimes, if you just mess around with the chatbots, having very loose conversations, they'll sort of naturally draw more information out of you, something like that. So I can definitely imagine this happening to people that I know. Right. So.
00:09:37
Speaker
Maybe we could ask: what might motivate someone to use a breakup chatbot? I guess it's kind of obvious, but maybe we could talk a little bit about that.
00:09:51
Speaker
Why someone might be motivated to get a chatbot to simulate being their ex-partner, I guess. So, breakups can be very, very painful; that's a given, right? They're not always very painful, but they can be, and they can also be very disorienting. In some of my previous work with my co-author Alfred Archer, I have written that falling out of love in general, not only when you break up but also when you are with somebody and falling out of love with them, can make you unable to know how to go on. You cannot really make sense of what is happening, of your own self-concept, or of what to do next.
00:10:33
Speaker
So we tend to try to orientate ourselves really quickly when we are in this state of disorientation. In that paper, we say that we shouldn't try to orientate ourselves so quickly, because in this disorientation we can find out many things about ourselves.
00:10:50
Speaker
But I think it's this sense of not knowing how to go on, of not knowing how to continue, that leads some people to use breakup chatbots, or any other way of becoming oriented again.
00:11:04
Speaker
And of course, there can be people who use it with a more productive, more direct aim: trying to understand an explanation that somebody has given them. So you might be given an explanation for a breakup like: well,
00:11:19
Speaker
I don't think we are compatible, because we are going to be working on different sides of the continent and we want different things, and although it hurts now, you will understand it in the future. Something along those lines, right? A bit more complex than "it's not you, it's me." And you could go to the chatbot and say: well, I don't understand this at all; explain it to me. And the chatbot might put sentences together that look like an explanation, that look like a more complete,
00:11:48
Speaker
more substantial way of understanding what this person has told you. So that's another way people might be using breakup chatbots. And some people might just be trying not to acknowledge the end of the relationship, trying to continue it by talking to the breakup chatbot as if it were their partner, as if nothing had happened.
00:12:11
Speaker
So, just real quick: I guess here are three potential uses. One, I guess, would be a breakup-chatbot-type use where the chatbot doesn't simulate being my ex-partner, but it does provide an analysis. Like you said, maybe you tell the chatbot, this is what my ex-partner said, and the chatbot analyzes what they said,
00:12:42
Speaker
or facts about your relationship more generally. That's maybe one use. A second might be: it simulates being your ex-partner.
00:12:55
Speaker
And the reason you ask it to simulate being your ex-partner is for the sake of understanding the breakup.
00:13:06
Speaker
And then another use is like, I want you to simulate being my ex-partner so that we can sort of continue, so that I can kind of pretend to still be in the relationship.
00:13:19
Speaker
Does that sound like three different uses?

Navigating Breakups in Modern Dating Culture

00:13:23
Speaker
Yeah, I think that's when users have a specific aim, right? The orientation point is more general, but when they have a specific aim, I think these are the three aims we can distinguish. A: give me an explanation. B: just allow me to rehearse this and get myself transitioned out of the relationship by talking to the chatbot.
00:13:43
Speaker
Or the third one that you said: let's just ignore that the breakup has happened; let's continue the relationship in this format.

I have two questions. One of them is a little random, so I'm going to start with that one and then I'll get back to the other one. It seems like the middle option, where you have a chatbot trained to be like your ex to help you get over the breakup,
00:14:11
Speaker
that seems like a really novel problem, right? I don't know the history of marriage and relationships, but it sounds like dating is, I don't know, a 20th-century kind of thing.
00:14:25
Speaker
Is there anything to that? Like, there literally aren't any rituals for getting over breakups because it's just a really novel problem? What do you think about that?

I think I agree with you about the novelty of it. Obviously, relationships have been broken all the time. As I said, one of my nemeses, Kierkegaard, made a huge deal of breaking off a relationship. So this has always happened. But the systematic way in which most people now go through big breakups, that's new to our current society, this society where dating is a thing, right? Where people are not just told, okay, you are going to marry this person and that's going to be it. I wouldn't say breakups are new, but the generalization of breakups in the way it is now, that is indeed new.
00:15:12
Speaker
And then trying to understand what to do, what count as good reasons for breakups, and how to react to breakups is a bit hard, not only because we don't really have a script for it, but because the scripts we have are very, very amatonormative. Amatonormativity is a term coined by philosopher Elizabeth Brake, according to which we think that life is worthless without romantic relationships.
00:15:45
Speaker
So the scripts we have say that if you break up, you are a failure, or you are failing somehow, right? Something has gone wrong. Instead of just seeing it as: well, we have relationships that end, in the same way that jobs end, for example. So in that sense, the scripts we have are driving us towards thinking that a breakup is a huge, painful thing that disrupts your life a lot.
00:16:15
Speaker
So there's a mix. It is painful to be broken up with, but at the same time, we don't have many ways of understanding breakups as non-painful.

Yeah, it almost seems like breakups are, A, always bad, and B, I guess,
00:16:34
Speaker
necessarily unwanted. What I'm trying to say is that I can imagine someone having a beautiful 20-year relationship that then ends. But it ended because you just grew apart at the end, and it was kind of for the better, and it was amicable and whatever.
00:16:52
Speaker
But some people still see that as a failure, even though they were really happy together for those 20 years and are now really happy apart for whatever the remainder of their lives.
00:17:03
Speaker
And that seems okay. But is it amatonormativity, that idea that, well, no, it's still bad because they didn't make it to the end, it was supposed to be for life? Am I getting that correct?
00:17:15
Speaker
So amatonormativity is usually tied to the idea that not only is romantic love the most important thing in your life, but it's also paired with finding the one and only, right? And it was kind of Plato who came up with this, really, because he's the one who said that love is the merging of two souls finding each other after they were separated by the gods.
00:17:37
Speaker
So there's this whole idea of finding your other half, and once you find them, it has to be with that person, right? A lot of scripts go into this amatonormative idea.

Are Breakup Chatbots Morally Noxious?

00:17:50
Speaker
So maybe now we can turn to your main thesis in the paper. The main thesis seems to be something like: when the chatbot is designed with the aim,
00:18:04
Speaker
or the potential, to sustain a continuing bond, then the breakup chatbot is not just contingently harmful; it's actually morally noxious by design.
00:18:17
Speaker
So maybe, if you want to restate your thesis however you like, you can do that. And then maybe we can talk about this idea of sustaining continuing bonds. What would it mean for a breakup chatbot to be designed to sustain a continuing bond?
00:18:36
Speaker
Mm-hmm. Starting with the latter, I think it's easier to explain what that is. Continuing bonds is a concept we have in the psychology and philosophy of grief. Mostly the debate on grief is about bereavement grief, about when somebody dies, right? When somebody dies, you don't completely sever that relationship. You continue interacting with the person in ways like remembering them, going through their letters, writing them letters. Some people have visual hallucinations.
00:19:12
Speaker
Some can smell the person who has passed away. You do funerals, you get together on their birthday, etc. That's understood by philosophers of grief as continuing bonds.
00:19:25
Speaker
Now, the notion of continuing bonds has not really been explored with regard to breakups. With breakups, as I understand it, it's kind of the same: the romantic relationship is broken, but then you try to continue some aspects of that romantic relationship.
00:19:45
Speaker
And what do I mean when I say that chatbots are morally noxious by design when they have this aim? In the paper, I say, first of all, that my concern is with chatbots that are specifically designed for continuing bonds after a breakup, but also with ones that could potentially be used that way, right? Because, as I said, there is no service specifically designed for that, but there are services that can be used in that sense.
00:20:19
Speaker
And the reason I think we should not just see them as contingently harmful is this: when we say something is contingently harmful, it means it has the potential to be harmful, but it depends on the context; sometimes it's harmful, sometimes it's not.
00:20:35
Speaker
What I think is that even if using the breakup chatbot doesn't harm you specifically, doesn't harm anybody, because imagine you have consent from the other person, and, we can go into this a bit more, you know that you are interacting with a chatbot, etc.,
00:20:52
Speaker
it's still morally noxious by design, because it reinforces bad ideas about what romantic life should be. Also, inadvertently to you, it makes you engage in a meaningless project, and it disrupts your narrative capacities by making you unable to get out of these amatonormative scripts.
00:21:16
Speaker
So we've got a couple of things there to unpack, right? You gave three reasons. We'll touch on those three reasons in a second; let me just spit back what you said to make sure that I understand it.
00:21:29
Speaker
If you design a chatbot to sustain continuing bonds, even if you use it in a way where you use it for a little bit, and afterwards you let it go, and nothing really negative happened to you,
00:21:48
Speaker
even still, this is morally problematic, and you give those three reasons, right? So just making this thing is a bad idea, morally speaking.

Sorry, I interrupted you there, but think about cigarettes.
00:22:04
Speaker
You could smoke a couple of cigarettes, and that's not going to give you cancer; that's not going to make you sick. However, the fact that there are big companies creating cigarettes, selling them to people, making them accessible in shops, we could say that's a bad industry, that it is a morally noxious product.
00:22:21
Speaker
That's kind of the idea behind it. Sorry, I interrupted you. Go on.

No, that's actually a great example. Okay. Could I maybe, just real quick?
00:22:33
Speaker
To be clear, suppose someone gets on ChatGPT or something and gives it some information about their ex-partner.
00:22:49
Speaker
And they either ask ChatGPT to help them understand why they were broken up with, or they even ask ChatGPT to pretend to be their ex-partner for the sake of understanding why they were broken up with.
00:23:07
Speaker
For you, that would be a morally permissible use, as well as design. In other words, if your aim is to understand the relationship, then you're not aiming to continue the romantic bond,
00:23:23
Speaker
and that would be a morally permissible use or design of it. Is that accurate?

Well, I probably didn't do enough in the paper for you not to attach that view to me. What I say in the paper is: maybe it's going to be fine if you use it for those purposes, but for sure it's not fine if you use it for continuing bonds.
00:23:53
Speaker
Because I do think there is a separate conversation to be had when we are thinking about using this technology only to understand the breakup. I think we should analyze that as a separate use, even if it's the same product.
00:24:06
Speaker
So, yeah, in the paper I don't argue against that use, but I have to say I don't know if I'm completely in favor of it either.

Got it, got it. That makes a lot of sense. So for that particular use, you're not really taking a stand on it being either morally permissible or impermissible; however, when it comes to a certain type of use, you are taking a stand that it's morally impermissible. And just real quick, I was thinking,
00:24:35
Speaker
I think you talk about this in the paper, but in my mind I have this idea: one person might try to continue or sustain the romantic bond in a really sort of delusional way, where they ask the chatbot to pretend to be their ex-partner and somehow have this delusion that this is actually continuing the relationship. Maybe they have this delusion that, through the chatbot, the relationship isn't even over: we're still in a relationship thanks to this chatbot pretending to be my ex-partner. So that would be a delusional continuer; they're using the chatbot in a deluded way to continue their relationship.
00:25:17
Speaker
Whereas another person would basically be using the chatbot in a sort of make-believe way. They're pretending, but they know. If you ask them, hey, are you still in a relationship with so-and-so, they would say, no, I'm not.
00:25:34
Speaker
However, they are asking the chatbot to pretend to be their ex-partner for the sake of preserving a sort of feeling, or just to pretend that they're still in the relationship.
00:25:50
Speaker
Anyway, my understanding is that whether you're using it in a deluded way to continue the relationship, or you're just pretending to continue the relationship in this virtual environment, both of those cases would be somehow morally impermissible, whether in designing the chatbot for that or in being the one who uses it that way. Is that...
00:26:11
Speaker
Does that sound right?

Absolutely, yes, I think so. Look, here is an excellent example of how these chatbots can be contingently harmful.
00:26:22
Speaker
If you're deluded, you could say there is a specific harm you're suffering, because you are disconnecting from reality, and there is fantastic work on AI and delusions and how this can be dangerous for people's integrity and sometimes for their own lives. So that's a direct harm.
00:26:40
Speaker
But this is why I argue that it doesn't matter whether you're deluded or not. Even if you're a fictionalist, as we say, even if you're like: I know I'm not in a relationship with this person, but I cannot deal with this breakup right now because I'm submitting a big grant application, so I'm just going to do this until I submit the grant; I'm going to pretend that my partner has gone away and I'm talking to them. Well, I still say that even if you're not suffering in any way and you're not harming anybody, even if you have permission from your partner,
00:27:09
Speaker
this is not a good product to use, because of the tendencies it reinforces, what it does to your authenticity, and what it does to your narrative capacities.

It is so interesting to me, because we're kind of talking about the ethics of pretending. I just think it's so interesting to ask when pretending or make-believe is actually immoral, you know? It's interesting to think about with video games: you can pretend
00:27:40
Speaker
to kill somebody who's innocent, and you can wonder, is that morally wrong? Anyway, it's just interesting to think about the ethics of doing something that's a simulation. And it seems like this would be a case where you're arguing that there's something morally impermissible about pretending to continue the relationship; even if you're not deluded, there's still something morally impermissible in it.
00:28:08
Speaker
There is something wrong, yeah. I don't know if you, by using it, are doing something specifically wrong, but at least I think there is something wrong with it existing. And of course there is this parallel with video games, right? The panic about video games is always: oh, but people are going to reinforce violent tendencies.
00:28:27
Speaker
And you could say, well, if you are against that argument about video games, you can be against my argument about breakup chatbots. But what I think is that when you're in a video game, you're not in a war, right? Imagine you're playing Call of Duty.
00:28:43
Speaker
Being at war, being a soldier, unless you are an actual soldier, is not what you're doing in your everyday life. It doesn't have anything to do with your life. You're not recreating something that you live every day, in your normal day, right?
00:28:59
Speaker
Whereas when you are recreating a relationship with a specific individual, and you are doing this in a unidirectional way, you are reinforcing tendencies. So I understand that this parallel can be drawn, and there is probably an argument to be made against my view from the video game people, but I think it's easy to answer: there is a difference here in your starting standpoint, so to speak, before you make believe.
00:29:28
Speaker
Interesting. Yeah. I guess I have one super obnoxious meta-ethical question that we're probably going to delete from the conversation. So, it sounds like, well, sometimes moral philosophers or ethicists or whatever, I guess it depends.
00:29:46
Speaker
Some philosophers will frame some moral philosophy questions in terms of actions that are just permissible or impermissible. And then some people are more like the ancient ethicists, for whom ethics means how you thrive, how you fare well. It sounds to me like you're making an argument more on the eudaimonic side: what kind of
00:30:13
Speaker
practices should you have in your life to have a good life, a meaningful life? What kinds of institutions and technology should there be in society for everyone, or most people, to have a meaningful, good life? Is that right? You're leaning more to that side of the ethics and moral philosophy divide?
00:30:30
Speaker
I would say, well, I don't see a difference between ethics and moral philosophy; to me, ethical thinking is part of moral philosophy, where we are thinking about ethics. But I understand what you mean about moral principles, right? The differentiation over which is the correct moral principle to apply in a specific situation.
00:30:49
Speaker
And it doesn't seem that I'm in that realm. It seems that I'm more in the sense of what kind of life we should have and how we should relate to each other, which is more relational ethics, so to speak.
00:31:00
Speaker
So I'm definitely more on the relational ethics side; I think you put that really well. But at the same time, I do think there are some principles that enter my thinking, which are principles of justice.
00:31:12
Speaker
And I absolutely think one important principle of justice is equality in romantic relationships, in all relationships, right? Power imbalances are really dangerous and can be really bad in intimate relationships, in families as well, in friendships as well. And that's a principle that I think comes into how we can have a good life. We need the right structures to have a good life, and if a structure is designing an environment where intimate life can be really imbalanced between people from different groups, that's a problem. So I would say I'm taking from both a little bit, so to speak.
00:31:56
Speaker
Yeah, that makes sense. So maybe now we can try to make sure we capture your position. You tell me if this is right: breakup chatbots, when they're designed for sustaining continuing bonds, are morally noxious by design.
00:32:17
Speaker
Why? Because, first, they keep the user engaged in a meaningless activity. Second, they promote abuses of power.
00:32:31
Speaker
And third, they support oppressive master narratives. So it seems like you have three reasons for claiming that breakup chatbots are morally noxious by design. Is that fair so far?
00:32:45
Speaker
Yeah.

Okay. So let's get into the first reason. The first reason they're morally noxious is that they're aimed at maintaining the user in a meaningless activity. So why would
00:33:01
Speaker
using a breakup chatbot to sustain a romantic bond with an ex-partner, why would that be a meaningless activity?

So when we think about romantic relationships as a part of our life that can be meaningful, it's because they are expressive of what we care about.
00:33:24
Speaker
What we care about is an expression of our identity, right? Who we are is expressed through the things we care about. And you could care about things that are not meaningful, right? You could be very focused on being, you know, a Wolf of Wall Street type of guy who just wants to be at the biggest party. Those seem to be things you can care about, but they don't have objective value. Philosopher Susan Wolf says that in a meaningful project there is a combination of things having some objective value and things being valuable for you.
00:34:00
Speaker
So for a romantic relationship to have objective value, it has to not be really harmful, but it also has to be oriented toward expressing what you really want in your life. Imagine I want to be going out with this person, I'm in love with this person, and I want to interact with this person. When I love somebody, this is not just a one-directional attitude; it's not that my love is directed at you and that's it. It's a relation where I am shaping you, the loved person, and you are shaping me. We are shaping our relationship, and we are shaping the people around us as a couple as well. You can say there is a joint identity, so to speak.
00:34:37
Speaker
So if you are doing this with an entity which is incapable of actual, or let's call it thick, reciprocity, an entity that is shaped exclusively on your terms, and you're doing this while, as you say, engaging in make-believe, treating this technology, this artifact, as a representation of your partner, there is a problem there. Because you might know that you are not engaging with your partner,
00:35:10
Speaker
but love is not only about what we know and what we believe; it's also about how we feel. You might know it's not your partner, but you do it because it feels as if you hadn't broken up.
00:35:24
Speaker
So even if you're not deluded, even if you have the right beliefs and know this is not your partner, there is something about it: you are misguiding your own emotions, so to speak. That's what I would say. And that's why it's a meaningless project: you're pretending to do something that is meaningful, something that you care about, but you are directing it at the wrong target.
00:35:48
Speaker
So, a real quick follow-up. I'm not sure if this is going to be clear, but I had this thought. Maybe let's start like this: what if someone is broken up with, and the relationship with this person was sort of the source of the meaning in their life?
00:36:10
Speaker
In terms of what they personally value, almost everything is riding on this relationship. Now, ideally, that relationship continues, but they've been broken up with.
00:36:26
Speaker
So, yeah, ideally they would have thick reciprocity with this person. But what if someone says: well, look, what makes my life meaningful is the relationship with this person,
00:36:38
Speaker
and I can't have a real one anymore. So is it really wrong for me? It's a non-ideal situation, so I have to settle for a situation where I don't have thick reciprocity, and
00:36:54
Speaker
the alternative is just absolute meaninglessness, because we're assuming this person's whole meaning in life hinges on a relationship with this particular person. I don't know if this objection is making much sense, but the thought is: can't we allow it, because otherwise they would just be in total meaninglessness? Does that make sense, what I'm getting at here?
00:37:25
Speaker
Yeah, absolutely. And many people will be in that situation, right? So, A, the problem there is being in a situation in your life where the whole meaning of your life comes from a romantic relationship. That I already see as a problem.
00:37:38
Speaker
So if this kind of product is going to be catered to people who feel like that, it's already catered to people who are vulnerable, because they cannot find sources of meaning outside of romantic relationships.
00:37:50
Speaker
That would already be a problem. But let's bypass that for a second, right? Let's just stay with the example and with what happens in real life, which is that people are like that, and that's true. Imagine somebody who gets broken up with by their partner, and they are so sad that what they do is try to find somebody else really fast, right? Somebody who resembles, as much as possible, the person they were going out with.
00:38:19
Speaker
And that's it. That's called a rebound, right? People do it; it's not something out of this world. I don't think people who are rebounding are doing something morally wrong.
00:38:31
Speaker
But I think they are not engaging in the relationship as a meaningful activity, for the reason of expressing what they care about. They care about the partner who left.
00:38:43
Speaker
And they are trying to fill that hole, so to speak, by jumping into another partnership, which of course is going to bring nice things, especially if it's not abusive, if you're not going from an abusive relationship to another one.
00:38:56
Speaker
If you're going from a nice, supportive relationship to a rebound where you don't love the person and you're just there to continue by inertia, but it's bringing you some good things, you could get some good things from it.
00:39:07
Speaker
But it's not as authentic as looking at yourself and thinking: okay, what do I need? How can I create meaning? How can I engage with a meaningful activity that is not this relationship, instead of just creating a cheap replacement of what is actually meaningful for me?
00:39:27
Speaker
I had a thought when I was reading your paper, and this is not exactly what you say; I think it's maybe adjacent, and you'll tell me if you agree or not. I see social relationships, or our capacity for relationships, as a skill that we have to develop. And I see it as an improvisational skill.
00:39:50
Speaker
And if we want to be good at this, which in my eyes is essentially to be good at being human, social as we are, then we have to be able to improvise with several dance partners, romantically and non-romantically, as friends, whatever social role you want to insert there.
00:40:11
Speaker
And the reason, then, why this chatbot relationship would be meaningless is that it's not adding to that improvisational social capacity. I don't know how that lands for you, but you can let me know.
00:40:28
Speaker
No, absolutely, this is something. One of my favorite philosophers of love, Amelie Rorty, talks about improvisation as a huge part of what makes love important, of what makes love love, really, which is improvising with each other. And then Ben Bagley picks up on this and has a really nice paper on improvisation as well.
00:40:53
Speaker
And I think there is no possibility of improvisation in this deep way with a chatbot, even when the chatbot is going to answer in ways that are unexpected to you. I don't know if you've ever used ChatGPT.
00:41:05
Speaker
You ask ChatGPT a very simple thing, and it surprises you in a very annoying way, because it gives a crap answer, not to put it in those words. You react to it by being annoyed: why didn't you do this? So I'm not saying there is no reactivity, and there are some ways in which you can be surprised. But improvisation is something deeper, in which you are shaping each other and honing that skill, as you said, Roberto, of what it is to be with each other. Learning to be with another person is amazing.
00:41:39
Speaker
Sometimes it's a very uphill battle, but it's one that is worth it, just because you get to improvise in that way with each other, if that makes sense.

Yeah. I was looking at an article, I didn't read it, only the abstract, but still, it's about the microcultures that you create with your significant other. And before I could even download it and think about it, I immediately started applying it to my life. There are all kinds of inside jokes that I make with exactly one other person. Only my wife knows this, and some comment that to anyone else would seem utterly not funny will make us crack up laughing, because it's the microculture that we created. So, yeah.
00:42:25
Speaker
Yeah. So with my partner, he's English, I'm Spanish, and I wonder if I would be able to recreate in a chatbot the specific strand of Spanglish we speak to each other.
00:42:36
Speaker
I think if a chatbot could do that, I would be really surprised.
00:42:58
Speaker
So, just to reflect on this first reason for breakup chatbots being morally noxious by design. It has to do with the user, even when they're not deluded, simulating the continuation of this relationship. By doing that, they're maybe aiming at something meaningful, but for this to really be a meaningful activity,
00:43:34
Speaker
they need to love a particular existing person. And who, or what, they're interacting with, this simulated ex-partner,
00:43:55
Speaker
can't reciprocate like a real existing person. And because reciprocating is sort of necessary for it to be a meaningful relationship, I guess,
00:44:10
Speaker
it fails in this case with the chatbot; it's the absence of that reciprocation that makes it meaningless. Does that roughly capture it?
00:44:23
Speaker
So, it's not the absence of reciprocation, but the absence of the potential for reciprocation, right? Because I think a chatbot can never reciprocate you the way a person can.
00:44:37
Speaker
That already makes it not suitable to be the object of a meaningful relationship. So it's not so much that the lack of reciprocity makes the thing meaningless; it's that you cannot have a romantic relationship with a cup,
00:44:51
Speaker
or with a bottle. So you cannot have one with a chatbot either, even if it looks like the chatbot is reacting in a way that a bottle doesn't, right? I am equating it with an inanimate object that looks like it is something different. You cannot have a relationship with it, is what I think.
00:45:11
Speaker
And we don't have to dwell on this too long, but does your argument at all depend on any kind of metaphysics of AI? Because if it were the case that, oh, we actually think these AIs instantiate mental states and so on... I don't want to go too deep into this, but do you think your argument hinges at all on a certain characterization of what these chatbots are?
00:45:47
Speaker
I think so, right? I obviously don't get into that in the paper, because that's not my field. But if, hand on my heart, I have to say, I think it does hinge on chatbots not having personhood.
00:46:06
Speaker
I don't get into this, right? It might be that if I got into it, the metaphysical claim the argument hinges on would turn out to be different. But I suppose it's going to be something like that: the fact that it is the kind of thing that doesn't have personhood, understood in a thick way, which is not just some rationality or some ability to produce reasoned statements, but personhood as in something that can matter.
00:46:38
Speaker
Yeah. And the good thing about your overall argument, though, is that even if we get a general AI at some point, and we want to call them agents, you give three reasons that collectively make this morally noxious either way.
00:46:56
Speaker
So I want to get to the second one, because it's my favorite, I think. And it would be the case that even if AIs were agents, it just wouldn't matter.
00:47:07
Speaker
It would still be morally noxious. Maybe I'll just let you explain why using a breakup chatbot promotes the abuse of power, and then I will have a deluge of

Unilateral Control and Power Dynamics in Relationships

00:47:18
Speaker
follow-up questions. But why is this chatbot use an abuse of power?
00:47:24
Speaker
Right, yeah. Thanks, Roberto, because you saved the argument there from the metaphysical commitments, and it's true: even if it was an agent, we would have this problem. So let's put ourselves in the situation of somebody using a breakup chatbot, right?
00:47:42
Speaker
They always have the power to make the chatbot "behave" in one way or another. Put "behave" in scare quotes, right? Because it's not really acting; it's just reproducing statements, etc.
00:47:56
Speaker
But we know what we mean here; everything is going to be in scare quotes, I don't have to say it all the time, right? So, I'm always going to be able to tell the chatbot: you shouldn't react this way; this is not how things are going to be between us in this interaction we are having.
00:48:15
Speaker
You always have the power to make the relationship, or the interaction with the chatbot, exclusively on your own terms. You might not do it all the time.
00:48:27
Speaker
You might actually, and that's again the difference between contingent harm and harm by design, be a person who is so socially aware that even with a chatbot, you really want to stop these tendencies to shape it.
00:48:44
Speaker
But it's designed for you to be able to do that. And it actually is designed that way because,
00:48:54
Speaker
if we put this product up for sale, the users are going to want the product to be reactive to what they want to do with it, right? Because it's not a person; it's a product that I'm shaping.
00:49:07
Speaker
So in that sense, you could say again: well, you're engaging in make-believe; this is a virtual world where you are able to exercise this power. But you're exercising this power over a representation of a person who exists, a person who has decided to end the relationship with you.
00:49:25
Speaker
So you are deciding that you're going to acknowledge, maybe in belief, that this relationship is broken, but phenomenologically, in how you feel, you are not going to do anything; you're going to continue as if you were in the relationship. So you're not really recognizing this normative authority, this normative power, of the person fully.
00:49:46
Speaker
And also, you're going to be able to shape the reactions of this scare-quoted substitute of the person on your own terms. This would be bad already if we didn't take into account that this is precisely what feminist philosophers have warned us against in heterosexual relationships: that in a patriarchal society where heterosexual men are generally more powerful, they tend to translate this power imbalance into romantic relationships. Obviously, this is a social observation, and we could go into whether things have changed, whether things are exactly like that right now.
00:50:25
Speaker
But what would happen with breakup chatbots is exactly what feminist philosophers warned against. That's a very long answer, sorry about this, but I think it is the worst aspect of these products, without any doubt.
00:50:41
Speaker
So, just to throw out a kind of objection, or something that will help with clarifying, I think, hopefully. I had this thought: let's say someone, and they're not delusional,
00:50:57
Speaker
is having the chatbot pretend to be their ex. And you're worried that it's going to risk creating an expectation of total control,
00:51:13
Speaker
maybe in intimate relationships in general. Well, what if the person was just like: look, I know real relationships never allow total control,
00:51:26
Speaker
and I understand that; it's just that in this imaginary or pretend context, I get to exercise total control. So I guess the thought is: is it really always going to foster an expectation of total control, if the person is clear in their head, like, yeah, I can exercise total control in this imaginary, pretend context, but I know that when I step outside of that pretend context, I shouldn't have those expectations of, whatever, constant availability,
00:52:04
Speaker
that kind of thing. I don't know. Does that make sense, the thought here?

Yeah, no, absolutely. We're back to the contingency, right? We're back to asking: is it not within the realm of possibility that somebody will not get this expectation at all?
00:52:19
Speaker
And again, absolutely, I do think it's possible. I want to believe of myself that if I were to use this, I wouldn't come away from the chat thinking, well, now I can go to my partner and they have to do whatever I say, or at least more than I do now.
00:52:35
Speaker
Right? So the same way I expect that of myself, I think many people will be like that. However, the product is designed for you to act in that way.
00:52:47
Speaker
And that's what I think is the problem: that we are putting a product into people's romantic lives that is designed for only unilateral input and control of what's happening. So I think it's a bad thing that it goes out there and enters romantic life in general, just the possibility of even using it.
00:53:10
Speaker
Yeah, that's fair. Absolutely.

It almost seems, I guess, to support Pilar here, not that you need it, Pilar, but anyway, it seems like this kind of interaction with a chatbot is just way too similar to a regular interaction with a human being, in some regard, so that we naturally habituate ourselves to those sorts of responses.
00:53:34
Speaker
And this is where it's really disanalogous with the video games we were talking about earlier. The kinds of conversations you have with a chatbot are too close to real life, unlike video games, because I never steal a car and beat people to death with a pipe or whatever the hell you do in Grand Theft Auto, so I'm not going to get habituated. It's more of a risk that I would get habituated to these kinds of controlling relationships than that I would want to steal Jaguars or whatever they do. Yeah.
00:54:07
Speaker
Yes. When you think about how people get used to things that harm them and harm others, sometimes it happens through this seemingly innocuous habituation, without you even realizing.
00:54:20
Speaker
So the resemblance to your actual life is going to be important. I don't mention it in the paper, but philosopher Jan Slaby has this notion of mind invasion. He talks about how capitalistic rules invade the minds of workers through really small things that happen in the workplace.
00:54:39
Speaker
And I think this could be the same. If you compare the interaction with a chatbot to the interaction with your partner, you're not that far from the actual situation where your habits are formed. So it is dangerous, so to speak, I think.
00:55:00
Speaker
Great. Real quick on the normative power thing, because that really interested me. Is this right: each partner in a relationship has the normative power to end the relationship unilaterally.
00:55:19
Speaker
They can, yeah. And then, if you continue a relationship in this make-believe way, the person is sort of disregarding, refusing to recognize, the other person's normative power to end the relationship. Is that roughly correct?
00:55:44
Speaker
It is correct, and I think I should explain it a little bit more. What is behind this thought of the normative power of breakups is, again, an idea I take from philosopher Richard Healy, who has a great paper on the ethics of breakups. The idea is that when you break up with somebody,
00:56:02
Speaker
it is a normative power because it's a power to modify your moral commitments towards the other person; you have a specific moral commitment to your partner as your partner.
00:56:14
Speaker
And when you say "I'm breaking up with you," you exercise this normative power. So in practice, if somebody breaks up with you and then you get their consent to use a breakup chatbot, it seems you would say: well, you are respecting the normative power, right? Because you're not making this other person stay in the relationship.
00:56:36
Speaker
So in principle you would say: what is the problem here? My problem is that to recognize a normative power that, again, is very important for maintaining justice and equality in romantic life, this power needs to mean something.
00:56:53
Speaker
So it's not only that you need to act accordingly: okay, yes, you broke up with me; you don't have to come pick me up from work anymore; you don't have to help with the bills. Fine,
00:57:05
Speaker
"but I'm going to act as if you hadn't done that." There is some kind of felt sense to that experience, and I think that to recognize a normative power fully, you need to feel the weight of it, so to speak. It's a bit tough, and I'm expressing myself in thicker terms here than I do in the paper, but that's the problem I see here. And I think the normative power of exiting a relationship, and that power being recognized fully, is something you shouldn't relinquish for yourself either.
00:57:39
Speaker
So, quick follow-up on that. What do you think about a case like this? Suppose a parent tells their child, a teenager, let's say: you're forbidden to drive the car because you don't yet have your license.
00:57:58
Speaker
It seems like the parent has a legitimate normative authority to forbid that use of the car. But it also seems like it's still morally acceptable for the teenager to pretend to drive a car, or to pretend to drive one in a video game, for example. So why is that a case where there seems to be nothing wrong? Are they still acknowledging the parent's normative power in that case, or how is it different, I guess, is my question.
00:58:42
Speaker
So there are several differences here. One is that the normative authority a parent exercises over a child is by nature imbalanced.
00:58:54
Speaker
And it should be imbalanced, because the parent is the educator and the provider, whereas a romantic relationship should be between equals. So it shouldn't be the case there.
00:59:08
Speaker
We might have special obligations toward each other. I might say, you shouldn't do that to me. But it's not because I am instructing you; it's because there is a commitment between us. I don't know, maybe I'm coming from an old-fashioned view of parenting, but that's how I see it: I see that moral difference between these two relationships. But also, if you think about it: the parent says, don't drive the car, and then the kid goes around pretending to drive the car. It seems that you are mocking
00:59:38
Speaker
the expression of my normative power. So I think that, even when it's not an analogous relationship, this shows that to recognize a normative power, you don't only have to obey the specific action or inaction that is prescribed; you also need to engage with the meaning of it. Which is: I told you not to drive the car, so don't go around pretending you're driving a car, because you are mocking this instruction I gave you, right? Yeah, that sounds plausible. I'm just imagining, you know, we forbid my son Frank from hitting Mary Lou, of course. And if he were to run around pretending to hit her or something, that would be problematic as well. Anyway, that's very fair. Roberta, should we go to the third reason? Is it time?
01:00:22
Speaker
Yeah, let's do it. Okay. So we've got the first and second reasons. The third is that the breakup chatbot disrupts the agent's narrative capacities by inflicting narrative harm on the user. So could you just introduce us to this line of reasoning?

How Chatbots Impact Personal Growth Post-Breakup

01:00:45
Speaker
Yeah, so part of my research, and this goes way back to my PhD, is on self-narration and narrative identity. I have a parallel, more long-term line of work on what narrative capacities are. Sometimes people think about narrative identity as just the stories you tell yourself about yourself. And that's one part of it. But I think
01:01:12
Speaker
narrative capacities are also what Peter Goldie called narrative thinking about the future and the past, which is putting events together, those events having a specific emotional import for you, and then meaning something.
01:01:27
Speaker
So they are meaning-making practices: practices of making sense of the world in the way that it makes sense to you. There are several kinds of capacities in this sense: putting events together, giving import to events, and your relation to the events, whether you're being honest about them or alienated from them, etc. That's the whole idea I have in the background, which I don't go into in the paper, about what narrative capacities are.
01:01:55
Speaker
And there are things that can disrupt these narrative capacities, and also things that can enhance them. These disruptions are what I call narrative harm in the paper.
01:02:10
Speaker
So I draw from people who have already been discussing narrative harm, whether or not they use this term. One is narrative railroading and the other is narrative deference, proposed respectively by Lucy Osler and Ellie Byrne.
01:02:26
Speaker
So the idea in narrative railroading is that there might be some
01:02:31
Speaker
artifacts from outside, which could be not only technologies but also norms, for example, that lead you to understand things in a very specific way and steer your narrative capacities down one specific path. That's why it's called railroading.
01:02:48
Speaker
And that's one of the dangers, I think, with breakup chatbots; we can go into more detail on that. Narrative deference is then the idea that you can give too much weight to what another agent tells you, to the detriment of your own way of interpreting how you think about things.
01:03:06
Speaker
And then I come up with a third one I call narrative capitulation, which is giving in to damaging scripts, damaging master narratives we have about romantic life.
01:03:17
Speaker
So with these three, again, the product is designed to do these three things, even if it doesn't always succeed in doing them.
01:03:29
Speaker
Yeah. And I think there's some good empirical work showing that your narrative, your sense-making capacity, is linked to long-term well-being and even longevity. And so, just taking the railroading example, by allowing yourself to go in a direction that, at least to me, just seems more...
01:03:51
Speaker
I don't even have the word for it, atomized, like you're having a relationship with yourself, essentially, and with a non-human. That just seems like closing yourself in, not having the skills to be open again for more relationships and all that. And so it just seems like a vicious downward spiral, to quote a Nine Inch Nails album, apparently.
01:04:17
Speaker
Yeah, I think all these harms are related, and related to the other two bad aspects as well. Because remember, it's railroading you towards a meaningless relationship. So it's putting you on a path that has no meaning and keeping you on it, because the good chatbots, meaning the
01:04:35
Speaker
well-functioning ones, will be able to maintain you in this kind of noxious engagement. And then your way of making sense of the world is conditioned by that meaningless relationship you are being kept in.
01:04:51
Speaker
When it comes to this issue, it seems like one of your ideas is that the person's narrative agency is going to be impoverished or harmed in virtue of the fact that there's not that genuine element of mutual improvisation we talked about before.
01:05:12
Speaker
Can you address this: some people might be thinking, oh, but LLMs are so unpredictable, so variable. Are they really going to be static reflections of the user?
01:05:25
Speaker
Given how smart and varied they can be, doesn't that allow for some mutual improvisation? So anyway, can you speak to that kind of worry?
01:05:45
Speaker
Sure. So I think that engaging with a breakup chatbot could change you in ways that you didn't expect, of course.
01:05:57
Speaker
And you could say, well, isn't that improvising? It's surprising, the fact that you have changed in ways you didn't expect. And if you look at the stories we have about how people have engaged with chatbots, some people have gone to extremes that I'm sure they didn't expect, right?
01:06:16
Speaker
So the issue is not that it's going to keep you in a limbo, so to speak. The issue is that you're going to be directed towards engaging in projects that are way worse than analog projects. Any other thing you could be doing to shape yourself is just going to be richer than this very thin reciprocity: a very thin, unilateral interaction, informed by noxious ways of understanding romantic life.
01:06:48
Speaker
You are deciding to give up a whole... you have a whole stack of cards in your hand of how your life could be and how you could make sense of the world, and you're throwing them away and keeping the worst one, really, so to speak. Sorry to speak in metaphors. But I hope that clarifies a little what I'm trying to get at here.
01:07:07
Speaker
It's like the user has a lot of alternative futures: yes, single, but they can have a new project, a new identity. And it seems like by engaging with the chatbot, they're kind of getting locked into this
01:07:25
Speaker
more meaningless form of life. So is the narrative concern importantly connected to the first concern, the one about the meaningless project, then?
01:07:40
Speaker
Yes, it's very connected. They are interconnected, and railroading specifically is very connected to the meaningless project, because it's directing you to remain there and to make sense of the world through that meaningless project. That's where the narrative capacities are important.
01:08:01
Speaker
That's why it offers us a different aspect of the badness of being in a meaningless project: you're going to understand the world through that quote-unquote relationship, right? And that's a bad thing, because this is a meaningless relationship, but it's going to keep you there. These products also, let's not forget, and I always insist on this, are made for profit.
01:08:25
Speaker
And to maintain profit, they have to maintain interaction. So they're never going to say, well, just go out and touch grass, right? They're going to try to keep you talking and engaged in this meaningless project. And that's going to impoverish your way of understanding the world and yourself.
01:08:47
Speaker
Yeah, I want to be sensitive to time here, but maybe we can go back a little more to narrative capitulation, because, like you said, that's a concept you developed. Just to restate: is it basically the idea that there's narrative capitulation involved because the user is surrendering their own authority and instead they're kind of
01:09:13
Speaker
surrendering themselves to these oppressive master narratives, rather than resisting those master narratives through alternative counter-stories? And it seems like the main master narrative you're worried about is amatonormativity, the assumption that a meaningful life requires a romantic relationship. But there could be others, like singlehood stigma. Anyway, do you want to talk a little more about that whole narrative? Yes, these are two very important scripts
01:09:49
Speaker
that these products are designed for you to capitulate to, right? As you said: amatonormativity, singlehood stigma, this notion of failure if the relationship ends.
01:10:00
Speaker
All these ideas: when you finish a relationship, because you are in that state of disorientation I mentioned initially, where you're not really sure of anything, that's the moment when you can start questioning: well, should I be thinking this way, right?
01:10:16
Speaker
Or have I invested too much of myself? Was this a relationship where I was subsumed? This calls back to the paper I said I wrote with Alfred Archer, the idea that the end of a relationship is a moment to
01:10:31
Speaker
look a lot of things in your life in the face. However, it is tempting not to do this at all. Most of us don't want to, because then you have the complication of this kind of self-reflection and of trying to battle what everybody around you thinks is normal.
01:10:51
Speaker
Everybody is telling you: without love, you're nothing. So it's very demanding, this idea I'm putting out there. And knowing that it's very demanding, very difficult, what I see making things worse is having a product that is going to say: you don't have to do it.
01:11:11
Speaker
So not only are you not going to be bound to do it, you're going to have a way of not doing it at all, a really easy way to capitulate to these things. And you are not only capitulating; you are contributing to building narratives that are harmful for everybody.
01:11:33
Speaker
So that's what I call narrative capitulation. And I don't want to say that love is not a meaningful part of life. I think love, and specifically romantic relationships, are an incredibly important part of our lives. But I've also seen people whose lives get really, really destroyed because of not having a romantic relationship, or because of a romantic relationship breaking.
01:12:01
Speaker
And I think we can recognize the importance of relationships without making everything hinge on having a romantic relationship. And I don't think having breakup chatbots is going to allow us to progress in that sense.