
#6 Thomas Nys and Bart Engelen: Manipulative Online Environments

AI and Technology Ethics Podcast

Thomas Nys is in the Faculty of Humanities at the University of Amsterdam. Bart Engelen is an associate professor at Tilburg University, also in the Netherlands. Together, they have co-authored a number of essays. Today we will be discussing their chapter from the recently published The Philosophy of Online Manipulation (2022). The title of the chapter is “Commercial Online Choice Architecture: When Roads Are Paved With Bad Intentions.”

Some of the topics we discuss are commercial online choice architecture (for which they use the acronym COCA), whether COCAs can be said to be manipulative, different conceptions of what manipulation is, how COCAs can undermine our autonomy, and what is at stake when our autonomy is eroded by web-based commercial interests—among many other topics. We hope you enjoy the conversation as much as we did.

For more info on the show, please visit ethicscircle.org.

Transcript

Introduction of the Podcast and Guests

00:00:15
Speaker
Hello and welcome to the AI and Technology Ethics Podcast. This is Roberto. Today, Sam and I are in conversation with Thomas Nys and Bart Engelen. Thomas Nys is in the Faculty of Humanities at the University of Amsterdam. Bart Engelen is an associate professor at Tilburg University, also in the Netherlands. Together, they have co-authored a number of essays.

Understanding COCA and Its Manipulative Potential

00:00:38
Speaker
Today, we will be discussing their chapter from the recently published The Philosophy of Online Manipulation (2022). The title of the chapter is “Commercial Online Choice Architecture: When Roads Are Paved with Bad Intentions.” Some of the topics we discuss are commercial online choice architecture, for which they use the acronym COCA,
00:01:05
Speaker
whether COCAs can be said to be manipulative, different conceptions of what manipulation is, how COCAs can undermine our autonomy, and what is at stake when our autonomy is eroded by web-based commercial interests, among, of course, many other topics. We hope you enjoy the conversation as much as we did.
00:01:42
Speaker
We spend a lot of time online, apparently up to 40% of our waking life, and much of it is spent in these COCA environments. So just tell us what these are, and perhaps in your response you can tell us a little bit about nudges and what exactly is going on with that. Sure. Thanks, Roberto, and thanks for having us. COCA is basically just short for commercial online choice architecture, but that's quite a mouthful, so we decided to go for the acronym. Commercial online choice architectures are the online choice environments within which people, often called users online, make all kinds of decisions.
00:02:30
Speaker
And these choice environments, we know from psychological and behavioral economic research, influence the decisions we make within them. The notion of choice architecture refers to what an architect does: they design, for example, a physical space, and a choice architect does the same thing for a choice

Impact of Design and Algorithms on User Behavior

00:02:55
Speaker
environment. So, for example, the way in which options in the environment are presented and framed. We know that the way you frame options has an impact on people's decisions and choices in those environments.
00:03:07
Speaker
That's also where nudging comes in. If we know how these choice environments influence people's behavior and how they can be designed to steer it, we can talk about nudging: deliberately using the knowledge about those influences to steer people in a specific direction. A standard example in an offline setting would be a supermarket or a cafeteria with specific kinds of products at eye level; we know that has an impact on people's behavior. In an online setting, you can think of the way in which social media apps are designed, or YouTube, where the algorithms determine what kinds of videos you get to see, but also the way in which those different options are presented has an impact.
00:03:59
Speaker
The big companies that own these online platforms, for example Google in the case of YouTube, that's where the commercial aspect of COCAs comes in. Obviously, those companies have an incentive to steer users in a specific direction, for example to ensure that YouTube users stay online, and they are going to design the online choice environment with the aim of
00:04:36
Speaker
influencing users' behavior in that direction. So commercial online choice architecture refers to these kinds of online apps, websites, and so on that are specifically designed to steer people in a specific direction. Just a quick follow-up here. I remember that Amazon, I think, experimented with a different checkout process, and they discovered that if their checkout window didn't load fast enough, their sales would drop some percentage points. So is this part of the choice architecture as well?
00:05:14
Speaker
Yeah, it can be, for sure. All these little features and elements in online environments partly influence how users behave, what they click, how long they stay on the platform, whether they actually purchase the things they put in their shopping cart, for example. All of those elements can be part of online choice architectures. I guess I really want listeners to be able to visualize what this is. So perhaps even the color red being present in the notification bar, is that also a feature of a COCA?
00:06:03
Speaker
Colors can be part of it, sounds can be part of it. Netflix, for example, has this kind of default autoplay. That's part of the online environment of Netflix, and it is known to influence people's behavior, right? People binge-watch more if you have these kinds of features in the online environment. And of course, the companies that own and design these environments know this, and they deliberately use that knowledge to steer people like us.

Ethical Implications of Manipulative Designs

00:06:42
Speaker
Right. And I guess the fact that they're so heavily personalized now is almost a kind of key feature of
00:06:50
Speaker
these commercial online choice architectures. Like when you go on Spotify and its interface is recommending very personal playlists that it made specifically for you. I guess that personalization would be a key feature of contemporary online choice architectures. Yeah, and it even happens, for example, if you Google something. If you enter a query in Google, the ranking that you get is basically determined on the basis of previous searches. And of course that matters, because we know that people click on the first, or the first three or four, options. So which options Google presents
00:07:38
Speaker
highest up in the ranking is going to have a huge impact on what people get to see and which links they get to click on. Right. Position bias, I think it's called. Yeah, this is exactly the kind of case where you see the power these companies have to steer us towards specific kinds of websites, or towards ads, of course. And that moves really nicely into the next question. Bart and Thomas, you wrote an article that is an ethical analysis of COCAs, of these commercial online choice architectures. So Thomas, why do you think COCAs deserve ethical scrutiny?
00:08:21
Speaker
Yeah, well, also for my part, thanks for having us. The first reason was already mentioned: we spend so much time online within these environments that, philosophically, we really need to understand what's going on there. What are the ethical challenges we encounter in these online environments? I think that's the main reason we wanted to focus on this, which is relatively new. You could say the internet has already been among us for some time, but these social media platforms are more recent. These are rapid developments, and philosophy needs to catch up with them.
00:09:03
Speaker
And it also relates to what Bart just said. We know that an influence is exerted when you arrange certain options in a certain way, as in the cafeteria or the supermarket, and we know that also happens online. So our choices are highly influenced by commercial agents. What should we think of that? Is it an instance of manipulation? Is something going wrong? What values are involved? I think that's what got us going and led us to write the article on this subject. The emphasis on manipulation is interesting because it kind of reminds me how, in critical theory, in that whole literature,
00:09:47
Speaker
people like Herbert Marcuse and Adorno were really worried about the culture industry manipulating us, dominating us, and controlling us. When you read Marcuse talking about that in the 60s, it seems sort of hyperbolic and maybe paranoid. But then you think about it today, about these COCAs, these commercial online choice architectures, and the potential for these large corporations to manipulate us seems a lot more plausible in our day. Well, it's really interesting that you mention Marcuse. There's something like, what is it, repressive desublimation? The idea that we can get what we want easily, that they play into our desires, the things that we want and go for, but that it's kind of repressive: it makes us easier to satisfy, so that it doesn't allow culture to thrive, and so on.
00:10:43
Speaker
So the difficult path, let's say, is obscured by us getting the easy win and satisfying our easy desires. Right, right. Yeah, I think he talks about the highest form of domination being when you are actually catering to people's preferences and you don't have to use coercion as much. So maybe the next point is: Thomas, what would you say to someone who's just skeptical about the idea that COCAs could really control or influence us that much? Like, okay, maybe
00:11:21
Speaker
you can think about the example where the design of a supermarket can make us go in a clockwise direction through that supermarket, but it's not as though a supermarket design can turn me into a vegan. It's not going to go that deep. So I'm just curious, what would you say to someone who's skeptical about how much they can really influence us? Well, I think that's kind of the danger, or the difficult thing, about these COCAs, because we have the idea that we are very much making our own choices, like we do in a supermarket, like we do online when buying stuff. It's our choice; we are doing what we like and we are not being influenced.
00:12:05
Speaker
But I think that's a naive idea. What we know from social psychology of the 70s and 80s and so on is that we are highly influenced by these architectures. So even if we say that we pick the series we watch on Netflix, still, what options are presented, the way they are presented, and this automated setup where we just continue watching, that's beyond our control.

Exploring Manipulation vs. Persuasion

00:12:33
Speaker
And so I think it's an illusion to believe that we are fully autonomous, self-deciding creatures within these environments. So that's what I would say: you really need to think about this deeply. Yeah, speaking of the social psychological literature from the 80s, there was this book by Nisbett and Ross called The Person and the Situation.
00:12:59
Speaker
There are all kinds of interesting studies there where seemingly trivial features in the environment ended up influencing people. The one that comes to mind, I don't know why, is the one with a bunch of pantyhose, where men were told to select which pantyhose they would want to purchase. They were all identical, just different packaging, and there was a right-hand bias: people would just pick the one on the right. They were all identical, but people were able to provide reasons as to why the ones on the right-hand side were better. So it seems we're not even aware of how the environment, both physical and virtual, influences us.
00:13:46
Speaker
Yeah, that's right. As people, we're not aware of all those kinds of little features that influence our decisions. But of course, these companies are now in a position to gather all kinds of relevant data about this. For example, going back to the YouTube example, they've got massive amounts of data on what kinds of videos make each kind of person
00:14:18
Speaker
intrigued, or which makes them click on specific kinds of content. Just analyzing that data enables them to really easily derive which kinds of videos work for which kinds of people. And the fact is, of course, that these elements, these features, are in principle irrelevant, right? Whether it's a left-hand bias or right-hand bias, or whether something is at eye level or just below eye level. In principle, that shouldn't matter; that's not a good reason to go for option A or option B, but still these irrelevant factors influence our behavior. And I wanted to add one thing with respect to
00:14:59
Speaker
the previous question about how deep these influences go. You could say that even if the influences are really subtle, really minor, and bear on pretty unimportant decisions, like what video you click on YouTube or which song you listen to on Spotify, they can add up over time. So you have these examples of people radicalizing online because they're on a specific social media app and get sucked into a rabbit hole of conspiracy theories and so on.
00:15:38
Speaker
Yeah, so they add up over time, right? Even for a one-off decision that is influenced by these COCA designs, you could ask, should we worry all that much? Maybe not if you look at a one-off decision, but if you see how they have a cumulative effect over time, it makes the ethical analysis all the more urgent, I think. And I also want to pick up on the point that Roberto made about these men picking pantyhose and then offering reasons for it. This post hoc rationalization is really important; it keeps people in the illusion that they act on reasons of their own, so to speak. They're unaware of what actually goes on, but they provide reasons and very much think they have a good reason to do this.
00:16:29
Speaker
So I think that feeds into the idea that there's nothing wrong with these online platforms, because we think we do the decision-making ourselves. But in fact, that might be the very type of rationalization we see in this research from the 80s. Okay, so I guess we should begin to move towards this question: are COCAs manipulative? Of course, people will read your paper after they hear this interview, but let's just talk about what manipulation is before we talk about whether COCAs are manipulative. So let's juxtapose
00:17:18
Speaker
what might be called rational persuasion with manipulation, and let's try to disentangle what's an incentive, what's coercion, what's rational persuasion, and what's manipulation. So what is manipulation according to your theory? One way of getting a handle on this is to see what manipulation is not. It's not rational persuasion, it's not incentivizing, it's not coercing; it's something in between, or something else altogether. If you rationally persuade people, what you do is try to influence them. All of these are influencing techniques.
00:18:02
Speaker
But with rational persuasion, you try to influence them using arguments and relevant information; you try to rationally convince somebody. So if I were to try to ensure that Roberto, for example, leads a healthier life, and I don't know whether he is leading an unhealthy life, but imagine he were, I would be his friend and say, Roberto, we really need to pick up your game. I can try to provide arguments why he should do that; I can refer to all kinds of benefits that would come to him and try to convince him. That's rational persuasion. I can also incentivize him. I can make healthy options cheaper or unhealthy options more costly. I can reward him if he
00:18:47
Speaker
achieves certain health aims, for example. I can also coerce him: I can physically remove all kinds of unhealthy options from his environment. Cheese. You would have to remove cheese from my life entirely, I think. If that's your weakness, your vulnerability, and I remove it physically, then I would be coercing you to lead a healthier life; you no longer have the option to lead an unhealthy one. But manipulation is different. It doesn't engage with people on a rational basis, doesn't engage their rational capacities, doesn't use material or financial incentives, and doesn't use coercion.
00:19:26
Speaker
Typically, most definitions of manipulation involve something like non-rational influence, a kind of influence behind people's backs: triggering certain psychological or cognitive mechanisms to steer people in a specific direction without giving arguments, without using coercion, and without changing the incentives.
00:19:53
Speaker
Gotcha. And maybe we could go a little further into this concept of manipulation before we get back to the COCA thing. You distinguish two broad ways of conceptualizing manipulation, a means-based conceptualization and an ends-based one, and the ends-based conceptualization is the one that you, Bart, and Thomas endorse. But first, what's the means-based concept of manipulation? So means-based refers to the means of manipulation. The idea is that what distinguishes manipulation from these other influencing techniques is basically the techniques at hand, how people are influenced, the mechanisms
00:20:38
Speaker
that explain how people are influenced. Again, in rational persuasion the mechanism is some kind of rational capacity that's at play. In manipulation, it's

Intentions and Ethics of COCA

00:20:50
Speaker
something different. A widespread understanding or definition of manipulation is that it is an influence that bypasses people's rational capacities. If I get you to pick an option merely because it is at eye level, or merely because it is presented in a specific color, that is not engaging your rational capacities, right? That's influencing your behavior by bypassing those rational capacities. So on a means-based account, that would count as manipulation, because you're influencing people by bypassing their rational capacities, influencing them behind their backs.
00:21:30
Speaker
Now, we're not fans of this kind of definition because we think it's too broad. It basically boils down to understanding manipulation as non-rational influence, but there are all kinds of non-rational influences happening in the world that are not manipulative, right? People are influenced by each other's body language, for example; if somebody here were to start yawning, it would be contagious and other people would automatically start yawning as well. That doesn't mean you're manipulating other people; just using body language is not necessarily manipulative. In our view, for something to be manipulative there needs to be something more: a deliberate intention to steer people's behavior. If I know that
00:22:20
Speaker
specific kinds of body language make it more likely that you like me, for example, and I use that in all kinds of flirting techniques, then I might be starting to manipulate people, because I'm deliberately using knowledge about those influences to get you to do something that I want. That's completely different from using body language without any intention whatsoever. Yes. So it's almost kind of surprising, because I guess a lot of people leave this intention element out, when I think people would naturally assume that in order for one person to manipulate someone else, they have to be intentionally trying to bring about some
00:23:08
Speaker
behavior. But I guess a lot of people were focusing too much on that bypassing of rational capacities and leaving out the element of intention. Is that basically... Yeah, I think that happens in the literature. It's this focus on the means that explains it, right? It works differently than rational persuasion, differently than incentivizing or coercing, so there must be something specifically characteristic of manipulation in the way in which it works. And to some extent that's true; a lot of these non-rational influences are manipulative, and that is what distinguishes them from these other influencing techniques. But I think if you take only that element, you're missing something.
00:23:57
Speaker
I feel like it's in the social construction of what manipulation is, because I definitely thought the means-based conception was intuitive. Thanks to your philosophical therapy, you have convinced me that the ends-based conception is much better, and that was one of the most fascinating parts of the paper for me; that's worth the price of admission right there. So with that ends-based conception of manipulation in mind, Thomas, why don't you tell us a little bit about how COCAs are manipulative.
00:24:38
Speaker
So we want to shift attention from this means-based conception to this ends-based idea of manipulation. I think this has two elements. We could focus on the end or goal towards which we are manipulated: what do they want us to do, what kind of behavior do they want us to perform? But the end we want to discuss is the intention or motive behind the influence: why do people try to influence our behavior? We believe that this why, this intention or motive, is crucial in identifying what's potentially wrong with manipulative practices.
00:25:25
Speaker
Now, if we go back to Netflix, for example, you can see that there is this intention. Some people find that problematic because they think, how can a commercial agent or a collective have intentions? Intentions are something we attribute to persons, to people. But we think it's clear from the design setup itself. We know that this setup, where within five seconds the next episode starts, is meant to keep us engaged and to move us, even against our better judgment, to continue binge-watching a series on Netflix.
00:26:07
Speaker
So we can kind of reverse-engineer the intention behind these enterprises. And I think that overarching commercial interest is potentially what is problematic about these environments. Our ends-based conception allows us to focus on the underlying intention, on why people are exerting this influence over us. This is something we would miss if we focused only on the means used, because these means are often used unintentionally in other domains.
00:26:49
Speaker
Another point I want to mention: one of the common ideas or intuitions about manipulation is that it involves a kind of sneakiness. Sneakiness often relates to the means; we are unaware of the means by which we are influenced. But the sneakiness also pertains to the end, to the why. We think we are on Spotify just because it caters to our interests. We think we are on Netflix just because it gives us what we want and allows us to enjoy series. But because this is so up-front, we kind of miss the commercial aspect of it. We think it's there for us, but actually we're there for them. That is also what's sneaky about these practices.
00:27:37
Speaker
So let me run this back at you, Thomas, just to make sure that we have it correctly. Manipulation, we should think of it mostly in terms of intention: what is the intention of the manipulator? For something like Netflix, there are all kinds of methods or means by which they might try to influence us, but what's most important is that their intention is to get us to stay on Netflix as long as possible and to come back every single day by hooking us on a show or whatever. So the intention is what is morally concerning about it. Time on device is what they want. For other things, like Amazon, the intention is that you spend a lot of money exclusively through Amazon. And Facebook, of course, they just want you glued to your phone or your computer so you can get more ads. So it's time on device. Did I get that right?
00:28:33
Speaker
Yeah, I think you got it absolutely right. That is the first step in the argument: let's at least include the intention, let's see the bigger picture, not just the means used but the intention behind it. And then the second step, the second element, is what we can say about the effects of this influence on our autonomy. Is it good for autonomy? Does it support it? Does it undermine it? What does it do to these agents, or these users as they are called online? That's the second step.
00:29:09
Speaker
But you're right, Roberto, it's really about shifting the attention slightly away from the how of manipulation towards who does the manipulating and why. What's the underlying intention there? That can still be compatible with arguing, for example, that certain designs of social media apps are addictive and that that in itself is worrying. But they're not addictive for no reason, right? There's a clear intention behind why they are designed in this addictive way, and we wanted to explicitly bring that into the picture because we think it helps us tease out what exactly is morally problematic about it.
00:29:52
Speaker
Can I quickly ask about the sneakiness thing?
00:30:02
Speaker
I'm wondering whether this would count as manipulation. When you're in a supermarket, they often put, let's say, chocolate near the checkout. So they're intending you to purchase the chocolate, and the idea is that there's some mechanism by which, if you put it close to the checkout, you're more likely to purchase it. And they don't explicitly mention, hey, the reason we're putting this here is to get you to purchase
00:30:47
Speaker
the chocolate, but it's not really sneaky. I guess what I'm wondering is that sometimes the sneakiness is in not explicitly mentioning why they're doing the thing they're doing, but on the other hand, in certain cases it's sort of obvious why they're doing it, and so it's not sneaky. Is this question making sense? Do you see where I'm going with this, or should I try again?
00:31:25
Speaker
I think the question makes sense. There are a lot of gray areas and a lot of cases where you can go either way. I also think manipulation is not an all-or-nothing kind of concept. Something can be more or less manipulative, and that depends in part on, for example, how covert or how sneaky the influence is exactly. In a lot of cases it's pretty obvious what is happening, especially if you're somehow aware of these kinds of techniques; it becomes more obvious what's happening and that enables you to bypass these influences more easily. So that would also make it less manipulative for you,
00:32:09
Speaker
because it would enable you to see through what's happening. But there are a lot of gray areas, and it's a gradual thing rather than an all-or-nothing thing. For example, the eye-level example in principle doesn't provide you with any good reason to pick whatever is at eye level. But sure, if you see through it, if you know it, and if you are aware of the fact that supermarkets are going to do this whether we like it or not, you could say those would be reasons to say the manipulation is not that big or that problematic.
00:32:51
Speaker
Thomas, did you want to add something to that? I think I heard you starting on that. No, I think it was the same idea that Bart picked up on. If you see through it, then you have reasons not to do it; you're resilient. It's not that Netflix is always successful. It isn't. You're able to peel yourself away from the screen from

Influence of Online Environments on Autonomy

00:33:12
Speaker
time to time. But I think the lesson is also that it's hugely successful on a huge group of people, even if it isn't always successful. So it is a gray area, there are degrees; sometimes you are resilient, sometimes you're caught off guard. But if you apply this to billions of people and you apply it all the time, then it will have massive cumulative effects.
00:33:39
Speaker
Right. And I've been feeling this cumulative effect ever since COVID. By the way, there's always a moment where I throw the conversation off the rails; here is that moment. I've been noticing it's so easy now, instead of literally going down the street, and I live in a fairly walkable neighborhood where I could just go down the street to pick up some food, I order it from Uber Eats and have it delivered to my door with zero human contact. Essentially, all these COCAs have made
00:34:22
Speaker
purchasing things so easy and have almost trained me to do it mindlessly, so that I do it without human contact and without effort. And I think I spend more money because of it. These are the cumulative effects. I didn't used to be like this, I swear, but this is how it's been lately. And our city, Los Angeles, has never quite been a city where we all just talk to each other, but it's becoming even more a city where people just don't talk to each other, because you're not trained to do that. You have fluid non-human transactions instead.
00:35:01
Speaker
And are you ordering cheese then? I have my monthly cheese subscription. See, I don't even have to think about that one. No, I think this is a really important point. What you describe is about the cumulative effect on the one hand, but it's also the path of least resistance, right? It makes it so easy to act upon even fleeting desires, certain whims or cravings that you have. If you had to make the effort, you wouldn't act upon those.
00:35:35
Speaker
And I think what you said about being without human communication or contact whatsoever is also really important. It points to a moment where it isn't totally in line with our autonomy, with the things that we truly value and really want to act upon. By offering this path of least resistance, it makes us act against what we consider to be in our better interest or more valuable. So I think this is a very good example where you can see that this manipulation, over time, can make us act in ways we don't really want to go.
00:36:15
Speaker
That of course happens with Roberto's food delivery choices, and for a lot of people it happens with social media usage, right? If you see your weekly reports, how many hours a day you spend on Facebook or Instagram or what have you, that's often not the amount of time people would like to spend, or how they would like to spend their free time. But it's because it has become so easy, because you have these techniques to draw attention to the app. So the question is, in a more reflective moment, or looking back on how your choices have evolved over time, do you still endorse those? Is this something you would endorse if you were a bit more reflective? It's kind of interesting just to think about
00:37:08
Speaker
people living in a way where, if they stopped to reflect on the way they were living, they wouldn't actually endorse it. It creates a sort of contradiction in the person: you're living one way, and if you were to really seriously consider the shape your life is taking, maybe you wouldn't endorse it. And I guess the point you're making,
00:37:39
Speaker
it's maybe a perfect example for you, because it's so obvious with what you were just talking about, Bart, the amount of time people spend on their phones. As far as I can tell, when they do these studies and ask people how much time they are spending on their phones, and whether that's the amount of time they would like to be spending, it's, well, no, that's way more. So that's a perfect example of a way in which people today are living, as it were, divided lives: on the one hand spending tons of time on their phones, on the other hand, on reflection, feeling alienated from how they're living.
00:38:20
Speaker
Yeah, that's exactly what the worry about autonomy is about. The question there would be: are the choices that we're making, our life goals, life plans, and life aims, authentically mine? Am I in control when I make my decisions in life? When you zoom in on a very minor decision, like whether I check my Facebook in the morning and how I spend one or ten minutes there, that's not going to
00:38:53
Speaker
be the right level to think about it. You need to zoom out and look at, for example, the larger patterns, or the extent to which I've become dependent on certain kinds of internet usage. And then you can ask: is this still something that I endorse upon reflection, or does it feel like something else is at stake? Oh yeah, Roberto, go for it. I have a side question here, still on this topic. You use the work of Thaler, the behavioral economist; he's the one who came up with the nudge, together with Sunstein.
00:39:38
Speaker
They also have this notion of sludge, which is something put into the transactional process that slows things down. An example of this: it used to be the case that if you wanted to cancel, say, your gym membership, you had to go in person during nine-to-five hours, and it was just really hard to get to the manager to cancel it. Whereas, I think 24 Hour Fitness was sued and now you can do it online. So now it's streamlined, but there used to be all kinds of sludge in the way.
00:40:14
Speaker
So I'm trying to apply this concept to the COCA environment. Sludge in our online transactions would be something like this: on Amazon, I have my payment method and my shipping address all instantly there for me to just click on; with one click I can buy whatever I wanted. So sludge would be having to manually enter your payment method, your shipping address, and your billing address too. Do you make use of sludge at all in your theories, or have anything to say on that?
00:40:55
Speaker
Not yet, I think. It would be another paper to write. But the worry is the same, right? The way it works is different: nudges typically make things easier, they facilitate specific kinds of behavior, that's the path of least resistance we were talking about, while sludges influence people by making things more difficult, more effortful, or more time-consuming. The example you gave, where you would need to manually enter your credit card data, is obviously not in the interest of the company, so they're going to make that as easy as possible. But they're going to make unsubscribing from a service, for example, more difficult. Or just, for example,
00:41:42
Speaker
being able to visit a website without cookies: you need to consent, and they're going to make it very easy to consent to accepting those cookies, that button is going to be right there. You could say there's a sludge element to just visiting the website without having to consent to those cookies. But the worry is the same. The question is: do these environments, the way they are designed with nudges and sludges all around, enable people to lead the lives that they want to lead,
00:42:24
Speaker
or do they push people in directions they no longer effectively endorse, or make it harder for people to set their own aims in life and achieve them? That's great. Thomas, do you want to hop in? Yeah, I just really like the idea I got from it, that sludges could be used against nudges, right? You could think about setting up a sludge in the sense that you get reminders every 15 minutes on your phone: do you want to continue using your phone, or should you actually be working, or something like that? Virtuous sludge, yeah. And sometimes people set this up themselves, right? They put their phones away so they need to walk to their phone, so it's more difficult for them to reach it.
00:43:22
Speaker
And it's actually a way of getting control back, of actually doing the things you deem valuable in life instead of wasting time.

Ethical Challenges of Manipulative COCA

00:43:32
Speaker
Yeah, I just want to ask a question to clarify. When it comes to COCAs being manipulative, on your theory it's not always wrong in an ethical sense; it's not always morally wrong for a COCA to be manipulative, right? It's more about when it pushes us; you both have this really interesting concept of perimeters of autonomy. I don't know if you want to get into that now, but basically, is it the idea that when COCAs manipulate us into behaving in a way that brings us beyond
00:44:12
Speaker
our perimeters of autonomy, that's when it's morally wrong, and outside of that it can be morally acceptable? Is that kind of what you're saying? That sounds about right. This notion of perimeters of autonomy refers to the idea that autonomy leaves quite some space and leeway. The plans we have in life, our overall life goals and aims and the way we think of the good life, typically allow for quite some room and leeway. As Sam mentioned, he would never be nudged or steered towards a vegan option. I'm a vegetarian, so you could never nudge or steer me towards a meat option. But this kind of principle, that I have to lead a vegetarian life,
00:45:06
Speaker
still allows for a pretty wide range of options, right? I might be pretty indifferent between, I don't know, veggie burgers or just vegetables. So if environments, online or offline, were to influence me within my perimeters of autonomy, just nudge me towards the new veggie burger rather than my go-to veggie option, that would be definitely less problematic than if they were able to steer me beyond my perimeters of autonomy. You could say that's typically what doesn't happen, right? Whatever kind of design you come up with won't
00:45:54
Speaker
make me pick a meat option. But again, remember these kinds of cumulative effects, right? If you get sucked into a rabbit hole of conspiracy theories, you could say that that person ends up, over time, beyond their perimeters of autonomy; they become a completely different person. Or the example Roberto just gave: is this the kind of life and the kind of interactions I want to have? Apparently, by making it so easy to order online, these are the choices I'm making. Is this still me? Is this in line with my overall values or overall aims? That's an open question, and the influences are to blame.
00:46:42
Speaker
Yeah. It's interesting, what you're talking about with the pattern. I'm wondering if there are almost two types of violations of someone's perimeter of autonomy. In one case, say someone is a vegetarian and they're manipulated into eating meat on a particular occasion; that act is directly inconsistent with their values. But on the other hand, I'm thinking about how, if someone's manipulated into spending tons of time on social media, in each individual instance maybe I'm
00:47:30
Speaker
looking at my friend's Facebook page or a band's Facebook page, and each individual instance is fine, I'm cool with that, there's nothing inconsistent between my values and looking at this band's or my friend's Facebook page. But when you look at the overall pattern of their life, they end up spending crazy amounts of time on social media, and it's that overall pattern that's really worrisome rather than the immediate action. Anyway, do you feel like that's a distinction that makes sense?
00:48:10
Speaker
Yeah, that makes a lot of sense. That's one way of putting the claim that a specific COCA design can be manipulative in a one-off instance: I am a vegetarian, but I still like meat, and the designers of all kinds of restaurants can exploit that vulnerability of mine. But the pattern you're talking about works in the same way; it also exploits certain kinds of vulnerabilities, and it becomes problematic only when you step back, look at the overall pattern, and ask whether that's the kind of life you want to live. All right.
00:48:50
Speaker
I just wanted to add that there's also, of course, a category of COCAs, or of influence, that makes use of our heteronomy, that pushes on the side of us we don't want to go into. You can imagine that if you have a strong phobia of something, if you're very fearful, that can be used in the interest of these commercial companies. They could induce fear, for example, and actually you don't want to be fearful; you would rather be rid of that fear than act upon it. It's a matter of predictability, and sometimes they can predict people in their heteronomy as much as in their autonomy. I think this happens a lot, right? We feel insecure at times, or we can easily be made to feel insecure.
00:49:43
Speaker
You could leverage that and use it against people, because you know that this is how they will act in the face of such challenges. I think that's the most malicious or problematic sort of manipulation: the kind that targets us in our vulnerability, in terms of our weaknesses. So this is a great transition here. Clearly there are some COCAs whose intention is for us to lean into those drives of ours that we really want to suppress, if you're a recovering alcoholic, or a gambling addict, or on a diet and you just can't have sweet, delicious ice cream. So those are pretty obviously morally
00:50:32
Speaker
concerning, at least those COCAs that drive us towards the drives we're trying to suppress. But we can also conceive of a COCA that in a sense promotes our autonomy: one that enables stricter adherence to a vegan or vegetarian lifestyle, or a healthy lifestyle, working out more often, et cetera. So can you tell us a little bit about these COCAs that might promote our autonomy? And then also answer the question: are those also in a way morally concerning, even when they at least superficially promote our autonomy?
00:51:16
Speaker
Perhaps Bart can say something about the ones promoting autonomy, and then I will say something about how they might still be problematic.
00:51:26
Speaker
Sure. You can think of examples; some have already been mentioned. Going back to me wanting to promote Roberto's health, for example: Roberto might be on board with this but still struggling with his cheese addiction, or he might be willing to exercise more, but then at the exact time when you want to pick up your running shoes and go out for a run, something happens and your will is weak. Well, there are tons of technologies that help you live up to your commitments. You've got all kinds of exercise apps that can make exercising much more fun, and there are all kinds of
00:52:16
Speaker
tools you can use to overcome those weaknesses, those vulnerabilities, and enable you to actually achieve the goals you want to achieve. In those kinds of cases, we would say they might be manipulative: they're gamifying things, making them more fun. As with nudging, it shouldn't really matter; it's not giving you a good argument, and it's pretty arbitrary whether you get a fake medal or something like that when you go for a run and hit a record. So it might be manipulative, but it doesn't violate or threaten your autonomy; it kind of supports or promotes that autonomy.
00:53:00
Speaker
That doesn't necessarily mean it's completely unproblematic, Thomas.
00:53:08
Speaker
Thank you. So the idea was that, as we talked about, these perimeters of autonomy can be used by companies, right? You want something, say new running shoes, but you don't know which to buy, and they will offer you their best choice, let's say, and you're totally fine with this. It's within these perimeters. And I think companies fight for this space; they fight to push the things they want you to do within this perimeter. We also discuss the ones that push you out of it. But I think the real question is: even if companies leave our autonomy intact, even if it's within these perimeters, could we really say that they respect our autonomy?
00:53:55
Speaker
And it seems that would be an exaggeration. They just instrumentalize our autonomy; they use it strategically. They latch onto what we desire and what we believe in order to sell their products. But actually they're quite indifferent to our autonomy. They don't really care about us being in control; it's just the easiest way to sell their products.

Balancing Commercial Interests and User Autonomy

00:54:20
Speaker
So that's another worry that we have: even if you believe that it's totally okay for Netflix
00:54:27
Speaker
or YouTube or Amazon to cater to your desires, you can still believe that you are being used, that there is this bigger picture in which you, as an autonomous individual, are just a means to get them what they want, more than they care about you getting what you want. So I think there are two questions here. One is: do the ends of the manipulator and the manipulee, the designer of the COCA and the user of the COCA, align? Sometimes they align, right? In a web shop, for example, it's a win-win situation: I want to buy my running shoes, Amazon wants to sell running shoes. What's problematic about that? The interests align completely. That's why it's a win-win situation.
00:55:17
Speaker
But in other cases they win by exploiting our vulnerabilities, right? That goes to show that they don't really care about our interests or our preferences. Sometimes it's beneficial for them to care, and then they will care; sometimes it's beneficial for them not to care and just exploit whatever vulnerabilities they are aware of. And that's the kind of disregard or indifference that we think is arguably problematic about
00:55:48
Speaker
these kinds of COCA designers.
00:55:53
Speaker
This reminds me a little of when I'm at the wine shop and I'm undecided about which $200 bottle of wine, I don't know if you know this, I roll in fat stacks of cash, which wine am I going to indulge in tonight? I don't know, and then I see the wine manager's pick: that's his favorite right there. That makes it easier for me. Same thing online. Amazon might know that I'm getting a whole lot of fitness products right now, and when I've got to choose the shoes and don't know which ones, it'll say best seller or Amazon's Choice or whatever, and that really does make it easier for me to say, well, a lot of people can't be wrong, so let's go with this one. So even when they are trying to promote my ends, it's really only at the superficial level. Ultimately, what they want is for me to make more purchases and become the type of person who exclusively purchases things on Amazon.
00:56:52
Speaker
Yeah, I think the concern is really skin-deep, right? It's just a means to get what they want; it's not truly caring about your choices or how informed that choice might be. Wine is, I think, a very good example: any fancy label is put on there, and if it sounds French, then you buy it. But it's not based on intricate knowledge of, or a comparison of, what you like and what would best fit your needs.
00:57:26
Speaker
This brings us back to the ends-based notion of manipulation that we were talking about earlier. It's the intention that matters; it's supposed to determine whether it's a case of manipulation, but also to identify what's morally problematic about it. Ultimately, the companies care about one thing, and that's the underlying intention behind these kinds of designs: they want you to engage in certain kinds of behavior, regardless of what your preferences or interests are. That, in our view, provides a clear picture of what is problematic about this whole story.
00:58:10
Speaker
I was going to ask: do you think we should start a movement trying to get them to rely only on rational persuasion in advertising and so on? I feel like I once read a paper where someone tried to argue that any advertising that doesn't go entirely for rational persuasion is just not good and we shouldn't have it. What do you think about that?
00:58:45
Speaker
Well, we're all in favor of the revolution, but I don't think we want to say that we are totally rational and need to be totally rational. We're emotional beings, we're irrational beings, and so on. So I think it's the instrumentalization that's really problematic: they just push our buttons, let's say, both as rational people but mostly as non-rational beings, and it serves their interests. That's the thing that, to me, is potentially so problematic about it. But if it's about the addictive side of social media, for example, then perhaps we need more stringent means to counterbalance or oppose these, because we know what the effects are.
00:59:38
Speaker
One way of looking at it, and this has often happened in the literature on nudging as well: once we realize how these companies are using these techniques to steer us into all kinds of behaviors that might be detrimental to us or to our social relations, the reaction is often, well, we can, for example, enable the state to nudge in the opposite direction as well. If companies are nudging us towards unhealthy foods, let the state nudge people in the direction of healthier foods, or to exercise more, or whatever. But then you're basically adding nudges to nudges and manipulation to manipulation. If there is something to the idea that it's problematic to be instrumentalized, with this kind of disregard for our interests
01:00:28
Speaker
and our preferences and our aims in life, and that's where part of the problem lies, then you could say the only solution is to regulate these kinds of techniques, or ensure that there are certain domains in life where these techniques are forbidden, or where we create these kinds of nudge-free zones, right? Because that's the only way to avoid being manipulated and to avoid the core of the problem arising. Just adding nudges in the opposite direction is going to make things worse in this respect. So creating these kinds of nudge-free
01:01:09
Speaker
domains or areas or spaces is the only way to actually solve it. Also, I feel like in certain cases, how would you even nudge in the opposite direction? I'm thinking of this book, I don't know if you've come across it, but it's kind of interesting: Outrage Machine by Tobias Rose-Stockwell, subtitled something like how tech amplifies discontent, disrupts democracy, and what we can do about it. He talks about this whole, I think it's kind of well known now, that one of the main ways that COCAs, especially ones that
01:01:50
Speaker
are peddling political content, get people to click on the content is by inciting a sort of anger, specifically moral anger. The thing that gets you most locked in and focused on some content, apparently, is stuff that incites a sort of moral outrage in you. So I'm thinking about how, okay, if that's one pole, where you're constantly being triggered into experiences of moral outrage toward the opposite political party, I don't know how you would nudge in the opposite direction. How would they go about
01:02:37
Speaker
manipulating you into being more peace-loving and friendly to your neighbor?

Social Media Algorithms and Polarization

01:02:43
Speaker
Anyway, it's something to think about. So I think different kinds of social media have different kinds of algorithms that work in different kinds of ways. The outrage machine makes me think of what used to be Twitter, right? X, that's where you get that kind of dynamic. I'm not on Twitter, so I don't know, but as far as I can tell people sometimes complain about it being a cesspit of all kinds of vile reactions. But it works, right? It gets people inside it, it makes them engage with other users. That's their business model, so to speak. Other apps, I think, have a different kind of business model where it's all about
01:03:30
Speaker
trying to ensure that people get a dopamine hit, or that they're happy when they're using the app. That would be kind of a happiness machine or something. It already sounds nicer, but again it exploits similar kinds of psychological mechanisms, specific kinds of emotional responses, again with the sole aim of achieving the commercial ends that underlie the design of these platforms. But I'm pretty sure that if Elon Musk were to tweak the algorithms of X, you would get different kinds of interactions, because the algorithm determines what kind of content, and from whom, you are exposed to, and that would definitely impact the kinds of debates or interactions you would have on these platforms.

Conclusion: Ethical Dimensions of Online Manipulation

01:04:20
Speaker
Yeah. It's interesting when people point this out.
01:04:25
Speaker
It almost seems like, with the rise of AI in these online platforms, with the gathering of all our data and thus the higher potential for manipulation, the onset of those kinds of technologies and the increasing polarization in America are occurring at a similar time. So it's interesting to think about what you've been saying about the maneuvering of COCAs in terms of some of the negative political ramifications.
01:05:06
Speaker
We're about an hour in here, so I just wanted to wrap up with a final question: is there anything else you would want to emphasize, anything you feel we didn't hit on in your paper?
01:05:23
Speaker
Not really. I just wanted to say, with regard to the final discussion: whether it is outrage or happiness, it is the path of least resistance that is the problem. We are so predictable in doing stuff that isn't necessarily the best part of mankind, let's say. So whatever it is they use might make the more effortful path, the hard way, even less attainable. And perhaps democracy, for example, isn't that easy to maintain. So the road downhill is kind of what we're headed for.
01:06:47
Speaker
Thanks, everyone, for tuning in to the AI and Technology Ethics Podcast. If you found the content interesting or important, please share it with your social networks; it would help us out a lot. The music you're listening to is by The Missing Shade of Blue, which is basically just me. We'll be back next month with a fresh new episode. Until then, be safe, my friends.