#13: Marianna Capasso: Manipulation as Digital Invasion

AITEC Podcast

In this episode, we speak with Dr. Marianna Capasso, a postdoctoral researcher at Utrecht University, about her 2022 book chapter “Manipulation as Digital Invasion: A Neo-Republican Approach”, which can be found in The Philosophy of Online Manipulation, published by Routledge.

Drawing on a neo-republican conception of freedom, Dr. Capasso analyzes the ethical status of digital nudges—subtle, non-intrusive design elements in digital interfaces that gently guide users towards a specific action or decision—and explores when they cross the line into wrongful manipulation. We discuss key concepts like domination, user control, algorithmic bias, and what it means to be free in a digital world.

For more info, visit ethicscircle.org. 

Transcript

Introduction to Dr. Marianna Capasso

00:00:16
Speaker
Today we're joined by Dr. Marianna Capasso, a postdoctoral researcher at Utrecht University specializing in the ethics of technology and political philosophy. We are discussing her book chapter "Manipulation as Digital Invasion: A Neo-Republican Approach", which applies a neo-republican theory of freedom to explore how digital nudges can overstep ethical boundaries and become wrongfully manipulative.

Dr. Capasso's Journey to Technology and Philosophy

00:00:41
Speaker
Welcome, Dr. Capasso. Thanks, thanks for the invitation. I'm so happy to be with you today and to talk about my article. And by the way, this was a book chapter, from 2022, so it's not so recent, and I've changed many of my ideas since then.
00:01:01
Speaker
Okay, well, we're excited about it anyway. And it's a great article. So let's dive into it a little bit. Well, maybe first, could you tell us a little bit about yourself?
00:01:12
Speaker
Maybe just share where you grew up and what drew you to philosophy, and what drew you to philosophy of technology. Anyway, just a little background. Yeah, yeah. Here's a little background about me. As you said, I'm a postdoctoral researcher in the Netherlands right now, at Utrecht University, and I'm Italian.
00:01:31
Speaker
So I grew up in a small city close to Naples called Avellino, and it's famous for its wines. White and red wines.
00:01:41
Speaker
Yeah, like Fiano and the other Italian wines. But yeah, I left my hometown when I was 18, and I spent more than ten years in Pisa, doing my bachelor's, master's, and PhD, and I started a postdoc in Pisa.
00:01:57
Speaker
And why philosophy? Because I was selected by the Scuola Normale Superiore. It's similar to the École Normale Supérieure in Paris, so it's a very high-profile school specialized in the humanities.
00:02:11
Speaker
And my first love was Kant. So, you know, all the "starry heavens above me and the moral law within me" and so on. So it was Kant. So I was trained really as a historian of philosophy.
00:02:28
Speaker
Also because in Italy we are trained basically in the history of philosophy. But during my master's thesis, I was researching something about application, Anwendung, in the moral sphere in Kant.
00:02:43
Speaker
And at that time, MIT had just released the Moral Machine experiment. So, you know, the re-adaptation of Philippa Foot's trolley problem, but with self-driving cars.
00:02:56
Speaker
So I was really fascinated by the idea of applying ethical theories to new problems involving technologies like self-driving cars. And that was the motivation behind the idea of doing a PhD in philosophy of technology.
00:03:14
Speaker
So my PhD was entirely focused on the ethics of AI, on freedom and control with AI, starting from 2018. So it's been more than five years for sure that I've been doing work in philosophy of technology. And right now, I'm a postdoctoral researcher in the Netherlands specializing in discrimination in AI. Fascinating.
00:03:39
Speaker
Awesome. Well, I feel like it's somewhat surprising that you grew up in a place of wine and poetic beauty and then landed on Kant, but maybe it's more about balance, more about balance in life. Anyway.
00:03:52
Speaker
No, I like Kant too. I'm not... Anyway, great. So let's turn a little bit more to

Understanding Digital Nudging

00:03:57
Speaker
your article. So your article focuses on digital nudging, and you're exploring it through the lens of political philosophy, particularly neo-republicanism, which is not the political party.
00:04:12
Speaker
So before discussing your philosophical framework, neo-republicanism, can you just introduce us to what digital nudging is? Maybe share some examples to illustrate digital nudging.
00:04:26
Speaker
Yeah, sure. Of course, everyone has an intuitive idea of what nudging is since the publication of Thaler and Sunstein's much-celebrated book Nudge. A nudge holds a really powerful promise, in the sense that a nudge can improve people's decisions without changing the incentives or the motivations that people have, just by changing how options are presented to them. So a nudge in general, a traditional nudge, is any aspect of the choice architecture that can alter people's behavior without forbidding any option
00:05:03
Speaker
or significantly changing their economic incentives. So, the design of cafeterias: putting healthy food at eye level boosts its consumption because the items capture the attention of consumers. But also, horrible pictures on cigarette packs can reduce cigarette consumption because of the emotional responses of users. Yeah.
00:05:29
Speaker
But what is digital nudging in the first place? Of course, it's the application of nudging in digital choice architectures. So it's a new domain, a new sphere of nudging.
00:05:44
Speaker
Yeah, digital nudging has received many definitions. Maybe the most famous one is this: digital nudging refers to the use of user-interface design elements to guide people's behavior in digital choice environments. And these digital choice environments can be smartphones, social networks, apps, websites, and also digital assistants like Alexa.
00:06:08
Speaker
And think about Netflix, too. If Netflix recommends you a movie based on your history, this can be considered nudging. Or, for example, I wrote an article a few years ago about Alexa, because Amazon had announced a new partnership with the UK National Health Service for giving health nudges to patients at home, even before COVID.
00:06:41
Speaker
So, just giving recommendations to users about how to improve their attitudes towards health. And last but not least, one thing I would like to mention is that scholars have coined a new term, "hypernudge", for when we talk about digital nudging.
00:07:04
Speaker
Why? Because they are hyper: they are based on big data and they are more interconnected. And of course, digital environments allow them to be more dynamic and to exert more efficacious and targeted behavioral
00:07:21
Speaker
influences. So my favorite nudge, the one that works on me all the time: I'm shopping on Amazon (of course I have to give Jeff Bezos more money) and it'll say, you can get this item today, but you need to go over $25. So is there anything else that you want to get? Here's a list of things. And I'm like, of course I need more mouthwash.
00:07:42
Speaker
Let's do that right now. And so is that a nudge? Am I getting that right, Marianna? Yeah, it can be considered a nudge. Yeah, one example I was thinking of was Duolingo. So gamification is a type of nudging.
00:07:56
Speaker
So Duolingo incorporates a lot of games in the app. And, you know, those games tend to motivate people to use it more.
00:08:08
Speaker
But it's a nudge because it's not as though the games are forcing you to use the app more. It's not as though you can't exit out. It's just that when you put games in an app, it's known that people are going to use it more.
00:08:23
Speaker
So it kind of nudges you into more use. Another really common one is default settings. So when you install software, agreeing to the terms is already opted in. Yeah, yeah. And so that's a nudge, like you said, because it's not forcing you; it's not as though you can't opt out
00:08:44
Speaker
of it, you know; it's not that you can't disagree to the terms. It's just that it's already been preselected for you. Right. Can you give me a quick example of a hypernudge, just to make sure that I've got it?
00:08:57
Speaker
Yeah, the ones based on big data. Like the one you mentioned about Amazon, for example: it's based on your history, on your own data.
00:09:09
Speaker
Yeah. It is true that it gets me every single time. Come on, Roberto! I don't know, I'm sorry. Anyway, yeah, exactly. I'm thinking, for example, you might think of Netflix, right?
00:09:28
Speaker
It's just a sort of human nudge: if after you finish one movie another movie is starting up, you're more inclined to watch that next movie.
00:09:41
Speaker
However, if you add that the specific movie they're queuing up was selected because they've already mined Roberto's personal life, and they know that he wants to watch King Lear...
00:10:01
Speaker
I don't know. I've also talked about, you know, this "okay, go to the next episode" feature.
00:10:12
Speaker
This is considered like never-ending scrolling; it's the same. You're always being kept connected, and it can be considered a sort of nudge, a tendency to implement certain kinds of digital influence on users.
00:10:28
Speaker
So yes, and this is an actual example from scholars working on these topics. Right, good. Awesome. Okay, that's great. So then maybe we can talk... Okay, so we know what digital

Distinguishing Digital Manipulation from Nudging

00:10:42
Speaker
nudging is. That's kind of what you're focusing on.
00:10:46
Speaker
Maybe we can now talk about: what would you say is the main question of your paper? Would you say it's the distinction between a digital nudge that's wrongful and one that's more benign?
00:10:58
Speaker
Because obviously we can think of more benign ones. We just talked about a couple of benign ones, right? It seems super benign to opt you into something rather than out of it, at least in certain cases. Anyway, point being: what would you say is the main question of your paper? Is it the difference between benign and more wrongful nudging, or...?
00:11:20
Speaker
Yeah, I would maybe put it differently. Yes, the main aspect of my paper is trying to disentangle what is wrongfully manipulative from what is not in relation to digital nudging. But my primary aim, my primary question, was to provide a taxonomy and some granularity to a field that is marked by inconsistent terminology and heterogeneity.
00:11:46
Speaker
And, I must admit, two different fields: one related to manipulation and digital manipulation, and the other related to digital nudging. Both presented ambiguity
00:11:58
Speaker
in their definitions, but also in the relative dimensions of the terms. So I would describe the main question of my chapter as: what is a conceptually grounded way to describe digital manipulation, and why and when is it wrong?
00:12:15
Speaker
And then: what are the main dimensions and criteria that characterize digital nudging as not just manipulative, but manipulative in an ethically or politically wrong sense?
00:12:30
Speaker
So this is maybe the main question behind the article, and why I've connected the two fields, digital manipulation and digital nudging. Because again, digital nudging has been defined as an approach that alters user behavior without undermining the freedom of deliberative choice.
00:12:49
Speaker
And manipulation is exactly this: the undermining of the freedom of choice. So I honestly see a really...

Neo-Republicanism and Non-Domination

00:12:58
Speaker
strong connection between the two different literatures: the one on manipulation, which is broader and more general, and the one on nudging and digital nudging.
00:13:09
Speaker
So it looks like we have a whole lot of concepts to cover. We've got to figure out manipulation, we've got to figure out maybe autonomy, and then... Okay, so let's talk about neo-republicanism. Just introduce the idea to us, and...
00:13:26
Speaker
And then maybe you can get into an example of how you apply it. Okay, yeah. There's a lot of neo-republicanism in the literature, so I'm trying to be very effective in saying this. Contemporary theorists like Philip Pettit, Quentin Skinner, and Maurizio Viroli, and others like Laborde and Valentini, have developed a new idea of freedom, freedom as non-domination, that sits in between two different senses of freedom that we have in the literature.
00:14:00
Speaker
One is negative liberty, the other is positive liberty. Negative liberty is when you have the absence of any kind of interference, of any kind of constraint on your actions, on your options.
00:14:11
Speaker
So all actions are legitimate for you; this is the liberal perspective on freedom. And then you have a more positive idea of freedom that is connected to self-determination: the realization of your own purposes, your own ends.
00:14:30
Speaker
And neo-republicanism, the idea of freedom as non-domination, is exactly in between, in the sense that it's the enjoyment of certain conditions in which non-interference (not in the merely negative sense of freedom as non-interference) is guaranteed.
00:14:47
Speaker
So it's guaranteed in a robust way, in any way, in any shape. And what does that mean? That no one, no agent or group in society, has a power over you that is arbitrary,
00:15:01
Speaker
a power that is not controlled by you and does not track your interests. And of course, there are many definitions of arbitrariness, and many definitions of control, in neo-republicanism.
00:15:14
Speaker
But the fundamental idea behind freedom as non-domination is that freedom is not just about actions or options; it's about status.
00:15:28
Speaker
It's about being a free citizen, being someone who has the status of a free citizen. And one traditional, much-cited example is that of the benevolent master.
00:15:42
Speaker
So the benevolent master does not interfere with the slave, does not do anything, because the master is benevolent. But still, he can exercise over the slave the constant threat of being interfered with.
00:15:58
Speaker
That means that the slave is constantly exposed to the possibility of interference of any kind at any time.
00:16:10
Speaker
Okay, that's interesting. So the benevolent master, that example is really important for motivating the neo-republican theory, right? Because the neo-republican theory is putting forward a certain concept of freedom that's not just negative liberty. So it's not just...
00:16:33
Speaker
How do you describe negative liberty? No one currently interfering with you? Yeah, currently. Okay. So, for example, I guess the problem with negative liberty, right, is: what about a drug addict whom no one is necessarily forcing, but at the same time we feel like he's not free? Is that kind of why we might not be totally satisfied with negative liberty?
00:16:59
Speaker
And then it seems like with the benevolent master, that shows... a person, yeah, the master is benevolent. In other words, he's not constantly saying, hey, do this, do X, do Y, do Z. Yeah, yeah. The benevolent master is the classical example where you don't have any interference, but you still have domination.
00:17:22
Speaker
Right, no interference but domination, okay. But yeah, you have domination because it's related to the relationships we have in society, the positions in society that we have.
00:17:34
Speaker
Okay, great. Now, just real quick, to ask kind of a devil's-advocate question, or maybe more of a stupid question. Someone says that you're free when no one has a power over you that is arbitrary, that's not under your control. But what do you say to someone who says: but wait, doesn't my wife kind of have a power over me that's not controlled? I mean, I can't really control my wife. Obviously I don't.
00:18:03
Speaker
She's another human being, and she can influence me in ways that are in some sense outside of my control. So anyway, can you distinguish what the difference is between a benevolent master and just the way in which other people are outside of our control? Anyway, this is a good question, Sam. Go ahead. No, no, it's a very good question, because again, if you think about it...
00:18:30
Speaker
But step back: if you think about, again, the relationship between wife and husband and all of this, there are also theories about patriarchy and these kinds of things. There is a strand of literature that is focused on, for example, structural domination,
00:18:49
Speaker
so also the way social norms are reproduced can be dominating. So in this sense, maybe I could say: okay, let's look at the context-sensitive relationship between the wife and the husband and see whether it is a dominating relationship or not.
00:19:04
Speaker
But apart from this, I would say that, of course, there are cases of mere influence. These are not interferences; they are not direct interventions into options that have value for you.
00:19:16
Speaker
So this is a first distinction. And maybe another distinction that I can make is this: again, the benevolent master is the classical example of non-interference with domination, but you can have cases of interference without domination in the neo-republican sense.
00:19:37
Speaker
And one example that Pettit had in his books was the example of Ulysses. Ulysses was bound to the mast of his ship by his sailors, but they did this under his instruction, following his ideas.
00:19:56
Speaker
So you can have forms of interference that are completely legitimate and can even increase your freedom. And another example he made was, of course, interference from the state, like taxes and all the rest. If we have a democratic net of control, they can be good for the citizens as a whole, as a collectivity.
00:20:17
Speaker
So you can have legitimate interference. Just real quick. So the difference between my wife and a benevolent master is that my wife influences me and so forth, but it's not as though she can just step in and force me to do things that are contrary to my autonomy or my interests. It's not as though my wife has the standing to say: Sam, you must,
00:20:52
Speaker
now, I don't know, tend the garden for 12 hours. Whereas the benevolent master, I guess, right? He has this standing. He's in a position,
00:21:03
Speaker
he has the authority, whatever, to intervene and say: no, I'm not going to let you eat, and you have to go out and work the field for a day straight, or something like that, right? So is that... am I close? Yeah. And there are also more subtle ways of being a master, which can be political, economic, or social.
00:21:30
Speaker
And the example of the wife, again: you can have different interferences. Maybe I was not clear before when I was talking about interferences, but interferences can be different in nature.
00:21:43
Speaker
So you can have manipulation, but you can also have coercion. You can have subtle ways of interfering with your options. So I have to look at the specific case and say: okay, maybe this can be coercion.
00:21:56
Speaker
So even in interpersonal relationships there can be some kinds of interference, but, again, looking at context-sensitive cases, they are not cases of domination.
00:22:11
Speaker
Sure, sure, yeah. I didn't mean to say that a wife necessarily cannot be dominating. My assumption was just that whatever kind of influence my wife has, it's totally benign. It's wonderful.
00:22:26
Speaker
It's very positive. So I'm trying to capture, you know... And it can be contingent. That's the point. It can be contingent and not robust, unlike the power of the master. Okay, gotcha. All right. Roberto, sorry, you were going to...
00:22:38
Speaker
Yeah, just one question that sprang to mind.

Structural Domination and Digital Influence

00:22:42
Speaker
So you're saying, on the neo-republican view, the opposition is basically to domination.
00:22:51
Speaker
And it sounds like you're saying that this includes domination not only in the regular legal form, or any kind of political apparatus of control (that's included), but also, it sounds like, even norms, right? So there might be social mores that neo-republicanism would argue against. Did I get that right?
00:23:13
Speaker
Yeah. When I was making that distinction: Philip Pettit and all the others I cited in the chapter represent a traditional strand of neo-republican political philosophy. But now we have many theorists, like Gädeke and others, who are trying to expand the idea of domination to include social norms, or beliefs that are upheld by a large portion of society and then reproduced by countless peripheral agents who maybe do not really interfere with people, but who reproduce these social norms. So even a structure can be dominating, not just agents, like groups or people like masters.
00:24:00
Speaker
And for me, this is fascinating because, again, think of racism and discrimination in society, for example. The point is that it makes some groups vulnerable.
00:24:12
Speaker
And this is totally related to the core idea of freedom as non-domination, the idea of being a free citizen, having the status of a free citizen. So the core idea is there, but they've just expanded domination to include structures and social norms.
00:24:31
Speaker
So Marianna, you basically just guaranteed that you will be invited back, so we can talk about that in particular. That sounds fascinating. So I think we have a grasp of non-domination for our discussion of digital nudges.
00:24:49
Speaker
So let's return to that. And I guess now, with non-domination in mind, you can tell us why digital nudges become problematic, or sorry, when they become problematic.
00:25:05
Speaker
I will let you take it from there. Yeah, maybe I can step back a bit with this question, because one idea behind my chapter was related to the fact that the literature working on digital nudging, on the manipulative nature of digital nudging, just concentrates on the fact that nudges can be problematic when they are non-transparent.
00:25:33
Speaker
So when the intention behind the nudge, or the modalities behind the nudge, are not clear and transparent to users. But in my chapter I tried to highlight how transparency is not the only criterion for assessing the manipulative nature of nudging. What I argued was that what makes digital nudging lead to a loss of freedom (and that doesn't mean just domination, but, first layer, simply a loss of freedom, okay?) is the fact that they are not democratically controlled, in the sense that they have an impact on options that do matter in social and political reality. Not just every kind of option, but options that do matter in social and political reality.
00:26:23
Speaker
And they are not sufficiently or adequately justified by a party or group of powers that should be held accountable for their selection, for their influences on users, on people.
00:26:39
Speaker
So it's a multi-layered description of what I relate to the manipulative nature of digital nudging. It's not just that they are opaque or non-transparent; it's related to how social actors use these influences on people.
00:26:59
Speaker
So, I mean, I guess a lot of things are popping into my mind right now, but in general, with these very powerful AI technologies, us regular Joes get absolutely zero say as to when they get launched and who gets access to them.
00:27:20
Speaker
And you have social media platforms with powerful algorithms that kind of suck you in and addict you, and no one asked for that. I guess Mark Zuckerberg didn't ask for permission to do that.
00:27:36
Speaker
OpenAI didn't ask for permission to release ChatGPT upon the education world, and all my students are cheating now. So in general, there is no oversight over these technologies.
00:27:52
Speaker
And so, of course, that means that for the digital nudging part of it, which is sort of embedded in there, there's no oversight over that either. So I definitely see that, and this is even before we get to neo-republicanism, right? So, okay.
00:28:07
Speaker
Yeah, and I can add to this that maybe one of the main requirements for manipulation is a careless attitude, in the sense that any actors can take actions or implement behavioral policies with a careless attitude, neglecting all the harmful effects they can have.
00:28:33
Speaker
And this happens because they are not worried about a democratic net of protection that can put safeguards in place, that can put a limit on their power.
00:28:47
Speaker
So could you tell us a little bit more about the transparency issue? Obviously, you want to include this extra element of consideration when it comes to figuring out what makes a digital nudge ethically problematic.
00:29:04
Speaker
You want to go beyond transparency. But could you tell us a little about why transparency is so important? My thought is just that it seems like you could have a nudge be non-transparent and yet it might still be okay. I mean, is the idea that if it's non-transparent, it's necessarily
00:29:32
Speaker
bad? Yeah, in the sense that it can still be legitimate. If it's okay, it's legitimate.
00:29:44
Speaker
Yeah, it can be legitimate. This is an example from Pettit, by the way, one of the main theorists of neo-republicanism.
00:29:58
Speaker
And he was talking about this nudge: the default for organ donation, with the possibility of opting out. And you know that in the Netherlands, for example, for a very long time there have been these websites, really serious websites, that are just opt-in for organ donation. So it's a real case of digital nudging in these years.
00:30:22
Speaker
And Pettit's analysis was: okay, consider this example, where you have a default rule for something like organ donation. If we look at this case, maybe the correctness of the message is okay.
00:30:38
Speaker
So even if it's not transparent, it can be okay, because there is a means of justifying it according to a social good or some other reason. So yes, my point in the chapter was: okay, there can be a non-transparent nudge that is still legitimate, but we have to look not just case by case; we also have to find ways to say,
00:31:04
Speaker
okay, there can be nudges that are non-transparent and that go against some social goods or norms. Think about a digital nudge like this in a domain like politics.
00:31:18
Speaker
It can go against the self-government of the people. Or think about, I don't know, health apps again. Okay, if we want to implement cases like that of Ulysses: okay, Ulysses,
00:31:30
Speaker
you have to really track the interests of people, you have to look at the values of people. And we have a lot of examples of health apps that suggest health nudges to people but have a backfire effect, in the sense that if you implement these technologies with a vulnerable population, say people affected by eating disorders or by a mental health condition, then you can really cause the opposite effect and not help them at all.
00:32:02
Speaker
So the point was to look at the criteria we have if we decide to implement a nudge, digital or not: whether, according to proportionality, to reasonableness, to any kind of legitimacy, there are values they want to implement or not, and to look at them not just from an individual perspective, but also from a group and collective perspective.
00:32:27
Speaker
Because, again, the proponents of nudging just say: okay, just look at the individuals. So there is a real responsibilization of the individual. But no, no kind of nudge is value-neutral.
00:32:44
Speaker
Digital choice architects of any kind are always implementing certain dominant values when they decide how to frame, how to present, the digital choice environment.
00:32:56
Speaker
So could I ask a follow-up on the issue of collective democratic control?

Criteria for Digital Invasion in Nudging

00:33:03
Speaker
So, you know, one thought you might have is: a nudge can be transparent or not transparent, but it's really problematic when the nudge leads the person into something that the individual, upon reflection, would reject. So, for example,
00:33:26
Speaker
let's say I'm getting nudged all the time to watch more Netflix, and upon reflection I would really like to stop watching so much Netflix.
00:33:39
Speaker
And you might think: okay, that's a case of maybe a manipulative nudge, because on reflection Sam actually doesn't want to spend all this time watching Netflix. But you're talking about a sort of collective, more democratic thing. Why isn't it sufficient just to say: look, it's bad when it goes against what I would affirm or endorse on reflection? Why do we need to talk about the collective? Is it only bad when many people would agree that it's... anyway. How we get to the collective there is what I'm wondering.
00:34:21
Speaker
We get to the collective when there is an interference with our basic liberties. So, for example, one of the ideas is when a nudge really discriminates against certain vulnerable groups that are already discriminated against by non-digital influences as well.
00:34:40
Speaker
So think about, again, when there is an exacerbation of some existing discrimination. And in the chapter, I've made this distinction: okay, there are two different senses in which, of course, you can have a radical manipulation.
00:34:55
Speaker
One goes against this idea of basic liberties, like discrimination and the others. And one can maybe be related to what you are saying, in the sense that there can be values that are strictly related to groups and individuals.
00:35:09
Speaker
And this is a context-sensitive perspective on manipulation. Also cross-cultural, we can say, because again, we can have basic liberties, but basic liberties can change across contexts and through time.
00:35:23
Speaker
And one of the main ideas and principles behind responsible research and innovation in Europe over the last 20 years was that of responsiveness.
00:35:34
Speaker
That means that any kind of technological innovation, any kind of technological product, should be not just inclusive, trying to embed the perspectives of users when we design and implement technology, but also responsive. And responsiveness means changing the direction of innovation according to the values of people.
00:35:57
Speaker
So according to the space and time in which we implement technologies. And I would say that our example is about the second criterion that I listed: that manipulation can be a way to distort value-relevant options in front of the users. But what does valued mean?
00:36:19
Speaker
It's not fixed. We have to reflect every time on what value alignment means: what it means to implement technology that really aligns with the values of people.
00:36:34
Speaker
So we're going a little bit out of order here, but I think we should outline and discuss in detail these criteria that you listed earlier, so that the listeners are with us here.
00:36:47
Speaker
Absolutely. Yeah. So you introduced this idea of invasion, right? And there are four criteria that constitute an invasion.
00:36:59
Speaker
And so let's go over these. I'll just say them out loud, and then we'll return to the first one and go one by one, I suppose. Inherently hostile, that's one criterion.
00:37:13
Speaker
They subvert relevant valued options. They expose someone to uncontrolled misrepresentation, that's very interesting. And displacement, which means not letting the person check and counter-control the interferences.
00:37:27
Speaker
So I think for the inherently hostile one, we have some very easy examples, right? There are some nudges that are clearly trying to get you to do something that's counter to your interests or to the interests of the collective.
00:37:40
Speaker
Yeah. And you were just discussing the ones that might subvert our values. Should we do another example of that one, so we can really zero in on what that might mean?
00:37:53
Speaker
As I said, valued option. It's very difficult to define what a valued option means. And the example that I made before, the backfire effect, for me is a very clear sign of how options can change according to the vulnerable group in question. So if you implement health apps for vulnerable groups, again, people with eating disorders, you can increase addiction
00:38:25
Speaker
instead of reducing it. So for me, valued option means being context-sensitive every time you implement these kinds of behavioral influence policies. But again, digital nudges can shape users in ways that may or may not be conducive to various social values. Reinforcing self-reinforcing biases can lead to the creation of filter bubbles, for example, or echo chambers.
00:38:54
Speaker
So again, another important task is establishing which options should be understood as valuable, and that can be controversial. This is not easy.
00:39:05
Speaker
Again, as with the example of the organ donors, it's not easy. Yeah, I almost feel like I want the listeners to really understand the gravity here. You mentioned echo chambers.
00:39:20
Speaker
It really is the case that we don't want certain closed loops, right? We don't want social environments where people are believing things that contradict evidence and science and rationality.
00:39:41
Speaker
But these environments do exist. And because they're closed loops, they don't allow any contradicting evidence into the information sphere, and so their values sort of get corrupted. It's almost like their epistemic values are getting corrupted too. We don't want that sort of thing happening, and this is what leads people, and I've heard some professors say this, to say, you know, if you believe in too many conspiracy theories,
00:40:13
Speaker
I don't want you voting. And so this is just another example of these fears, where these echo chambers are a real problem, because if we don't pinpoint exactly what's wrong with them,
00:40:27
Speaker
you get to saying really illiberal things like, well, let's not let them vote anymore. So I don't know. Yeah, and again, this is one of the main criticisms against nudging in general: the paternalism.
00:40:39
Speaker
Why do, I don't know, intellectuals have the right to decide for people and have an epistemic power over them? So it cuts both ways. There's still this criticism right there.
00:40:54
Speaker
Why do we need to implement nudging in the first place? But maybe there is another example I can give to make this criterion of the valued option clearer, apart from echo chambers and all this literature on democratic liberty online.
00:41:12
Speaker
Think about your Amazon example at the beginning again. Think about when Amazon nudges you to buy products based on what other customers bought.
00:41:23
Speaker
So your social network, okay. And this is nudging by social norms; that's the name they give this kind of nudging. So social norms, or even credible apparent norms online, can emerge from social interaction online and can even change our sense of the value, the rightness, the correctness of what we are doing.
00:41:46
Speaker
So think about this concrete example. I'm a woman. I'm young. I don't want to have kids. But anyway, Amazon is nudging me to buy products related to kids.
00:41:59
Speaker
So is this manipulation? Is this a social norm that I would want to uphold in counterfactual scenarios? Social norms are about the internalization of behaviors and beliefs in society.
00:42:14
Speaker
And we should look at the content of these social norms. What do they represent for the individual and, again, the collectivity? Yeah, so we don't want Amazon to misrepresent that having children is a choice, right? They're trying to say it's necessary. So good, okay.
00:42:33
Speaker
Wait, so yeah, just real quick. So that's a subversion of value options because you have a certain set of values, and then in this nudging there's implied
00:42:50
Speaker
a certain set of values. A dominant one, yeah. A dominant perspective that maybe you do not endorse. But just one aside: of course, my perspective doesn't want to be overcomprehensive.
00:43:05
Speaker
Not every kind of influence that we have, like in nudging, is manipulative or dominating. I'm just saying, look at the content of the social norms that are upheld online. All these mechanisms, all these framings of choice environments, can lead to certain choices that can be the symptoms, the signs, of something deeper.
00:43:30
Speaker
Something that was really related to non-neutral, value-laden decisions by the choice architect. Good. Real quick, so with these four criteria that we're going through, inherently hostile, subverting relevant value options, exposing someone to uncontrolled misrepresentation, and displacement, is the idea that all four need to be present, or can meeting just one of them count as making a digital nudge invasive?
00:44:02
Speaker
No, I just listed these criteria, and again, you don't have to meet all four. Okay, got it. Yeah, just to be clear. Okay, great. And they can be cross-sectoral, cross-domain. Okay, great. Right. So we'll have certain invasive nudges that are not necessarily inherently hostile, but they are subverting relevant value options. And then actually, just real quick, distinguishing those two: inherently hostile versus subverting relevant value options.
00:44:30
Speaker
I mean, I guess inherently hostile would be like trying to get someone to, you know, eat a fourth Big Mac, because that's just really bad for you. It's just intrinsically...
00:44:46
Speaker
Right. Would that be an example where it doesn't matter what your value system is, it's just intrinsically... Yeah, yeah, exactly.
00:44:57
Speaker
It's a problematic nudge when it really touches upon some basic liberties. So my examples were fraud and discrimination, something that is really against the self-interest of people, based on basic liberties.
00:45:10
Speaker
Okay. But again, basic liberties can also change through time, so we can have new rights. Now there is this idea of the right to mental integrity, against any kind of invasion from technology.
00:45:22
Speaker
So it's based on rights. Okay. The second one is based more on the values that people can have at a specific time, in a specific period, and also on the individual. Right. Because with the one you were mentioning, recommending content related to childbirth or something, obviously there are going to be certain people for whom that's not subverting their values, but there will be others for whom it is.
00:45:48
Speaker
So anyway, whereas with an inherently hostile one, you don't really need that much contextual analysis. It's just obviously... Yeah, gotcha. Okay, great.
00:45:58
Speaker
Sorry, I just want to... Neo-Republicanism is much more nuanced than I initially grasped, and I'm not sure my smooth brain is capable of handling all this.
00:46:09
Speaker
And this is not just neo-republicanism. Neo-republicans here try to apply neo-republicanism to the realm of philosophy of technology and all the discourse about digital manipulation. So it's even more complicated, more layered. And as I said to you, neo-republicanism is not something fixed either.
00:46:26
Speaker
There are scholars still working on it; it's constantly developing. This is an advancement right here. Anyway, maybe we can move to the next one. Like Roberto said, this is a really cool one.
00:46:38
Speaker
Exposing someone to uncontrolled misrepresentation. Yeah, so can you flesh that out a little bit for us? Okay, yeah. I must admit, of the four criteria, the last two were the most neo-republican, in the sense that, again, what does uncontrolled misrepresentation mean?
00:47:03
Speaker
A misrepresentation, for Philip Pettit, again, is an interference, and it can be deceptive or manipulative. And the latter, of course, can involve true statements, in the sense that it does not have to involve deception.
00:47:20
Speaker
So it doesn't imply stating falsehoods; it's just about giving a misleading impression, for example through the relevant omission or abundance of information. And my example was, I don't know, maybe the Facebook experiment a few years ago that intentionally changed many users' news feeds but omitted to inform the users about it.
00:47:43
Speaker
Or again, uncontrolled misrepresentation in Pettit's terms means the use of false positives. And false positives are partisan misrepresentations that pretend to be supported in the name of the common good, while in reality they just promote the objectives and goals of one sectional part of society.
00:48:12
Speaker
And for me, this was a kind of revelation to read about, because in the same period I was reading, in the philosophy of technology literature, about phenomena like the Googlization of health research. That means that, you know, Google and all these big tech companies can really enter into the public sphere, like public research on health; and we have cases in the justice system, not to mention the healthcare system. And these big tech companies can really have a say in how this public sphere is implemented, or whether it serves the common good.
00:48:52
Speaker
So this criterion is basically pointing out: okay, behavioral influences are based on individual choice, but just step back and look at the broader picture, at the big picture.
00:49:06
Speaker
So who are the actors behind them? What do they say? Are they implementing false positives? Why? Interesting. Yeah. So maybe a really simple example of misrepresentation would be like,
00:49:26
Speaker
you know, I'm subscribed to Netflix, and then if I unsubscribe from the email chain, maybe they lie and say it'll cancel my entire subscription. That would just be an obvious case of misrepresentation where they're basically lying to you. I mean, this is not a real case, but...
00:49:52
Speaker
Yeah, deception. Okay, that's just deception. Because this is what's more fascinating: if you're lying, if you're stating untrue stuff, it's just deception. You can say only true statements, something completely legitimate and completely, you know, based on epistemic evidence, but still, through the relevant omission or abundance of information, you can be kind of manipulative.
00:50:15
Speaker
Yeah, could you explain that? What's the difference between just deception and uncontrolled misrepresentation? And this is also fascinating, because it was a real topic in the literature on manipulation too.
00:50:31
Speaker
So not just digital manipulation, manipulation as such: how we define manipulation in contrast to deception, and why it's so different. Because manipulators don't have to deceive people.
00:50:47
Speaker
They can do their work just by really having knowledge of how people's minds work. So even based on... Yeah, I'm just thinking of one example of very ordinary manipulation: all the targeted influences, like automated profiling based on big data collected, again, on your past experience or your network.
00:51:14
Speaker
And you do this by omitting information that could be available to you. This is one example. Yeah. I was thinking of another very ordinary, everyday-life example where there's manipulation but no deception: guilting people, like sending them on a guilt trip. So one way of manipulating someone is being like, hey, can you do this for me? And by the way, remember what I did for you last week, and last month, and three years ago, and five years ago.
00:51:48
Speaker
Yeah. That can all be true. You can really have done all those things, but it's still a manipulative guilt trip. So it doesn't include deception, in other words. Not always. Manipulation is not always related to deception.
00:52:04
Speaker
And this is fascinating because, again, we are thinking about, you know, the heavy actions big tech can take, dark patterns and all this stuff. No, it's not just this.
00:52:14
Speaker
That was my point. It's not just stating falsity or, you know, operating in the dark; it's more than that. You can also be public. You can also be out there.
00:52:26
Speaker
You can also base your influences and all your processes on something that can be considered epistemically legitimate in a sense, but you can still be manipulative, looking at the power relationships that you uphold in society.
00:52:47
Speaker
Great. I mean, would this be an example closer to uncontrolled misrepresentation: you can imagine a situation where, let's say, a social media algorithm is emphasizing one side of an issue more than the other. In other words, suppose there's some major political question, and there's some set of evidence for one position and some set of evidence against that same position.
00:53:23
Speaker
And so it's not as though, by promoting one set of evidence, you're doing anything deceptive. It's just that if you were to only promote one side, it's almost like a Texas sharpshooter fallacy. Anyway, if you were to only promote that one side, maybe it's a kind of misrepresentation, because you're giving a false sense of how much evidence is on each side. Yes. Of what's going on. Yes.
00:53:51
Speaker
Yeah. Anyway, I don't know. And the reason why this is so interesting, I think, is because psychologically, if you just give us a reason for believing something, people are more likely to accept that conclusion, right? So, you know, giving someone both sides of an issue creates a little bit of suspension of judgment. I don't know, I'm being all skeptical here. But there's a study where there's a line of people waiting to make copies.
00:54:21
Speaker
And a confederate of the study goes in and tries to cut the line and says, sorry, can I cut? I need to make copies. That worked better than if they just asked, can I cut?
00:54:33
Speaker
and so But it's obvious that they need to make copies. Everyone standing in line needs to make copies. That's why they're standing in line. But people just hear a reason And they're more likely to accept things. I don't know. Our psychology is weird is what I'm saying. But so yeah this sort of thing can really um get people to comply in in an automatic, non-conscious way. And there's something things so, to and ah to me, important about this one. So anyways, I just wanted to know float that over back to you, see what you think about it.
00:55:03
Speaker
But yeah, just to follow up on this: the best manipulator is the one that really has knowledge about your cognitive behaviors. So the more you know about my cognitive attitudes, my cognitive habits, the better you can do your job as a manipulator.
00:55:20
Speaker
And that kind of gets into hypernudging, the idea that in our time they can really leverage a wealth of psychological and behavioral insight when they are manipulating us, and
00:55:37
Speaker
for their... Yeah, totally true. Great. Well, maybe let's go to the last one, I guess: displacement. So not letting the person check and counter-control the interferences. This is the fourth criterion for counting a digital nudge as invasive.

Public Accountability in Digital Nudging

00:55:55
Speaker
Mm-hmm.
00:55:57
Speaker
Yeah. For this fourth one, maybe I can say a little more about the ways scholars have tried to prevent manipulation via nudges.
00:56:12
Speaker
So, for example, most of them say, okay, we can prevent manipulation via nudges basically by implementing a principle of publicity. That means that any political actors, any industry, any governments that implement nudging can just say out loud, okay, we're implementing nudging, and we can justify it.
00:56:36
Speaker
And that's all; this is a way to prevent manipulation. So again, being out there is a way to prevent manipulation. But in the chapter, I've tried to explain that this is not effective in the digital domain, because a lot of big tech, but not just big tech, also a lot of industry, just don't care too much about our consent, first.
00:56:58
Speaker
And second, even if they say out loud, okay, we're implementing this, there are no safeguards. Or at least they're trying right now: in Europe we have the AI Act now, and the Digital Services Act, and all these kinds of new regulations that are trying to set a limit, that are trying to say, okay, do a data impact assessment and a human rights assessment of what you are doing. So not just based on risk, but also on human rights now.
00:57:32
Speaker
But still, publicity is not sufficient to prevent manipulation. And we can also return to the idea of the benevolent master, no? You can say, okay, I'm not interfering with you, but I have the power to do it.
00:57:47
Speaker
So the last criterion is based on counter-control. That means you don't have to counter-control just in a simple way, like quitting the app.
00:58:03
Speaker
No, it does not just align with the power to exit the app; it's based on filling public accountability gaps.
00:58:14
Speaker
That means that any processes, any systems that are implemented should not just be put out there in a public way, but also explained to people.
00:58:27
Speaker
So being understandable. And people should also have a means of contestation: those affected should have a means to see it and say, okay, we do not align with this idea, with this process, with this technology.
00:58:42
Speaker
And I can give you maybe a few examples, like a really recent example. In Italy, a few years ago, there was this case against Deliveroo, because Deliveroo was using an app, of course, for the drivers.
00:59:00
Speaker
And this app functioned on two parameters of, you know, the reliability of drivers. And again,
00:59:11
Speaker
the drivers and the national Italian labor organization took Deliveroo to court, because they found out that the app was discriminating against people who had just decided to go on strike that day, or against people who had caregiving responsibilities that day.
00:59:32
Speaker
So the app was not able to differentiate between different reasons for the late cancellation of drivers.
00:59:43
Speaker
And for me, this is an example where the drivers did not have the power to counteract, to understand how the app worked, but also to say, hey, okay, I don't want to be within this organisation in this way.
01:00:03
Speaker
I'm not a free worker. And there is a strand, of course, of scholars who are now developing ways to talk about algorithmic domination in the domain of work.
01:00:15
Speaker
But yeah, this is another story. What do you say to people who are thinking, well, you know, the marketplace is sort of a naturally corrective sphere? In other words, if big tech, let's say Spotify, starts doing stuff that really doesn't engage with our concerns, doesn't respect our concerns, well, we can predict that down the line people will stop using
01:00:53
Speaker
Spotify. So I guess that's the idea: there's a sort of marketplace correction, that we will vote with our feet, so to speak. We'll disengage from Spotify.
01:01:03
Speaker
And so that's kind of a natural corrective to these various forms of tech being super... anyway, you get the point. I'm just curious what you think about that whole line of thinking.
01:01:20
Speaker
Yeah. I don't want to disengage with Spotify, but yeah, this is a traditional criticism of nudging, even outside the digital context. You have market-driven, profit-driven nudging,
01:01:36
Speaker
which is something different from public policy nudging. And of course, it's visible that they are market-driven, profit-driven, and there's nothing to do about it. It looks like this is the game.
01:01:49
Speaker
These are the rules of the game, and we have to stick by them. But, for example, and some scholars have done research about this, we can think about implementing public policy nudging done democratically.
01:02:06
Speaker
And this can be a way that is more transparent, more democratically controlled, and it can be a way to counteract many of those private, market-driven influences. So maybe implementing a large program that could increase rather than diminish the democratic control that we can exercise over our digital choice environments.
01:02:31
Speaker
So they are thinking about how to have both, and how they can really support a more democratic net in the digital sphere.
01:02:46
Speaker
But yes, again, we have an intuitive idea of this. Of course, everyone does, especially after Cambridge Analytica. It's out there.
01:02:58
Speaker
We have not even begun to crack how interesting this whole approach to digital nudging is. Maybe by way of closing, I feel like there's so much that, you know,
01:03:15
Speaker
there are so many uses for the neo-republican approach that you're advocating.

Social Norms and Discrimination in AI Research

01:03:21
Speaker
Is there anything that we haven't touched on yet that you just want to highlight? Like, hey, this is a good lens to look at things through, there's something insightful here. Or if you want to reemphasize something instead, whatever you want in closing.
01:03:38
Speaker
Yeah, maybe, and this is a way to relate it to my own research right now, because, as I said at the beginning, I'm currently working on a huge Horizon Europe project on discrimination in the domain of work when you use AI
01:03:54
Speaker
to score candidates and all this stuff. So it's, again, hiring recommendation: when you have AI select people for you, selecting job candidates instead of a human making the decision, what do you think is a good way to prevent this kind of discrimination? A few years ago, Amazon implemented a CV screening system to score candidates,
01:04:21
Speaker
and the system was discriminating against women. Why? Because it was based on the last 10 years, in which Amazon had basically received CVs mostly from men. So this is just a reflection of the existing discrimination that's right there in society. It's already there.
01:04:47
Speaker
But when you use AI, it's exacerbated. It's at scale. So how can I connect what I just said with my chapter on manipulation?
01:05:02
Speaker
It's based on the fact that any kind of influence, any kind of selection, is not neutral.
01:05:14
Speaker
And we cannot have something completely debiased, because bias is right there. There's always been a skew towards some groups; there's always been something like that. It's never completely exempt from bias. And that's why maybe social science and humanities research can help computer science and STEM research to debias systems, in the sense that it's a way to see, okay, we should go beyond a purely technical, techno-determinist approach
01:05:50
Speaker
to debiasing, or to thinking about manipulation in digital choice environments, and have a bigger picture of it, and look at how discrimination and manipulation really have a social component
01:06:05
Speaker
based on social norms. So, trying to integrate qualitative studies when you're doing this work. For example, in one of our latest research projects, we basically conducted a qualitative study of real CVs and noted that there are a lot of data points that can be used as a proxy for discrimination: a gap in a CV can be a proxy for gender, and all this kind of stuff. So when you are creating a dataset based on this data, you have to really look at the data, because data is not neutral. It's never neutral.
01:06:43
Speaker
And try to find a way to uncover and make explicit certain social mechanisms within the data.
01:06:55
Speaker
We've been in conversation with Dr. Marianna Capasso. We've been talking about her article, Manipulation as Digital Invasion: A Neo-Republican Approach.
01:07:06
Speaker
Dr. Marianna Capasso, thank you. Thank you both.