
Intelligent Systems: Digital Culture Shock

Breaking Math Podcast

In this conversation, Dr. Katharina Reinecke explores the intersection of technology and culture, discussing how cultural assumptions shape the design and functionality of technology. She delves into the implications of self-driving cars, the importance of understanding diverse user experiences, and the challenges posed by a predominantly Western perspective in technology development. The discussion highlights the need for greater cultural sensitivity in technology design and the potential consequences of ignoring these differences.

Takeaways

  • Technology is not culturally neutral; it reflects the values of its creators.
  • Self-driving cars are based on American commuting assumptions.
  • Cultural differences significantly impact user experience and technology design.
  • Efficiency in technology can undermine social interactions and relationships.
  • WEIRD populations dominate technology research, leading to biased outcomes.
  • Universal design principles often fail when applied globally.
  • Stack Exchange exemplifies individualistic design, contrasting with collectivist values.
  • AI systems must be designed with cultural sensitivity to avoid reinforcing biases.

Chapters

  • 00:00 Understanding Digital Culture Shock
  • 03:53 The Challenges of Autonomous Vehicles
  • 06:21 Cultural Assumptions in Technology
  • 08:37 The Impact of AI and Data Bias
  • 10:32 Efficiency vs. Social Interaction in Design
  • 12:14 The Concept of 'WEIRD' Populations
  • 14:24 Cultural Values in Digital Platforms
  • 21:53 The Simplicity of Design and Its Cultural Impact
  • 22:51 Efficiency vs. Community: The Stack Exchange Debate
  • 25:41 Adapting Global Platforms to Local Norms
  • 31:52 The Implications of AI and Digital Infrastructure
  • 34:34 Recognizing Cultural Bias in Technology Design
  • 37:42 Technology as Culture

Follow Katharina on Twitter, LinkedIn, Bluesky, and find her new book here.

You can find Lab in the Wild on Twitter and Bluesky.

Subscribe to Breaking Math wherever you get your podcasts.

Follow Breaking Math on Twitter, Instagram, LinkedIn, Website, YouTube, TikTok

Follow Autumn on Twitter, Bluesky, and Instagram

Become a guest here

email: breakingmathpodcast@gmail.com

Transcript

Milestone in Autonomous Vehicles

00:00:00
Speaker
Picture this: you're in a city where you don't speak the language, you don't recognize the road signs, and the cars behave like they're all in on a joke you've never been told. But the reality is that you're not the confused tourist.
00:00:15
Speaker
It's actually a self-driving car from Arizona. In February 2023, Waymo proudly announced that its autonomous vehicles had driven over a million miles on U.S. roads without a human behind the wheel.
00:00:30
Speaker
Fewer accidents than human drivers, the safety stats said, and the future, apparently, had arrived.

Technology and Cultural Reflections

00:00:37
Speaker
But here's the catch. The future was tested on a very specific slice of the world: wide roads, clear signage, relatively orderly traffic, and a particular set of cultural norms about what good driving looks like. Our guest today is Katharina Reinecke.
00:00:56
Speaker
She argues that technology is never just about the code and clever algorithms. It always carries the fingerprints of whatever culture made it. And when those digital fingerprints are pressed into other parts of the world, they don't always fit.
00:01:12
Speaker
Now, whether it's Cairo traffic or South Korean search engines, we're going to talk about what happens when technology behaves as if everyone everywhere thinks like a designer in Silicon Valley.

Digital Culture Shock Defined

00:01:25
Speaker
I'm Autumn Phaneuf. And this episode of Breaking Math is about power, perception, and the quiet shock you feel when technology clearly wasn't built with you in mind. So, what is the definition of digital culture shock? It's basically very similar to...
00:01:43
Speaker
what we experience in interpersonal culture shock when we go abroad, when we meet people from a very different cultural background, people who, you know, might wear different clothes. Sometimes we just see different buildings.
00:01:58
Speaker
Food is a big one too, right? And I don't know whether you've experienced this before, but when you experience culture shock, it's really unsettling, right? Maybe initially we are very excited about all these different inputs.
00:02:15
Speaker
But after a while, we often have this feeling of just not really knowing what these signals are. We can't really interpret them. And it becomes a feeling of anxiety.
00:02:31
Speaker
And researchers have actually found that people become more anxious, they can develop depression. There are physiological signals too; for example, an increase in blood pressure and things like that, right?
00:02:46
Speaker
So culture shock is really real in interpersonal communication, and digital culture shock is actually not very different. When people experience a technology that is developed by people unlike themselves, people with different cultural values, for example, it might be a little more subtle, but people will notice something is off. Something isn't quite as I expected.
00:03:07
Speaker
And maybe I will find things not quite as intuitive as I was hoping for, right?

Designer Responsibility in Tech Design

00:03:12
Speaker
And users tend to blame themselves. They tend to say, it's me, right? I am not able to use this software. But in my field, human-computer interaction, we would actually say, no, it's not the user's fault, right? It's the designer's fault for not anticipating how the user will perceive the software or the technology and how it should be designed to be more intuitive.
00:03:32
Speaker
And so when there is a mismatch between how the software is designed and how the user perceives it, that's what I would call a digital culture shock. So you start your book with an autonomous vehicle, Waymo, as the example, right? Right. So do you want to talk about some of the autonomous cars and what actually happens there?
00:03:54
Speaker
Yeah, I mean, I'm fascinated by self-driving cars because I, for the longest time, did not believe this would be possible. And I'm still not 100% convinced. But the reason I put this example into the book is because the way Waymo has launched it
00:04:09
Speaker
is actually quite smart. They tested the cars in what I would call almost controlled environments, right? Very safe environments, so to speak. So, when I give a talk on this subject, I often include an image of a very nice streetscape where you have lane markings that are 100% correct and everything is sort of easily predictable.
00:04:29
Speaker
And I would say that's very different from most of the world when you go to a different country. So in the example I use in the book, Cairo, Egypt, you won't find a setting like that, right? Suddenly, there will be pedestrians on the street, donkey carts, there will be more buses; it will be just generally what some researchers call a bit more chaotic driving behavior, right?
00:04:52
Speaker
And it's not just a place like Egypt; I've seen similar situations all over the world, even in France or Italy, right? I just simply don't believe self-driving cars could be as safe there as they have been proven to be in the US.
00:05:11
Speaker
And now, you might have heard about a few accidents here and there, and that is definitely still the case. I think Waymo itself has a pretty good history. When you look through their data, it's a fairly safe bet at this point for them. But again, only in these very controlled environments.

Cultural Specificity in Technology

00:05:26
Speaker
And, you know, actually, I think just a few months ago, Waymo started testing in Seattle, partly because they didn't have any data on rainy streets. Right. And maybe also the potholes in Seattle and so on. I don't know.
00:05:37
Speaker
But, you know, that just shows you can't just transplant something that has been trained in one country, or in one part of the country, to another. It's just not how this works. And I would say that's the same for all of technology. I also started with this example because I feel like there's an analogy between how cars are trained on data and then are able to predict the environment around them, and how we as the developers or designers of software are trained. We also gain experience throughout our lives, and it forms our culture. That data is important.
00:06:10
Speaker
It's basically what we take and what we use to react to technology and use it, but also to develop it and design it, if we are in that kind of job. So something that you talk about in the book is American assumptions, right?
00:06:27
Speaker
So whether that's a self-driving car or data that ends up embedded in a robotaxi, what happens with that data, and how does it transfer from one scenario to the other?
00:06:39
Speaker
Yeah, I mean, the reason I'm calling it an American scenario is because the assumption that people want to have self-driving cars, to me, is already a very U.S.-American assumption, right? Many people in the U.S. drive to work. And the idea of commuting by themselves is, you know, probably quite unique in the world.
00:07:03
Speaker
In most places, that's just not the case. People commute together, there's public transport, or they share rides, or they might just commute by foot or bike, and so on, right? So this idea of having long commutes by car, and that being a really uncomfortable experience, so you need self-driving cars to alleviate it, that is a very American assumption to me. And, you know, there are definitely lots of upsides to self-driving cars, and I don't want to be in the position of talking them down, because we know they can also make things more accessible for people, and so on, right? So there are people who definitely benefit from them. But I think, altogether, it is a very U.S. assumption to even have this need. And so the other question I think that you had is about this data that we're

Bias in AI Training Data

00:07:53
Speaker
training it on, right? So if we are training robot cars, just the same as all of AI, right?
00:07:58
Speaker
If we're training that on Western data, we cannot assume that it's going to work for other contexts. It just doesn't transplant like that. And it'll break and actually lead to culture shocks. And I think this can backfire for businesses because they don't necessarily...
00:08:15
Speaker
want to have a flop in a different country, right? And in the case of self-driving cars, it can actually have severe security and safety implications. And ultimately, it can be very upsetting for users if things are not designed for them. And so that's what I talk through in the book with various different examples. So do you have a favorite example that you've gone through in the book?
00:08:37
Speaker
Yeah. You know, there are so many examples that I find it really hard to say this is the example. But I will say that the self-driving car example, for me, is a good one to just tell people we can't just take something from one context to the other. But then later I do talk about ChatGPT, for example, and, you know, some of the earlier AI models.
00:08:57
Speaker
And we have done a lot of research, and many others have done similar research, looking at how these models are biased towards the mostly English data they're trained on. And often that data comes from people in the US, right?
00:09:11
Speaker
And that data is inherently biased towards the views of mostly North American people. And, you know, you can see this when you ask these language models, these AI chatbots, too.
00:09:25
Speaker
When you ask them questions from, for example, the World Values Survey, which is a large-scale survey that social scientists have done with people around the world to understand their values, you'll see that a chatbot gives out responses that are very much aligned with people in Western countries, but not very aligned with people in the rest of the world. And so you can just imagine, right, if a chatbot tells you that maybe it doesn't believe in equality, or maybe it does, right?
00:09:56
Speaker
But if these things don't align with your values, it is really a problem. And what we've shown in our research is that when people interact with it, even within just five minutes, it can trigger reactions very similar to culture shock, including this feeling that it has a negative influence on your well-being, and so on.
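(To make the probe described above concrete, here's a minimal sketch, assuming a hypothetical ask_model helper in place of a real chat API; the survey item and country means are invented placeholders, not actual World Values Survey data.)

    # Sketch: ask a chat model survey-style questions and compare its
    # answers to country-level averages. ask_model is a hypothetical
    # stand-in, and every number is an invented placeholder, not WVS data.
    def ask_model(prompt: str) -> float:
        # A real version would call a chat API and parse a numeric
        # rating out of the model's reply.
        return 6.8

    SURVEY_ITEMS = [
        "On a scale of 1 to 10, how important is it that individual "
        "goals take priority over family obligations?",
    ]

    # Invented per-item country means, for illustration only.
    COUNTRY_MEANS = {
        "United States": [7.1],
        "South Korea": [4.2],
    }

    def misalignment(model_scores, country_scores):
        # Mean absolute distance: lower means closer value alignment.
        pairs = list(zip(model_scores, country_scores))
        return sum(abs(m - c) for m, c in pairs) / len(pairs)

    model_scores = [ask_model(item) for item in SURVEY_ITEMS]
    for country, means in COUNTRY_MEANS.items():
        print(country, round(misalignment(model_scores, means), 2))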
00:10:14
Speaker
Technology isn't culturally neutral. Now, what are some of the most important values or norms that you see quietly encoded into some of these mainstream

Efficiency vs. Social Interaction in Design

00:10:26
Speaker
digital products? You just talked about ChatGPT, but...
00:10:29
Speaker
There are other AI models as well. This is a great question. One of the most prevalent assumptions, and I would call it a misassumption, is that technology is always about efficiency.
00:10:41
Speaker
And I have to admit that, you know, I grew up in Germany and I did my PhD in Switzerland, and I felt like it was sort of indoctrinated in me that we need to be efficient. People in Germany tend to be on clock time rather than on event time.
00:10:57
Speaker
The clocks in our lives really rule our daily procedures, and so on. And so designing technology around this idea of efficiency and making things more efficient has always been very natural to me.
00:11:09
Speaker
But I'm starting to question that, because in some of the research we've done, we've actually looked at how people interact with each other online. What we saw over and over again is that there are many people around the world, often people who tend to be more collectivistic, so more group-oriented, who might take a longer time to build trust with each other. And they actually work really hard to maintain relationships, and to build them in the first place. And all of that requires a lot of social interaction, right? Yeah.
00:11:42
Speaker
And as Westerners, we sometimes tend to cancel out that social interaction in the name of productivity and efficiency. And for me, that is the greatest single misassumption: why do we need to make everything in life more efficient?
00:11:55
Speaker
Why don't we try to actually make it much more about social interactions? And again, that's a very individualistic assumption that might seem very natural for people in the U.S. and in many Western countries, where it often is about: I'm an individual, and I'm going to worship my own time and protect it as much as I can. From an engineering perspective, or as a computer scientist who thinks that good design is just good design, how would you convince someone...
00:12:25
Speaker
that the design itself speaks to a particular cultural point of view instead of humanity as a whole? Yeah, that's a really good question, and I think something that I'm also still struggling with. I mean, my approach was to write the book, right, to convince people that we have all these different values in the world. And I can say with some certainty that I don't think all of these values are worth supporting through technology.
00:12:52
Speaker
But at the same time, for many values, you know, who am I to say that mine are the right ones, right? And so, partly, I think it helps when we become more aware of these cultural differences across the world. It's simple things: some people might not prefer a user interface that's as plain and white and gray as many user interfaces in Germany often are, right? Some people might like it,
00:13:22
Speaker
you know, way more colorful, for example. Some people might not like software telling them not to have any chitchat in the online forums in the name of efficiency, right? Instead, what we've seen, and you were mentioning India, right? We've seen that Indian participants in our studies were often saying, no, we like being able to talk to each other and having this chit-chat in order to build these relationships. And yes, we still want an answer to our questions in these question-and-answer forums, but we don't see the urgency to get it right away the way some of our Western counterparts do. So underneath a lot of these apps and devices, there are decades of research about how humans think and behave, or at least what they claim. When we look closely at the humans being tested in the lab, it turns out that
00:14:18
Speaker
a very large portion were Western, educated, industrialized, rich, and living in democracies. So these are the so-called WEIRD populations, which have been studied like penguins. For listeners who haven't heard of this term before, what are WEIRD populations? Where does the term come from? And why does this acronym matter? Yeah, so this acronym was coined by psychologists who found that most studies are done with North American undergraduates, right?
00:14:50
Speaker
And it's very simple. We do a lot of research in North America, and then we tend to just study people who are readily available, right? And so these are North American undergraduates, who often have to take these studies for credit or just to earn a little bit of money, and so on, right? It's a very simple thing.
00:15:06
Speaker
But it has this unintended side effect on our knowledge: we know a lot about the behavior and the thinking of people in North America and how they use technology, but we don't really know very much about the rest of the world. And in fact, in some of our studies, we found that we only know...
00:15:29
Speaker
things about roughly 12% of the population, because the rest of the population is almost never studied. And this is an issue because, for the longest time, we've actually assumed that a lot of these human behaviors are universal. And now we found, whoops, they're not. These psychologists at some point published a paper, and they called it 'The Weirdest People in the World.' And in that paper, they showed how findings about North American undergraduates tend to be on the extreme end of a spectrum when you compare them to other populations across the world.
00:16:05
Speaker
And this is anything from how people make decisions to how they behave in other

Localized vs. Universal Design

00:16:10
Speaker
settings, and so on. And so ultimately, when we design technology following these ideas from WEIRD populations, we basically optimize that technology for WEIRD people, and we leave out everybody else.
00:16:27
Speaker
Given this WEIRD bias, what's an example of universal design or guidelines that fall apart the moment we test something somewhere else?
00:16:38
Speaker
I know you talk a little bit about certain forums. Right, exactly. Yeah. I mean, this is really difficult, because I personally do not believe in universal design.
00:16:48
Speaker
I don't think there are any designs out there that are really universally suitable for everybody. And, you know, people might want to fight me on this, but I will also say that I think we simply don't know enough yet, because we often don't study people around the world, as we just discussed, right?
00:17:05
Speaker
And so I could bet that even the simplest design wouldn't work for everybody, maybe precisely because it is too simple. So ultimately, my big belief is that it would be way better if we had people around the world designing technology and coming together. In my field of human-computer interaction, we often say we need to study people, we need to do user-centered design, we need to go into those communities and study people and then develop software for them. But even that is difficult, because if you come in from one set of values...
00:17:37
Speaker
and design for another cultural group, let's say, you will still impose your own values on that group, right? It's really difficult not to do that. So ultimately, I think the idea of local software is what the world needs.
00:17:53
Speaker
And that's, of course, really difficult, given that the largest tech companies are in the US, and that might not change anytime soon.

Cultural Alignment in Platforms

00:18:03
Speaker
Now, speaking of the larger tech companies, we have minimalist Google versus the busy, what is it, Naver? Naver, yeah. Naver is a South Korean search engine. And yes, it
00:18:18
Speaker
used to be, for the longest time, really the one with the largest market share in South Korea. I believe at this point Google has sort of caught up to it. But for the longest time, people weren't sure what made Naver so much more popular among people in South Korea. So, living in the US, I had never heard of that before.
00:18:39
Speaker
Now, was there anything in particular that made Naver more successful, especially when you're looking at how culture shapes what people search for using the engine?
00:18:49
Speaker
Yeah. So, lots of different things. When you go to the website, naver.com, it's actually really difficult to provide an image of it, because what's really fascinating is that it's not a static website. I'm looking at it right now.
00:19:03
Speaker
Yeah, it's amazing. I mean, it's blinking and it has bright colors. It actually used to be even more colorful than it is today. And so I think partly that already appeals to the visual preferences of many people in South Korea, right? We can't stereotype here.
00:19:18
Speaker
There is lots of variation. But by and large, I will say that South Korean websites tend to be a little bit more colorful than those in Germany, for example, right? But there is another aspect to it, and that's that Google has a lot of answers for everybody, but they are usually in English. It's the English internet that it'll give you, right?
00:19:37
Speaker
The South Korean market is, of course, much, much smaller. And so when people were searching for answers, a search result from the U.S. wouldn't necessarily be relevant. And so...
00:19:54
Speaker
One of the big features on Naver is that it's not just random search results that you get returned; it's the community that answers. And that's really interesting because, again, it goes back to this idea of socializing and building trust with people. And, you know, for South Koreans, maybe that's a really large in-group, but it basically means that these people are more likely to trust search results that are given by people like themselves. So essentially, it's kind of a digital corner store instead of a sterile search box.
00:20:27
Speaker
Right, exactly. Now, are there cultural values that are wrapped into that?
00:20:36
Speaker
You know, that part I am not sure about. I think what we've seen is that in East Asian countries, we often have these one-stop-shop, corner-store ideas in the online world, right? Platforms where you can do anything from searching to checking the weather to trading stocks to shopping, all in one platform.
00:20:59
Speaker
And these mega-platforms tend not to be a thing in the Western world. But I actually don't know which cultural values would align with that idea of designing such huge online platforms.
00:21:14
Speaker
Because what I've seen, just thinking about this, you have Yahoo Japan, and Baidu is another such platform. And it seems as if Google was so slow to recognize that a minimalist interface wasn't a one-size-fits-all solution. Let's take it back to even AOL in the '90s: you would pop up that home page, and all the color is right there. Are we just seeing a cut-and-paste solution for us in the Western world, making things look so sterile and standard? Well, it's really interesting. I mean, there seems to be something here, right?
00:21:57
Speaker
Somebody came along and said, we need to make this really simple. And, you know, the simplistic design was basically what made Google really famous, and people loved it, right? Yeah,
00:22:09
Speaker
in the Western world. And it's fascinating to me, actually, because, again, it doesn't translate to other contexts. But there was something there already: many usability experts were always saying, you know, the less information you have on the user interface, the better. We were always told that's the case. And clearly, in this case, it didn't hold true for other contexts and for other countries around the world, right?
00:22:38
Speaker
And so it's interesting. Somebody made that assumption. They clearly had that value. It worked out okay for people in Western countries. But then that promise sort of broke when they tried to enter other markets around the world. So we also see this parallel in Stack Exchange. I'll date myself a few years back: when I was in grad school, I noticed that there was a huge contrast among students in how they searched and sourced their information, because a lot of folks here just know of Stack Exchange.
00:23:13
Speaker
While you have another huge platform, which is Zhihu. Those two platforms are both built around questions and answers, but with very different expectations.
00:23:24
Speaker
Do you want to talk about the differences between them and how they source their answers? Yeah. So again, the interesting part is that the promise of Stack Exchange is to make everything way more efficient, right?
00:23:39
Speaker
You want an answer, it's going to be right there on top. And it's sourced from the answers of a community that contributes to Stack Exchange, right? A few years ago, a student came to me. His name is Nigini Oliveira.
00:23:54
Speaker
And he's from Brazil, and he has actually worked with me ever since. But back then, when he came to me, he was saying, you know, Katharina, I think Stack Exchange is really one of these examples that's super individualistic. And I was like, really? How so?
00:24:07
Speaker
And he was pointing out all these different things on the platform that I had never really noticed. But it is this focus on efficiency, again, that he was pointing out. So it'll say things like, there's no chit-chat here.
00:24:18
Speaker
And actually, in its guidelines, Stack Exchange will even say, don't thank people when they provide an answer. You can just upvote the answer instead. There shouldn't be this clutter on the interface when you thank somebody. And so he went through the entire platform and did a value-sensitive design analysis of the whole interface.
00:24:39
Speaker
And he found a ton of these examples that basically showed everything should be about efficiency. And again, that's really different from how many people prefer to work with each other. And, you know, just taking an answer from somebody that you don't even know,
00:24:53
Speaker
and making it all about competition, that's really the difference here. And now, you know, Stack Exchange is also really interesting because this upvoting has worked in many ways in the Western context, right? It's how a lot of Westerners are motivated to contribute something to the platform. Right.
00:25:09
Speaker
And so they will go on and answer questions, and post questions too, and they will gain a little score. But when we asked Indian and Chinese participants how they feel about this and how they might want to redesign it, they repeatedly said to us, we don't like being reduced to a score. And we also don't really like this form of competition. Instead, we'd like to work together in a group,
00:25:35
Speaker
become experts at this, and really develop relationships with each other so that we can help each other. Now, thinking about this in another sense, in places like Namibia and Qatar, people adapt Facebook to local hierarchies and norms,

Cultural Limitations in Social Media

00:25:51
Speaker
right?
00:25:51
Speaker
So, for example, who's allowed to friend whom? And what do these adaptations tell us about the limits of supposedly egalitarian global platforms?
00:26:02
Speaker
Right. I think it's super interesting to look at Facebook, because the assumption was always that everybody should just be able to reach out to anybody else and befriend that person, right? Everybody's also suddenly on a first-name basis, which is why my friends and students in India always laugh at this. They're like, I would not even call my dad by his first name. It's just not something you do. Right.
00:26:27
Speaker
So in Western societies, it might be way more common, but I think, again, it's especially common in the U.S. and not so much in many other places. But yeah, this idea of just reaching out to a person and befriending them assumes that it's a very flat society. But we know from lots of cultural theory that there's, for example, this concept called power distance, right? Different cultures have different power differentials in society. They see equality differently.
00:26:56
Speaker
They might have more of a hierarchy, and so on. And so this idea of reaching out to somebody might actually be almost offensive when that person is higher up in the hierarchy. And it could also be the other way around.
00:27:09
Speaker
But again, because Facebook doesn't have any affordance to change the way somebody reaches out to somebody else, it sort of imposes this Western mindset on many other societies. And actually, when I tell people about this concept of digital culture shock, that's one of the examples they often will refer to and say, that's when I first noticed that software is just simply not designed for us.
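(A purely hypothetical sketch of what such an affordance could look like: a contact request whose framing depends on locale norms and relative social position. The locale entries and behavior are invented for illustration and do not reflect Facebook's actual design.)

    # Hypothetical affordance: vary how a contact request is framed
    # based on invented locale norms. Not any real platform's API.
    from dataclasses import dataclass

    @dataclass
    class LocaleNorms:
        first_name_ok: bool       # addressing by first name acceptable?
        upward_requests_ok: bool  # may one freely contact someone senior?

    NORMS = {
        "en-US": LocaleNorms(first_name_ok=True, upward_requests_ok=True),
        "hi-IN": LocaleNorms(first_name_ok=False, upward_requests_ok=False),
    }

    def request_label(locale: str, sender_is_junior: bool,
                      name: str, honorific: str) -> str:
        norms = NORMS.get(locale, NORMS["en-US"])
        address = name if norms.first_name_ok else f"{honorific} {name}"
        if sender_is_junior and not norms.upward_requests_ok:
            # Route through an introduction rather than a direct request.
            return f"Request an introduction to {address}"
        return f"Send {address} a friend request"

    print(request_label("hi-IN", sender_is_junior=True,
                        name="Sharma", honorific="Dr."))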
00:27:34
Speaker
Now, as we move deeper into the book, the stakes get higher. It's not just about preference. It's also about power. So whether it's ride-sharing apps in Dhaka with English interfaces, or AI image generators that redraw South Asia to look like Western

Digital Colonialism in Tech

00:27:52
Speaker
Europe.
00:27:52
Speaker
So what happens when technology stops being a tool and starts feeling like a new empire, especially one that colonizes meaning instead of territory? Let's take Uber, for example, as a case of digital colonialism.
00:28:15
Speaker
What, in your view, made that case more than just a bad localization job? For me, the imperialist discussion isn't necessarily only about content moderation, right? Though that's maybe a very strong example of it, I think.
00:28:31
Speaker
Sure. It's very apparent, right? Somebody owns this platform and makes the decisions. I talk about some of the examples that came up from other researchers in my community, the human-computer interaction community. And many of them have studied, for example, Uber being deployed in Dhaka, Bangladesh, right? Or Quora,
00:28:50
Speaker
also being deployed in Bangladesh, and this idea that it's now a Bengali platform. So it serves the Bengali-language market, but that market actually spans both Bangladesh and India.
00:29:05
Speaker
And so, I describe this in more detail in the book, but basically, people are assuming that maybe the people who are moderating and making the decisions about which posts go through are Indian, or maybe they are on the Bangladeshi side. People are sort of speculating about it online.
00:29:25
Speaker
There's also the example of Facebook making moderation decisions about which photos they will accept and which ones they will moderate and ultimately remove. And so...
00:29:36
Speaker
You know, in Islam, people often, maybe not routinely, but for certain festivals around the year, will post photos of animal sacrifices on Facebook. And of course, for a Westerner, this might be shocking to see. And so at some point, Facebook made the decision that whenever a photo like that appears, it will be censored and taken down. And that's a decision, right? You can argue, okay, Facebook is a Western company, maybe they can make that decision, but it has huge implications for those communities, who are also users of the system.
00:30:08
Speaker
Facebook is also profiting off those people and making money off of them, collecting the data and so on. Ultimately, a fellow researcher said, you know, we have this tendency of secularizing everything and trying to make technology seem like it is completely secular, but it is not.
00:30:26
Speaker
Because by making decisions like this, taking down photos of animal sacrifices, we are basically imposing a very Western, and maybe Christian, viewpoint on these platforms.
00:30:39
Speaker
So generative AI, most likely, when you're taking those examples, will produce Western-looking results. Right. Now, what kind of harm does that actually cause for the people who are affected? Yeah. So ultimately, people feel marginalized, right? Let's say you're trying to use a text-to-image model and you're asking it to generate houses of worship, and all it generates is churches,
00:31:09
Speaker
rather than any of the various different houses of worship that could be more familiar to people in places around the world. That makes people feel marginalized and sort of not seen. It feels like being overruled by the technology and its decisions. And so, you know, in human-computer interaction and in many other fields, people have called this a form of digital colonialism. It's this idea that, really, it's a new form of colonialism where decisions are being made by tech companies that are often, not always, but many times, situated in a Western context. And they make these decisions and basically transfer values and impose them on people around the world. Now, when an AI sounds like it shares the values of this digital colonialism, what worries you the most about these systems, especially if they become everyday infrastructure?

Power and Diversity in Tech Development

00:32:02
Speaker
Yeah, one of the big worries I have is that we become more and more dependent on them, right? And as you said, it is a form of infrastructure. It is, you know, not that different from our electricity grid, or maybe other utilities that we rely on every day. And putting that power
00:32:19
Speaker
into the hands of maybe a handful of people who really rule most of these technologies, that could be really difficult. It basically means that those people get to decide what all of us use. And we have seen that technology can really, over time, change how people do things and ultimately change people's values.
00:32:41
Speaker
I would say AI is, you know, even more powerful than all of the technology we already use, because it can seem so human-like. And so this anthropomorphization really leads us to adopt its values more quickly. So essentially, should these systems code-switch, whether that's changing tone, behavior, values, or visual style, to fit different contexts?
00:33:10
Speaker
So the short answer is maybe. It's a difficult answer. It's actually part of our ongoing research, because what we found is that when we make these systems code-switch, some people actually find it creepy, right? It's almost like, well, come on, we know this technology was designed by some Western guy in Silicon Valley, maybe, and now it's pretending to be like me? That basically amounts to mimicry, and it's something that not everybody appreciates.
00:33:38
Speaker
For a long time, I thought that would be the answer, right? Just adapt these systems, and then everything will be good. But I actually no longer believe that's the case. I think what we really need is more voices in technology development. And ultimately, you know, this is about technology hegemony and who has the power to rule the technology that the world uses.
00:34:01
Speaker
But I actually firmly believe that we should distribute that power a little bit more and have people around the world develop technology for themselves.
00:34:12
Speaker
So, for listeners who may never design an interface but use technology constantly, what are some concrete questions they could start asking, whether it's about their apps or the tools they use? What should they notice about when systems are aligned with their values and when they're not?
00:34:34
Speaker
I mean, I think as soon as they notice it's not aligned with their values, that's actually the big

Addressing Cultural Biases in Tech

00:34:41
Speaker
step. And that's a huge achievement already, because I think the big issue is that we often assume technology is culturally neutral. And that assumption is what often
00:34:51
Speaker
leads us to, you know, just sort of change our behaviors, and so on. But as soon as we notice, no, it's not culturally neutral, this was designed by somebody, and it's not necessarily designed for me, and it's not me I have to blame for not knowing how to use it or not feeling comfortable with it, I think that's a really big first step. And pushing back against that is maybe the next step. But I think just noticing it, and making sure that you don't blame yourself for any of the usability issues you might experience, is already a huge step in the right direction. So, I heard that you run something called Lab in the Wild.
00:35:30
Speaker
What is it? Yeah, so Lab in the Wild is my response to the WEIRD problem. Lab in the Wild is a virtual online platform that anybody can participate in. It's almost like a citizen-science platform where people around the world can go to take part in our experiments and test themselves. You can learn something about yourself. Every experiment will, at the end, tell you how you performed in comparison to others, where you stand on different cultural dimensions, for example.
00:36:01
Speaker
We developed Lab in the Wild to learn more about how people use technology around the world. But there are lots of tests on there now that I also integrated into the book, so that people can actually test their own cultural background.
00:36:14
Speaker
And I'm hoping that this enables people to become a little bit more aware of their own implicit biases, where they sit on the scale of, let's say, individualism versus collectivism. We talked earlier about clock time and event time and all of these things, right?
00:36:27
Speaker
what kind of communicator they are. All of these are influenced by our cultural background. And so knowing more about that, I think, can really be helpful.
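(To make the "in comparison to others" idea concrete, here's a minimal sketch of percentile placement on a cultural dimension. The scale and all numbers are invented for illustration; this is not Lab in the Wild's actual scoring.)

    # Sketch: place one participant's score on a cultural dimension
    # as a percentile of earlier participants. Numbers are invented.
    def percentile(score: float, previous_scores: list[float]) -> float:
        # Share of earlier participants who scored below this score.
        below = sum(1 for s in previous_scores if s < score)
        return 100 * below / len(previous_scores)

    previous = [3.1, 4.7, 5.0, 5.8, 6.2, 7.4, 8.0]
    your_score = 6.5
    print(f"You scored higher than {percentile(your_score, previous):.0f}% "
          "of previous participants on this dimension.")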
00:36:36
Speaker
Is there any big takeaway that you want people to know, whether it's from this conversation, your research, or the book? Yeah, I mean, I think I said most of these things already, but, you know, ultimately the reason I wrote the book is to increase people's awareness that technology isn't culturally neutral.
00:36:58
Speaker
It's also the fact that technology companies have so much power over our lives through little decisions that many technology developers might not even notice themselves making, right? There are so many very small design decisions everybody has to make when they develop technology that...
00:37:18
Speaker
It is often not apparent to people that, wow, that was actually based on my own cultural experience. That was the reason why I made this choice. And so because of that, I think it's really important for people to know that technology is never culturally neutral. It cannot be. And when we use it, we need to be very aware of that, and very careful about how it influences us, how we might want to use it, and which technology we might want to choose.
00:37:43
Speaker
Katharina, thank you so much for coming on the show. Today's conversation reminds us that technology isn't just about circuits and code. It's culture in digital form. Every interface we interact with, every algorithm, every perfectly rounded button carries assumptions about how people think, behave, plan, and even relate to each other.
00:38:04
Speaker
And when those assumptions collide with different realities, the shock isn't just a glitch, it's a signal. If there's one thing to take with you today, it's this: technology doesn't become universal by pretending we're all the same.
00:38:20
Speaker
It becomes universal by learning how to honor how different we truly are. So until next time on Breaking Math, stay curious, stay critical, and be aware of the systems that shape your life.