
Does AI Worsen Gender Inequities?

S2 E2 · Unpacking Us

Machine learning algorithms are increasingly being used to make highly consequential decisions for citizens of the Global South. I talk to Genevieve Smith about how algorithmic decision making in the realm of financial inclusion can lead to inequitable outcomes along gender lines, how that compares to the status quo, and how we can do better as practitioners and researchers.

Genevieve Smith is the founding co-director of the Responsible and Equitable AI Initiative at the Berkeley AI Research Lab and is also part of the faculty at Haas. She also serves as a Gender & AI Fellow at USAID and leads research partnerships with big tech firms.

Transcript

Introduction to the Episode and Guest

00:00:04
Speaker
Hello and welcome to Unpacking Us. I'm your host, Asad Lyakat, and I'm very excited to bring you another episode of the podcast. In the previous episode, we talked about generative AI and the ways in which it does or doesn't represent those in the Global South well. Today, we're going to focus on more traditional forms of AI, or machine learning. In particular, we're going to talk about the inequitable outcomes that often result from the application of ML algorithms in the Global South, with respect to gender and other social categories. We'll discuss what it is about these systems that leads to these inequitable outcomes, and what can be done better. I'd like to welcome our guest for today, Genevieve Smith.
00:00:47
Speaker
Genevieve is the founding co-director of the Responsible and Equitable AI Initiative at the Berkeley AI Research Lab, and is also part of the faculty at Haas, the business school at Berkeley. She also wears many other hats, serving as a Gender and AI Fellow at USAID and leading research partnerships with big tech firms.

Personal Experience with Biased Algorithms

00:01:06
Speaker
Genevieve, welcome to the show. Thanks so much, it's wonderful to be here. So, our focus today is on the Global South, but I'd like to start today's conversation on a more personal note. You have written about how you personally experienced an ML algorithm displaying gender bias when you and your husband both applied for the same credit card. Can you please tell us what happened?
00:01:33
Speaker
Yeah, sure. And this is something I didn't even really realize had happened until I saw the news article, in 2019, where a husband and wife had applied for the same Apple credit card and the husband had received a limit something like 20 times higher than the wife's. And when they asked why that happened, because they'd applied with all of the same credentials and information and shared accounts, the credit card company was unable to say why that was the case, and pretty much responded saying, well, the algorithm said this, and that's why it happened, but couldn't say why.
00:02:18
Speaker
So that caused a big media uproar. And it was around that same time that my husband and I had applied for the same credit card, and a similar story played out: he got a higher credit limit despite similar applications, and I wasn't able to understand why. Again, it was a decision informed by an algorithm, but very unclear and opaque as to what exactly had happened. So it was interesting to see my story echoed by another story in the media, and it's a real reminder of the different ways machine learning is present in our lives and impacting them in ways we don't fully understand or recognize.

Global Impact of Machine Learning Biases

00:03:01
Speaker
Yeah, what strikes me often is how omnipresent ML algorithms are in our lives, in avenues we don't understand, right? And this is true in the Global North and the Global South. So in your work, you've highlighted a lot of these examples in the US, in Europe, and also in the Global South. What would you say are some of the key examples of inequitable outcomes that we know have occurred in the Global South? And do you have a view on whether this is more common, or more problematic, in the Global South versus the Global North?
00:03:40
Speaker
Yeah, and I'm really happy to have this conversation, because I think it's one that doesn't happen enough, thinking about some of the implications and impacts in the Global South. In terms of key outcomes or challenges in the Global South, I think they're very similar to the ones happening in the Global North; it's just that a lot of times we're not having those kinds of conversations. I think there's a big narrative around AI being implemented, especially these more traditional, discriminative AI systems as opposed to generative AI systems, under "AI for good" types of initiatives, and I think that lexicon can sometimes lead us to not examine thoroughly enough the potential unintended consequences of these technologies.
00:04:35
Speaker
And so I also do research on machine learning-based credit assessment tools in the Global South, which are filling a really important gap: about 1.5 billion people globally are unbanked or underbanked, so they don't have access to finance. This is linked to histories of exclusion from formal financial systems, for a variety of reasons, and women are predominantly the ones who are unbanked, again linked to different biases that can exist in formal financial systems. So machine learning really opens up new opportunities for financial inclusion. But at the same time, there are different ways bias can be embedded within these tools, and we need to be aware of them. I'm happy to talk through some of those methods and mechanisms.
00:05:28
Speaker
I will also say, to your point about the previous episode on generative AI in the Global South: we actually just finished a research project looking at linguistic bias in ChatGPT, specifically GPT-3.5 and GPT-4. We looked at 10 different English language varieties globally, because there's this presumed assumption that ChatGPT is excellent in English. So we wanted to understand: what about Kenyan English, Indian English, different forms of English globally? And we found that GPT-3.5 and GPT-4 default to quote-unquote Standard American English. They respond in Standard American English, and
00:06:15
Speaker
they're more likely to produce outputs that are seen as discriminatory, stereotyping, and condescending when responding to quote-unquote non-standard English language varieties: Kenyan English, Indian English, Scottish English, things like that. Which has implications as we start to adopt these tools and integrate them into various types of services. That's super interesting. So on that point, I'm working with an edtech startup in Pakistan that's trying to build some chatbots to speed up English language learning for teachers. And we're grappling with some of these same issues: we're relying on ChatGPT-powered transcription of their Urdu to English, and right now that's producing some interesting outcomes that we're working through to figure out the best
00:07:07
Speaker
way forward, facing some of these same challenges you're describing. So that's super interesting, and I hope to talk about that particular problem in a future episode as well. So you brought up the case of providing credit to women in a Global South context. In that case, I want to dig a little deeper into what could be the cause of some of these inequitable outcomes that we see.

Causes of Inequitable Outcomes in AI

00:07:34
Speaker
Is it the case that we just don't have enough data on women, because they haven't been part of the system, so it's hard to produce good predictions about them? Or is it something about the system, or society, that's more to blame?
00:07:46
Speaker
Yeah, I think it's a really important question. And just to note, I'm not all doom and gloom when it comes to AI in the Global South; that's an important caveat. I do think there is huge potential in leveraging AI for different types of outcomes: educational outcomes, as you mentioned, where there are critical things to keep in mind but also really big potential; climate change and resilience, helping inform different climate strategies. I think there are a lot of really exciting things happening. But I also don't think we're necessarily thinking enough about the unintended consequences, and proactively building to anticipate them, so that we can create more equitable outcomes. So, in terms of what the cause of some inequitable outcomes can be in
00:08:37
Speaker
the financial inclusion space, and the credit assessment space, there are a couple of different reasons. Is it that we don't have enough data, so it's hard to produce good predictions? I don't think that is so much the case for these particular technologies. The way they work is predominantly in the form of apps, apps that people download from Google Play, because Android is the most common platform, versus iPhone, which is just more expensive. So you download the app and then essentially
00:09:18
Speaker
grant permission for the app to look at most of the data on one's cell phone. And there's a lot of data that these different fintechs and tools have access to. I will say that, given gender digital divides in the Global South, it does appear that men tend to be getting loans more, and perhaps there's also a piece around men applying for loans more, and so there can be something around
00:09:50
Speaker
there being more data related to men, and so perhaps greater accuracy when it comes to men. But I don't think that's so much the reason for the inequitable outcomes. I think it's really more that data can be gendered, and the features and proxies that machines learn from can be gendered. When we think about something like financial inclusion and access to finance, that's a very gendered space, right? Women in, say, India or Kenya, which are really big markets for these types of technologies, have historically not had as much access to finance as male counterparts. And this is linked to all sorts of different things. In India, which is considered more gender-inequitable according to the Gender Gap Index, women tend to
00:10:42
Speaker
get married fairly young and move to the man's village and become part of that household. Men are more likely to be the breadwinners, which is common in lots of places, and more likely to control the household finances. So there are all these different things in society that these machines can learn from. And what really matters, something I found in my research, is that, interestingly, even though more men are getting access to these loans and getting higher loans, women are actually more likely to repay, and to repay on time. So it starts to illustrate that perhaps how we're thinking about creditworthiness is flawed: it reflects gender inequities in society that aren't true realities of what creditworthiness means, of one's actual ability
00:11:42
Speaker
and willingness to repay a loan. So I think it highlights that we might need to rethink the concept of creditworthiness. When you were describing the lack of credit history that these women have, the algorithm is perhaps taking into account that there's just not enough history to go by. It's basically saying the repayment probability is probably higher for men, just because they have a longer history of financial transactions than women. But you're saying you find that when you do start giving loans to women,
00:12:20
Speaker
then their repayment rates may be higher. And I was thinking of the period of 20 or 30 years when microfinance was a huge thing, and I think it still is. The microfinance I'm thinking of is based on very deep person-to-person contact: when microfinance agents go to a village, they get to know people, and then give out loans, predominantly to women, based on those deep ties, and essentially the repayment is guaranteed by those social networks as well. And in that case, my understanding from the literature is that, one, repayment rates are extremely high, and women prove to be extremely trustworthy.
00:12:58
Speaker
And this is a good example of the contrast with the counterfactual, what could happen if these ML algorithms weren't making the decisions. You could have a model, and I guess we have had models, probably a more costly model, with more investment in those personal connections. And on the alternative, we just look at the person's behavior digitally and make decisions based on that.
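The disparity Smith describes, fewer approvals for women despite better repayment, is something a lender could check with a simple audit over its own loan records. Below is a minimal sketch in Python; the field names and the toy records are hypothetical, invented for illustration:

```python
from collections import defaultdict

def audit_by_group(loans):
    """Tally approval and repayment rates per group.

    Each record is a dict with hypothetical fields: 'gender',
    'approved' (bool), and 'repaid' (bool, or None if never approved).
    """
    stats = defaultdict(lambda: {"applied": 0, "approved": 0, "repaid": 0})
    for loan in loans:
        s = stats[loan["gender"]]
        s["applied"] += 1
        if loan["approved"]:
            s["approved"] += 1
            if loan["repaid"]:
                s["repaid"] += 1
    return {
        group: {
            "approval_rate": s["approved"] / s["applied"],
            # Conditional on approval: we only observe repayment for
            # applicants the lender chose to fund.
            "repayment_rate": s["repaid"] / s["approved"] if s["approved"] else None,
        }
        for group, s in stats.items()
    }

# Invented records echoing the pattern described in the episode:
# men approved more often, women repaying at a higher rate.
records = (
    [{"gender": "m", "approved": True, "repaid": True}] * 60
    + [{"gender": "m", "approved": True, "repaid": False}] * 20
    + [{"gender": "m", "approved": False, "repaid": None}] * 20
    + [{"gender": "f", "approved": True, "repaid": True}] * 38
    + [{"gender": "f", "approved": True, "repaid": False}] * 2
    + [{"gender": "f", "approved": False, "repaid": None}] * 60
)
report = audit_by_group(records)
print(report["m"])  # approval_rate 0.8, repayment_rate 0.75
print(report["f"])  # approval_rate 0.4, repayment_rate 0.95
```

Note the caveat in the comment: repayment is only observed for approved applicants, which is exactly the selection problem that can make creditworthiness self-fulfilling.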

Algorithmic Decision-Making: Costs and Benefits

00:13:31
Speaker
Do you have a sense, going forward in your work, of the relative costs and benefits of these two approaches? Is the move towards making these decisions algorithmically just kind of inevitable? Or are these decisions reversible in some contexts?
00:13:54
Speaker
Yeah, I think it's a really good question. And one thing I'll note, on one of your first points: it's not necessarily that these algorithms are learning from histories of financial exclusion. Just to clarify, a lot of these tools will enter a market, distribute a bunch of loans randomly, and see who repays, and then use that to inform the features and proxies the algorithm learns from. So it's less that histories of financial exclusion are baked into these tools. Not to get too nerdy about the specifics, but I think it's more about how these tools are essentially assessing
00:14:42
Speaker
one's ability to repay, which is how much money you have, job stability, cash flow, things like that, in addition to one's willingness to repay, which is about behavior, trustworthiness, things like that. And I think it's actually encoding ability to repay as income and job stability, and in many locations men have higher incomes and more formal jobs, versus women, who are more often in informal economies.
00:15:14
Speaker
And so it's encoding those types of gender differences, which impact who then gets a loan, or who's deemed more creditworthy. But if you have a micro-loan, having a formal job does not necessarily dictate whether you will actually repay it. So I think that's where it's coming from: there are gendered aspects of the features and proxies. And are they able to incorporate familial ties in some way? Because it is the case that having access to a network also affects your ability to pay. Is that something that, in your experience, is accounted for?
00:15:55
Speaker
Yeah, that's definitely part of what goes into the algorithms too: who's in your network, how close you are to your family, how many contacts you have. And that can be gendered as well. One's network can be gendered, and again, it depends on the country and its gender gaps. But a lot of this stuff is gendered, and it just becomes encoded in different ways in these tools. And I think in some ways these features can be quote-unquote accurate, in that there is an aspect of them that can be linked to creditworthiness, but
00:16:33
Speaker
there's also, I think... creditworthiness is interesting because it's such an American concept. And whether someone will actually repay a loan can be a self-fulfilling prophecy as well, so it's really hard to even understand what is accurate in some of these cases. But anyway, to your other point about the relative costs and benefits of different approaches, and whether the move to making decisions algorithmically is inevitable:
00:17:05
Speaker
I certainly think there's pressure related to being more efficient and to unlocking new opportunities, market-driven incentives that push towards this sentiment that moving to algorithmic decisions is a bit inevitable. I don't think it is inevitable, but I think a lot of the incentive structures around greater productivity and efficiency are making it seem like it is.
00:17:37
Speaker
And I also don't think that incorporating algorithms into aspects of human decision-making is necessarily a bad thing, but it needs to be done responsibly, and I'm happy to talk through what I see as some of the strategies for that. And just at a high level, to conclude this initial part on challenges and problems: the main thing for folks who are developing or managing algorithms to keep in mind is that machine learning tools are reflections of the society we live in. They're pattern-recognition machines, so they're going to learn from inequities that exist in society. And we need to be really mindful of what they are learning from and what we are projecting into the future, because it doesn't have to be that way.
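One way to make "what the machine learned" visible is to score a tool against an explicit fairness criterion. The sketch below uses demographic parity (equal approval rates across groups) with a per-group threshold adjustment as a post-processing fix. This is one of several competing fairness definitions, not a method attributed to any lender discussed here, and the score lists are synthetic:

```python
def approval_rate(scores, threshold):
    """Fraction of applicants at or above the score cutoff."""
    return sum(s >= threshold for s in scores) / len(scores)

def threshold_for_target_rate(scores, target_rate):
    """Pick the cutoff whose approval rate is closest to the target."""
    best = None
    for cutoff in sorted(set(scores)):
        gap = abs(approval_rate(scores, cutoff) - target_rate)
        if best is None or gap < best[0]:
            best = (gap, cutoff)
    return best[1]

# Invented score distributions where group B scores lower overall,
# e.g. because the score leans on gendered proxies.
group_a = [62, 71, 55, 80, 66, 74, 59, 68]
group_b = [48, 52, 63, 41, 57, 50, 66, 45]

single_cutoff = 60
print(approval_rate(group_a, single_cutoff))  # 0.75
print(approval_rate(group_b, single_cutoff))  # 0.25

# Post-processing: give group B its own cutoff so approval rates match.
target = approval_rate(group_a, single_cutoff)
cutoff_b = threshold_for_target_rate(group_b, target)
print(cutoff_b, approval_rate(group_b, cutoff_b))  # 48 0.75
```

Equalizing approval rates this way can conflict with other criteria, such as calibration across groups, so which definition to satisfy is itself a policy choice, which is part of why the accountability questions discussed next are hard.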
00:18:37
Speaker
I wanted to bring up one reason why, in some countries, people prefer algorithmic decisions: the long history of corruption and nepotism in many decisions. There's this notion that if we can reduce human involvement in a decision, if a decision is made by a machine, then it's less likely to be biased. And people often rely on that as a crutch: look, there's nothing I can do. Just like in your case of the credit card limits being different: there's nothing I can do, the machine made this decision, right? But obviously what's striking about this is that that machine is a human creation, and a lot of our biases feed into it. So it's almost like allowing the decision-makers, the people who designed the system,
00:19:26
Speaker
to shirk responsibility. One more thing I want to touch on is the idea of incentives to look at these biases, and incentives to correct them. One thing that's inevitable in a lot of product development is that you're often working against quick timelines. There's often no one you're directly accountable to if your algorithm produces inequitable outcomes, so it's often nobody's job to look at this. Is that part of the problem, in your opinion? Yeah, absolutely. I think working against quick timelines is a huge challenge. I think also, in development, a lot of
00:20:11
Speaker
those developing and scaling AI technologies are getting investment, which is a good thing, right? But there can be pressure from investors around delivering value, returning value within certain periods of time. So that's certainly a very real pressure that founders face, and there are very real trade-offs. And who is accountable if an algorithm produces inequitable outcomes? I think that's a really important question, and one that policymakers are really grappling with: how do we think about policy to govern these different types of technologies?
00:20:52
Speaker
And I do want to go back to one thing you said before, about long histories of nepotism, corruption, et cetera, and how machines have the ability to reduce bias. That's totally true. In the earlier example of financial inclusion and credit assessment tools, they're providing access to people who are outside the financial fold for whatever reason, and part of those reasons are related to bias and inequities that exist in formal financial systems. So there are certainly opportunities for machine learning and AI to help reduce some of the biases that exist in our current systems.
00:21:30
Speaker
I think one thing that makes me nervous is that a lot of these machine learning technologies can be considered quote-unquote black boxes, where it's really hard to dissect how they made a decision. Going back to the credit card example at the very beginning of this: when the husband and wife asked why the limit had been so much higher for the husband, the company couldn't answer. We don't fully understand the ways these algorithms output predictions and decisions. And with generative AI tools and some of these large models, there's similarly uncertainty around how they're actually operating. And I think that's what makes me nervous: even though these tools can reduce bias and increase access to whatever it may be, which is
00:22:23
Speaker
really exciting and important, if we're not able to understand how a decision was arrived at, it's hard to critique it, to get accountability, or to push back if you think the decision was inaccurate or wrong or biased. And that's what makes me more nervous.
00:22:45
Speaker
I hear you on this. I think there is a sense of disempowerment that comes with not being able to tell. And if the builders also feel that sense of disempowerment, then think of the end user, who was disenfranchised to begin with. They were previously in a world where they could at least speak to a human about it, right? There was some notion that there's somebody out there who has answers, and how do I reach them? But now nobody really knows, and so everybody might end up feeling disempowered in that sense. So that is scary, yeah.
00:23:20
Speaker
Let's move on to talking about

Strategies for Mitigating AI Bias

00:23:25
Speaker
solutions. And I want to hear from you through a practitioner lens and a policy lens: what should practitioners be mindful of? What policy solutions exist? What do you think are the best ways to mitigate some of these problems? Totally. And I love talking about solutions. I think there are a lot of different types of solutions, and there's so much exciting research and work being done in this space as well. And I like
00:23:55
Speaker
the openness around being able to talk about the challenges and problems, because then we can start talking about the solutions, as opposed to saying, oh well, it's just better, so let's leave it at that. Okay, maybe it's less biased than what exists, but it can still be biased. So what can we do to solve for this and help build some of these accountability structures, et cetera? So I love that. I think there are a couple of categories to keep in mind, and this relates to the proliferation of AI principles: starting around 2016, AI principles proliferated globally, both in
00:24:37
Speaker
companies and in nonprofits and multilaterals, around how AI should be developed, designed, and governed. They coalesced around a handful of themes, and I think they're really important ones to keep in mind, because the solutions can revolve around them: fairness, so thinking about fairness and bias considerations; privacy, so data privacy considerations; transparency; safety and security; and, last, accountability. For each of those, you can think about different types of solutions, tools, et cetera, that can be developed to mitigate some of the problems that exist. But bigger than that, what I'm really interested in is a higher-level question: who is developing and designing these technologies, and
00:25:34
Speaker
how do we incorporate the empowerment and agency of more marginalized groups into the design, management, and ownership of these technologies? I'm really inspired by movements towards participatory design and human-centered AI, because I think a lot of these challenges and problems are linked to power and, like you said before, accountability: who gets to decide what is designed and developed, what data these tools learn from, and what features and proxies get built into them. So who develops and designs these tools and technologies matters.
00:26:16
Speaker
Yeah, so I'm really inspired by interventions, especially in the development space, that look at how to use participatory design in the way technology and AI solutions are designed and developed, and that think critically about ownership and about making sure that technologies are not extractive. There's this concept of data colonialism, and AI colonialism, that we have to be super mindful of in the international development space, where data collection can be really extractive, and we have to think about how benefits are distributed. Anyway, I took a couple of turns in that answer, so sorry if I went too many places, but I'm happy to dive into any of those.
00:27:05
Speaker
No, that's great. So I want to dig a little deeper into participatory design. This is a notion I've heard described often, and I'd love for you to go a little deeper into how it might work in practice. Some of my questions here are: technically speaking, what are the gaps in the conversation you might have with members of a community who may not be that familiar with the technology, so that you can actually get their feedback into the system? For instance, to keep with your example of giving loans to men and women in a setting where women are frequently not part of the financial system,
00:27:52
Speaker
let's say there are engineers designing the system, and data scientists trying to get all the data in one place, who are not aware of the local context. They're sitting in Silicon Valley, or in some capital city in some developing country, and they don't really understand what goes on locally. And then there are the end users, who are not very well versed in technology and don't understand how their data flows through these systems. How would that conversation go? And how would it go beyond a box-checking exercise? Because you've seen models of let's-go-get-feedback-from-folks, and that often ends up being exactly that. So what does a well-designed participatory design effort look like? Such a good question. And you're so right that it can be
00:28:39
Speaker
a check-the-box, potentially extractive activity, where it's like: okay, you tell me how I should adapt this, and then I go improve my technology but don't report back, don't give you credit, and don't redistribute any of the benefits. So I think it's really important to think about what we mean by participatory design, and that's something the space has grappled with a lot. I will say that participatory design research is not new, but really thinking about human-centered AI and participatory design in the context of AI and the Global South is a pretty new space, and that's why I'm so excited about it. I also draw on other work here.
00:29:25
Speaker
There's this great book called Data Feminism, by Catherine D'Ignazio and Lauren Klein, that talks through principles for data feminism and what participatory design can look like in really meaningful ways. One example that I think is a really good one comes from one of the authors, Catherine D'Ignazio. She worked with activists in Mexico who were tackling feminicide, the killing of women, which is very often underreported; it's an underreported phenomenon globally, and in Mexico it's a really huge issue.
00:30:07
Speaker
They worked with activists who were trying to bring awareness to the feminicide occurring there, and they went through several iterations and workshops with those activists to design a machine learning tool that helps identify news reports of feminicide, so they could better track and account for the women being killed in the country, and then use that towards policy and other interventions. So that's a really cool example, and you can also learn more about it in Catherine D'Ignazio's latest book on feminicide
00:30:49
Speaker
and data andant feminism And so in terms of like thinking about it, about it practically, there are real challenges that you mentioned, you know, in terms of people in that, especially when we're talking again about the global South. So like even, you know, data feminism, it was largely, it wasn't necessarily looking at some of the most marginalized communities you when we're talking about um machine learning based credit assessment tools and we're talking about people who are in villages in you know areas in rural and Kenya or India you know how do you engage them meaningfully and and technology and AI technology design when you know that's like a ah pretty foreign concept. um So I think there's like a couple of of of different ways and that it can work and
00:31:34
Speaker
It's also important to ask: what is the problem we're solving, and at a high level, is AI the best solution for that? Because maybe it's

Aligning AI with Community Needs

00:31:47
Speaker
not, you know? So if you're really doing participatory design in a way that centers communities, you have to be open to the fact that AI might not be the solution for that particular problem. That can be tough for us as researchers too, right? Because our incentives, the incentive mechanisms for researchers in addition to developers, can all intersect and be kind of complicated. But that aside, I think there are a couple of different things. One is
00:32:20
Speaker
practically engaging with different groups. In the context of financial inclusion, that might mean working with women's groups and asking them: how do you determine how loans are provided? And thinking about what features and proxies are used to determine creditworthiness in some of these different contexts. Using different types of design thinking activities can be helpful to bring out some of these concepts, and then working with them on what technology in this realm would look like. It's complicated for sure, though. And like I said, I think we have ideas for some of these different types of experiments.
00:33:03
Speaker
But yeah, one challenge you mentioned before: when working with really marginalized communities who might not understand what AI technology is, I think there's an educational aspect that has to come along with that as well.

Policy and Governance for AI

00:33:22
Speaker
You mentioned accountability as one key category that we should think about here. Are we thinking about policy provisions that a government may have, or are we thinking about something else? Yeah, I think it's both. There certainly needs to be policy related to AI, and a lot of countries in the Global South don't have mature programs or approaches to AI policy and governance right now, and even the US and other places don't, right? Because there needs to be some sort of mechanism to
00:34:00
Speaker
help account for some of these different trade-offs that can exist. Policy is meant to help protect citizens, especially in light of those trade-offs, so we certainly need policy. I still think there's a lot that practitioners can do. But yes, greater policy around different types of technologies, from a fairness perspective, a privacy perspective, and a transparency perspective. I think the EU AI Act is really interesting because it's starting to think about what it looks like to protect citizens from different types of AI technologies using risk-based categorization. But practitioners can also take it upon themselves to build more accountability structures, more opportunities to think about explainability, to
00:34:57
Speaker
explain systems to users, and things like that. Thinking about generative AI in the US, for instance, it almost seems like companies are asking the government to regulate them, and the government is saying: look, this is happening too fast, we're going to need a lot of time to do any regulation. So it almost seems like neither practitioners nor governments are in a position to take a lot of this very seriously, whether it's technical constraints, human resource constraints, or just an unwillingness to take on some of these challenges. Is that your sense right now globally as well, given that you've worked a lot on how tech leaders and governments are thinking about this? I think there's a real concern, especially in Global South contexts, that
00:35:50
Speaker
regulation will impede innovation. Especially when we think about the economic potential of AI, countries don't want to shoot themselves in the foot, so to speak, by putting into place regulation that can hold back innovation, economic growth, job creation, and all these different types of things. So I think that's where the tension rests, right? You see that even in the US, there's this perceived tension and trade-off between regulation and innovation. Now, I don't necessarily agree with that. I actually think that regulation can be healthy for innovation, right? And that's also why companies are calling for regulation too, because you need to have a sense of the rules to be able to innovate within them. Because if regulation comes in a year, two years, five years, whatever it might be,
00:36:43
Speaker
then there's all this backtracking, which can make things much harder, and companies can be liable for different things. So I think there can be this false sense of one or the other that is holding back some of the regulation that we need. But I do find inspiration in the EU, and in Europe generally, in terms of government taking this really seriously and thinking about what regulation in this space looks like. It's not perfect, but it's something, and it's starting to be implemented, going into effect later this year. So I think that's a big piece of it. But in the Global South, yeah, it's tricky, right? Because
00:37:31
Speaker
you don't want to fall behind. You're already behind, and you don't want to fall further behind by impeding innovation and growth opportunities.

The Role of Stakeholders in Equitable AI

00:37:41
Speaker
But again, I think that can be a false trade-off, and there are ways to have regulation while also supporting innovation, in ways that are healthy and sustainable for citizens and society. Great. As we're nearing the end of our conversation, I'd like to ask: do you have any suggestions for people who are listening, if they want to learn more? Or if there's one action that you wish people would take on this issue, what would that be? Yeah, I mean, I think the main thing is just,
00:38:19
Speaker
depending upon where you're coming from. Maybe you're a developer of AI technologies, or a manager of AI technologies, or maybe you're just an interested citizen or user of technologies. The main thing I would recommend is remembering that these technologies are not objective. They are not reflections of some objective reality. In truth, they are reflections of the society we live in. And we have the opportunity to think critically: what are we embedding within these technologies? What would we prefer? What is the world we want to live in? And how do we work with AI to get toward that world, one that is more equitable, inclusive, sustainable, and peaceful, rather than accepting the status quo? So that would be the main

Encouraging Inclusive AI Development

00:39:20
Speaker
thing. And there are different readings that folks can do. I really like some articles from SSIR, which has some really good ones around AI and the Global South. I've written a couple, so shameless plug there. Could you say what SSIR stands for? The Stanford Social Innovation Review. I think it's a great, accessible place for different articles and actions for practitioners in the development space. So those are just some things I would recommend. And for the funders who are listening as well, similarly: how can we invest not only in technological outputs,
00:40:02
Speaker
but how can we invest in inclusive and equitable processes for developing technologies? There's such a focus on how we're going to embed LLMs, large language models, or how we're going to create this AI tool to combat this educational inequity, or whatever it might be. There's such a focus on the output, as opposed to the process by which that technology is developed. So I would really encourage any funders or practitioners who are listening to think really critically about, and invest in, the process, and build in more opportunities for centering the agency and empowerment of the people we're trying to impact.
00:40:50
Speaker
I'll second your plug for SSIR; it was your article in SSIR that first made me aware of your work. Thank you so much, Genevieve. It was a pleasure having you and talking to you. Thank you. Thank you.
00:41:09
Speaker
I'd love to hear what you thought about this episode. Please leave a rating and a review on Google Podcasts, Spotify, Apple Podcasts, or wherever you're listening. This really helps the podcast reach the right audience. You can also email me at asadyakat at gmail dot com with any feedback or any ideas you have for topics to cover or guests to invite. Hearing from people who are listening is a large part of what motivates me to record more episodes, so please don't hesitate to write. And thank you for listening.