
AI Ethics with Madison Mohns

Empathy in Tech

Madison Mohns is an AI product manager at Indeed.com who specializes in developing deep learning approaches to enhance metadata generation for candidate job matching in 19 global markets. A passionate advocate for AI ethics, she takes an optimistic yet conscientious approach to the use of AI solutions. Her expert-in-the-loop approach to scaling taxonomy coverage was recognized and featured in a paper published at the RecSys Conference in 2022. She manages a team of more than 120 taxonomists, data scientists, and engineers, and has pioneered her team's ability to drive innovation by augmenting its work with AI solutions.

Madison is currently pursuing a master's degree in AI Ethics and Society at the University of Cambridge. Her research interests center around the intersection of AI and the future of work, a topic she has expounded upon in a thought-provoking TED Talk aimed at guiding managers through the complexities of workforce transformation in the era of AI.

  • Madison's TED Talk - https://www.ted.com/talks/madison_mohns_ai_and_the_paradox_of_self_replacing_workers?subtitle=en
  • Madison's LinkedIn - https://www.linkedin.com/in/madison-mohns



ABOUT EMPATHY IN TECH

Empathy in Tech’s mission is to accelerate the responsible adoption of empathy in the tech industry to help humanity solve our most pressing and complex problems. We do this by focusing on three key areas:

  • Technical Empathy - Close the empathy skills gap in the tech industry by leading a scientific revolution that embraces new research.
  • Ethical Empathy - Ensure empathy is used for social good through ethical, equitable, and responsible choices.
  • Actionable Empathy - Build a thriving community that makes effective empathy training accessible, affordable, and widely available.

Learn more at https://empathyintech.com

Transcript

Introduction and Theme

00:00:01
Speaker
I think technology is only good when it is participatory, and technology can only be participatory when things are clear, transparent, and ultimately accessible to more individuals for them to be able to understand. Welcome to Empathy in Tech, where we explore the deeply technical side of empathy and the critical need for empathy in technology.

Meet the Hosts and Guest

00:00:26
Speaker
I'm your host, Andrea Goulet. And I'm Ray Myers. Today on the podcast, we have Madison Mohns. Madison is an AI product manager at Indeed.com who specializes in developing deep learning approaches to enhance metadata generation
00:00:40
Speaker
for candidate job matching in 19 global markets. A passionate advocate for AI ethics, she takes an optimistic yet conscientious approach to the use of AI solutions.

Madison's Journey and AI Advocacy

00:00:50
Speaker
Her expert-in-the-loop approach to scaling taxonomy coverage was recognized and featured in a paper published at the RecSys Conference in 2022. She manages a team of more than 120 taxonomists, data scientists, and engineers, and has pioneered her team's ability to drive innovation by augmenting its work with AI solutions. In addition to her professional endeavors, Madison is currently pursuing a master's degree in AI Ethics and Society at the University of Cambridge, underscoring her commitment to advancing ethical frameworks within the realm of artificial intelligence. Her research interests center around the intersection of AI and the future of work, a topic she has expounded upon in a thought-provoking TED Talk aimed at guiding managers through the complexities of workforce transformation in the era of AI. Madison, thank you for being here. Thanks so much for having me.
00:01:40
Speaker
So to kick things off, would you tell us a little bit about yourself and how you got interested in AI? Yeah, I started off really getting interested in AI once I was pursuing my undergrad, and at the time I was really interested in entrepreneurship.

Early AI Projects and Career Path

00:01:56
Speaker
Also just interested in design overall. I wanted to understand how I could build a career that bridged both my analytical and creative skills. At the time, as I was pursuing business ideas and trying to build out my design portfolio, I stumbled across a technology being built by OpenAI that was leveraging computer vision to basically reconstruct figures. I was really interested in building out this business idea where people were able to try on clothes in an online environment by reproducing their figure as a digital avatar, ultimately alleviating some of the discomfort that a lot of people feel in going into in-person shopping experiences, and also reducing all of the toil of having to buy clothes online and return them. And so
00:02:48
Speaker
yeah, I got really interested in AI at that moment. I didn't really understand what it was. I just knew that it would help me be able to carry this idea

AI Ethics and Dark Patterns

00:02:56
Speaker
forward. And since then, I interned and built out a portfolio of projects across a bunch of different startups, and ultimately found my way to Indeed, where I now work on a team that heavily leverages AI to transform a lot of our operations workflows that have historically been done manually by a team of analysts: how do we intervene and ultimately bring in AI as a tool to augment those workflows and hopefully make the work they're pursuing more enjoyable? Awesome. When did you decide that your focus on AI was going to move in the direction of AI ethics?
00:03:34
Speaker
Yeah, really good question. So I did an internship at IBM, and there was a really interesting seminar that I went to, because, again, I was more in the UX design, UX research part of my career at the time. They had a whole talk about ethical product design, and there was a huge emphasis in that talk on the concept of dark patterns, which I'm happy to get into. But essentially, I was super excited about being on the forefront of building out these really exciting technologies and being able to
00:04:09
Speaker
build out these business propositions that were hopefully going to be able to solve some massive problems. But I also realized in building and facilitating these types of new experiences that you have to be really, really intentional with how you're doing that.

Ethical Design Challenges and Solutions

00:04:24
Speaker
And you can do intentional things in good ways, and you can do intentional things in very evil ways. Technology companies are kind of the purveyors of that mindset within their own organizations. There are really compelling ways to make really great products, and there are very compelling ways to make great products that hide a lot of really negative things at the same time. So yeah, it definitely opened my eyes. I wanted to be in tech, but I didn't really realize the full picture of everything until I had entered into that conversation. And so that was more of an ethics-and-design stint. And then once I started leveraging AI more in my career,
00:05:01
Speaker
it kind of opened up a new slew of problems, since AI is, you know, a different beast to deal with. Let's go into those dark patterns a little bit, because I had a very similar experience coming from strategic communication and detouring into UX a little bit. That was a huge surprise for me, especially when I started studying empathy in more depth: just how it can kind of be weaponized. We don't think about that. A lot of times in the UX or technology world, there is a sense of optimism, and just, oh, this is all great, this is all hopeful. But I love the take that you've got: it's intentional. We have to pay attention and know where things could go wrong, not just follow that happy path of optimism. So could you go into what some of those dark patterns are and how our listeners could and should be paying attention?
00:05:51
Speaker
Yeah, totally. And I love the callout that design is participatory to a certain extent, too. In any type of technology process, there is a person who is interacting with the technology. There's the technology itself. And then there's this ultimate third actor involved who is actually designing it, the mastermind behind that technology. What is going through that person's, or that team's, or that company's mind when they're building the product that you're ultimately engaging with? They're kind of blurred behind the lines. So yeah, a couple of good examples of dark patterns. One very tangible one: think about your email inbox. You get hundreds and hundreds of emails per day. Probably you've signed up for some sort of subscription here or there. You've signed up to buy something online, and now you get a million emails from the place that you bought one shirt from one time three years ago, and you can't figure out how to get these people to stop sending you emails.
00:06:49
Speaker
So a dark pattern would be: in that email, if I clicked in with the intention to unsubscribe from those communications, something a designer could do to make that process hard for you, or introduce a lot of friction, is, for instance, make that text really, really small. They could hide it. They could put 15 different steps between clicking the unsubscribe button and "unsubscribe confirmed." There's a lot they can do in between to make that a really difficult process for you. And a lot of the reason these dark patterns slip in is because folks are trying to optimize for some sort of top-line metric, and they're not really thinking about what they should actually be measuring. So you could have this email list of 40,000 subscribers, but if no one's actually engaging with your content, that isn't really a good metric of someone actually wanting to be there. But that's a really flashy number in business. It's fun to throw those numbers on a PowerPoint slide and be able to
00:07:46
Speaker
present that to your executives. And it can also get into a place where it's really harmful. That's obviously really annoying friction, but if you think about, let's say, a hypothetical situation where you're on social media and you post a picture, and over time you get a hundred likes on that picture, you get this little dopamine hit every time you get a like or a comment or some sort of engagement with that photo. Something a designer could do, because the way social media platforms make money is by exposing you to advertising, is spread out that number of likes over a longer period of time to keep you coming back and checking that app to make sure you are still getting the traction that you're wanting. Even if maybe in the first minute you got
00:08:34
Speaker
all of those likes at once, they might space them out over time so that you have more and more chances to engage with the advertisers, which is ultimately how that service makes money. So there's a lot of things you could do that are obviously good for the business, but ultimately could be really, really harmful to your

Empathy and Diversity in AI Design

00:08:52
Speaker
users. And ultimately, in certain cases, even contribute to mental health issues and other types of things that we need to be aware of. Yeah. And I think the thing I'm hearing there is that, as people who are building technologies, one of our core responsibilities is to stop and think about how this could be used, because it's so easy to justify and rationalize: it's only six seconds, what's the big deal? But really to think through
00:09:18
Speaker
how that would happen at scale, and to have those conversations, and then also for leaders and executives to recognize: hey, what you're measuring your team against, that matters too. I'm curious, though: are there any dark patterns that start off with really good intentions? Because with both of those, when you explain them, it's like, well, maybe they didn't have the best of intentions. Is there anything where it's like, I really tried to help this person, but then it kind of went awry? Because that's some of the things I've seen with empathy, too: you can get so emotionally drawn into one person's narrative, for example,
00:09:51
Speaker
that you get so emotionally engaged that the cognitive side of your brain kind of shuts off, and it's like, oh, I'm not going to actually pay attention to the ethics of it, and not pay attention to how this operates on a more group level. Are there any patterns that you've seen there? Yeah, I think that's a really great question. And I don't mean to fully sidestep the question, because I don't have any super tangible examples I can think of that are in my area of expertise. But one of the major things you need to pay attention to, that could lead to this good intentions, bad outcome situation, is that oftentimes
00:10:28
Speaker
you are not your user. So you could think in your head, oh, I have all of these experiences I can draw from, and these are the pain points I experience in my life, and if I had a product like this, it would solve problems for everyone. And those are obviously good intentions; you have deep and intimate experiences with the problems you're proposing to try to solve for. However, if you're not actually in the context, sitting with the actual users of the product you're building, you could end up skewing the direction in a way that is self-serving to people that look like you, think like you, operate like you, and not actually be
00:11:10
Speaker
solving the root of the issue for the wide majority of people, or even the minority of people that might need to benefit or potentially get ostracized in that process. So I think it's really important, from this intentionality perspective, to really check in with yourself to understand: hey, the people that I'm building for might not think the same way as me. They might not look the same as me. They might not want to do the same types of things. They may not have the same values or the same morals. They might not have the same goals. Before even getting into the design process of thinking, well, what should we actually do here, and what's the right thing to do, we need to actually
00:11:50
Speaker
get to the point where we actually understand the problem and all the angles that we could approach

Ethical AI Implementation and Organizational Strategies

00:11:55
Speaker
it from. And maybe at the end of the day, we choose one segment or direction to go in for the purposes of scoping the problem to be less daunting. But at least that was an intentional choice rather than an oversight, just because we are biased to think in the ways that are naturally self-serving to us. Yeah, that makes sense. And I think a concrete example where good intentions have led to bad places is maybe recommendation algorithms and social feeds; that's gotten a lot of exposure. You know, we're trying to ultimately promote engagement because we think
00:12:30
Speaker
engagement on our platform is good, but then oftentimes what people engage with is negativity and divisiveness. There's been some study of YouTube recommendation algorithms where what you're trying to produce is just that people spend more time watching stuff on your platform, but in some cases that produces radicalization pipelines. And certain companies are trying to reduce that by intentionally intervening in upstream parts of their algorithms. Like, I think a couple of weeks ago there was a big scandal with Gemini. Gemini is well known for its image generation capabilities, and for trying to imbue diversity into AI-generated images.
00:13:11
Speaker
And ultimately that could be good from a representation perspective, especially since a lot of people aren't able to see themselves in traditional media, and this could give more people opportunities to, you know, be reflected more ubiquitously. But when there's a prompt that is historically focused, like, please generate an image of World War II, and then you have minority people, you have Black people and Asian people, showing up in Nazi uniforms,
00:13:44
Speaker
we can be in a really uncomfortable position, because ultimately that is not a reflection of our history. That's not something we want to perpetuate. The intention there was that diversity in imagery can help with this representation issue. But in what circumstances is that appropriate? And in what circumstances is that actually really harmful and manipulative of things we need to look back on in our history in order to continue to solve for those problems in the future? What that situation really underscored to me, because a lot of people were, I think, reading in a lot of intention that Google didn't really have, is that there was a problem in the distribution of their training data. They tried to patch it with a system prompt, and the patch went wrong. But what's interesting is that even the experts in this type of technology are unable to really anticipate how their fixes are going to affect things, right? Their inability to control it, to me, is more of the interesting story than
00:14:42
Speaker
you know, some people have kind of tried to weaponize that as though Google had some nefarious agenda. Moving on from dark patterns and ethical product design, you've also been pretty outspoken about the organizational factors in ethically introducing this technology into an organization. In your TED Talk, you addressed managing a process of AI adoption without creating fear, and I think you focused on three aspects of your strategy there: transformational transparency, collaborative AI augmentation, and reskilling to realize potential. So could you tell us a little bit more about these and where empathy plays a role in them?
00:15:24
Speaker
Yeah, totally. So to build a little bit of context for why I came to thinking about this problem in the first place: it was one of those problems I had to face head on. When I joined my team at Indeed, I was going into a team that had done things a certain way for quite a bit of time. And I was asked to come in and introduce machine learning models to hopefully speed up certain workflows and help us scale to more markets and countries than we were able to with just a limited set of individuals. And that was not really a popular role to come into; most people probably saw me as being very threatening to that team. I did feel at the beginning like I had specific goals that I was told to have, and
00:16:13
Speaker
there was this team of people that I was ultimately entering as a guest, right? It took some really empathy-building conversations, for lack of a better word, with this team. To me, I was excited about this technology. I'm like, this is going to make your life so much faster. You're going to do so many more things. You can spend time doing things you don't have time to do right now because you're doing all of these rote tasks. I can't think of a reason why you wouldn't want this to happen. But through conversations and engaging with the team members themselves, as you're trying to do this workforce transformation by introducing AI into a workflow that hasn't experienced it before,
00:16:58
Speaker
there is a lot of fear. And a lot of that fear is really justified, and a lot of that fear is because people just don't really understand the implications of what's going on. In my particular case, because the machine learning models were doing very similar tasks to what this operations team was doing in the first place, they were performing quite badly at first. And in order to actually make those models performant enough to intervene and do some of those tasks, it took quite a large effort from my team to be engaged in training and fine-tuning those algorithms up to the level of quality and standards that our humans were naturally able to achieve, because these people went to school, and some of them even have PhDs.
00:17:44
Speaker
And it was really challenging, I think, to go to those people that are highly educated, highly skilled, and introduce to them a problem where they're basically training an algorithm that is replacing a lot of the core of what they've worked on for their entire careers, right? Why would they want to do that? What is their incentive to do that? What does it give

Empathy in AI Integration and Workforce Transformation

00:18:07
Speaker
them? It was a really challenging problem to solve, because I was being pushed, and my incentive systems were to move as quickly as possible and make it as performant as possible. And you give these folks with PhDs a task of labeling yes/no questions. It's reductive in nature. And if you don't actually zoom out and give folks the full picture, it can be really, really demoralizing, I think. So
00:18:36
Speaker
yeah, that's the reason I came to doing this TED Talk in the first place. And that transparency up front of, hey, this is what's happening. Realistically, the workforce is fundamentally going to be transformed by the introduction of AI. Nearly every single occupation is exposed in one way, shape, or form. And those that pick up and embrace and learn those technologies might not be replaced by the technology itself, but they might be replaced by people that are actively trying to learn how to use these skills. And if you don't give people transparency into what's actually going on and get their consent in that process, ultimately you're going to run into a really bad conundrum where you have folks that are demoralized. And oftentimes we've seen, not in my particular team, but
00:19:28
Speaker
folks will, you know, rebel against the system and try to make it worse so that they ultimately don't have their jobs lost to these systems. So that transparency up front is really important. And then, sitting down with your team and asking them: what do you like to do? What do you enjoy doing? What would you rather hand off to someone else? And let's give you more time in your role to work through those things that are challenging and stimulating and invigorating for you in your career and your career growth, and let's get rid of some of the stuff that you don't even enjoy doing,
00:20:03
Speaker
and target the intervention of AI on those types of tasks so that people actually have the room and space to grow and ultimately take charge of their own professional destiny. And then we get into reskilling: doing some of these tasks the way people have done them for years is not going to be the most efficient way anymore. And so if management is not giving people leeway and free space and time to discover what else they want to do, whether that's reskilling into another area or upskilling in the same area they're in right now, people are going to naturally get stuck. And that's something that's deeply important. Like, you know, as an individual contributor at work,
00:20:55
Speaker
you don't have the power to carve that time out naturally for yourself. So placing that responsibility on managers and management to really set continuous learning and development as a forefront value of the team, and actually committing to giving people the time and energy to pursue those things, is deeply important in this era where everything is kind of being transformed in front of our eyes. Yeah. I love what you're saying about reskilling, because you're talking about very much a bottom-up approach. When we talk about reskilling, a lot of times it's, oh, let me bring in this expert. And then I, as the CEO or the VP, already know exactly what you need.
00:21:46
Speaker
But there is a place for that; I'm not saying not to. But that point you made about letting people discover it for themselves, I just want to emphasize that so, so much. That's exactly what I've seen across my career and with the teams that I've led. And that is where innovation comes from too, because when you have that empathy piece, where people recognize that they've got psychological safety, they recognize that they can explore new things, and they have that idea of experimentation and learning, that just creates an amazing culture of innovation. But it takes time. It's not direct, it's not linear, but you do get there. I do have a question too. I love this quote from your TED Talk where you said, "the same exponential improvement in AI systems is becoming a looming existential threat to the team I manage." I think a lot of us feel that. There was just some research from
00:22:42
Speaker
Kat Hicks and her team over at Pluralsight, in the Developer Success Lab, about just how much anxiety developers are facing. This was focused on software developers, and I think it was almost half, like 45% of developers, who have this really, really deep anxiety around being replaced. So in addition to the organizational frameworks, what are some things that you're doing to help you and your team navigate these feelings?

Building Safety and Experimentation in Teams

00:23:10
Speaker
Yeah, that's a really great point. And I think some of that anxiety is, people don't want other coworkers at work to know that they're using AI, for that exact reason: if people found out that I was using AI, they're going to think I'm lazy, or that something else could do this way faster than I can. And so I think
00:23:31
Speaker
I'm actually encouraging people to find interesting ways to use it. AI is not going to help you in every part of your career at this point; that's just the point we're at in terms of the development of these technologies. But there are points where you can intervene in your day-to-day and make things easier for yourself. And I think having those conversations, again, coming from management to create that aura of psychological safety, but also in the communities that you have in the workplace, or even with your friends that might span multiple industries, make it a show and tell. Being very transparent and upfront about how people are leveraging the technology to help them in their day-to-day, I think, fosters a
00:24:19
Speaker
culture of experimentation without fear and penalty, because it really helps people ground themselves in the reality of what is and isn't possible right now. And actually stress testing these systems, putting yourself in the space where you can test the limits of what these technologies can do for you and what your unique value propositions as an employee really are, takes away this more elusive sense of, everything is coming for your jobs, the whole world's gonna go out of business. Once you actually spend time engaging with these technologies and talking with other people to understand how they're using them,
00:25:03
Speaker
I think it can make things a lot less scary, because you have that kind of direct interface to really make it clear where you do and don't want to implement that. And I think that culture of experimentation really helps with that psychological safety element.

Empathy in Tech's Mission and Ethical AI Development

00:25:24
Speaker
Excellent. Excellent. I was hoping you would say the magic words: psychological safety, which has been found to be so foundational to how things get done well. Yeah, to interrupt you just real quick, that was actually one of the outcomes from the research from Kat Hicks: one of the most important things to help with that anxiety is culture and belonging.
00:25:46
Speaker
So having people feel exactly what you said, Madison, where it's like, okay, I'm scared that my job is going to be at risk. And yeah, there is some of that. But then helping people understand: okay, where can you make it better? And then having that culture of, we're going to help make sure that you feel a sense of belonging. And I think there's just so much intention with all of this. This has been such a great conversation. I'm so excited. Yeah. So we have one final question that we ask all of our guests: what is the most important thing you think should happen at the intersection of empathy and technology?
00:26:24
Speaker
Yeah, that's a really great question. I think there's been so much research about developers of technology needing to build those empathetic points of view in order to actually build good technologies. But I think technology is only good when it is participatory, and technology can only be participatory when things are clear, transparent, and ultimately accessible to more individuals for them to be able to understand. And so my main thing is, you know, I think with AI in particular, a lot of the rhetoric or the narrative around artificial intelligence is this kind of black box mentality: you're never gonna understand why the model does what it does. And
00:27:15
Speaker
there are more people questioning AI now because it has become more accessible. Think about ChatGPT as a user interface. We're engaging with AI in a way that's a lot more visible than I think it has been to folks in the past. We've had Siri forever. We've had Google Maps forever. We've had recommendation algorithms. We've had many, many different applications of AI in folks' lives in a more invisible way that people find convenient, but they haven't really scrutinized it in a way that helps us understand: how are they making these recommendations? What data have I shared in order for TikTok to know me so well that it knows my very niche habits and interests? And so I think
00:28:04
Speaker
not being overly critical, because a lot of these technologies are really great, but one of the main ways in technology that we as technology developers think about things is dogfooding your own stuff, which is: eat your own dog food, right? Play with your own products and understand what role you and your data play in them, and where you want to, you know, adopt more or less of that. And I think being a really conscious consumer is deeply important, so that as these technologies progress and grow, and regulation adapts and grows, and new things emerge on the market, you can actually be a part of the conversation rather than being a passive consumer.

Conclusion and Call to Action

00:28:49
Speaker
And then you can hold the people that build these technologies accountable, because you have that knowledge. Whether you have the technical skill set to explain exactly what's happening or not doesn't really matter. But I think you really need to be able to consciously consume technology, and understand where it intervenes in your life, to be able to advocate for yourself as an individual and for the communities that you're a part of. Oh my gosh. I feel like I could talk about this for days with you, and I'm just so grateful that you came on the show. Thank you so much for sharing your time and your expertise. And for people who want to follow your work, how can they get in touch to learn more?
00:29:29
Speaker
Yeah, feel free to reach out to me on LinkedIn. My name is my name, so you can just search it there. I'm very open to having conversations there; just shoot me a message or connect with me. I would love to hear from all of you. This is, again, going to impact everyone's lives, so it's really important to see the whole playing field of the scary things and the exciting things; it's just like any change that we need to experience together. Yeah, such good stuff. And thank you so much again, Madison, for coming. And thanks to you for listening. Empathy in Tech is here, and the reason we exist, is because we are on a mission to responsibly accelerate the adoption of empathy in the tech industry,
00:30:10
Speaker
and we're looking to do that by closing the empathy skills gap, like we just talked about, giving people that time to discover; by treating empathy as a technical skill; by teaching technical empathy through accessible, affordable, and actionable training just like this; by building our community; by breaking down the harmful stereotypes and tropes that exist around empathy; and by promoting technical empathy for ethics, equity, and social justice. And Madison, thank you so much, because I think what you're talking about really encompasses all of that. So if you found this conversation interesting, dear listener, head over to empathyintech.com to keep the conversation going and join our growing community of compassionate technologists. Thanks again for listening, and we'll see you all in the next episode.