
Episode #185: Arathi Sethumadhavan

S7 E185 · The PolicyViz Podcast

Arathi Sethumadhavan, Head of User Research for Ethics & Society at Microsoft's Cloud+AI, visits the podcast to talk about her work and what it means to use data ethically.


Transcript

Introduction to PolicyViz and the IEEE VIS Conference

00:00:15
Speaker
Welcome back to the PolicyViz Podcast. I'm your host, Jon Schwabish. I hope you are all well and healthy. Last week was a surprise episode of the podcast where I talked to Robert Kosara and Alvitta Ottley about the IEEE VIS annual conference that took place just a few weeks ago. So I hope you had a chance to listen to that. You should also check out the Data Stories podcast episode that was released a few days later, which also covered the IEEE VIS conference.
00:00:40
Speaker
As with any conference like that, it's not going to be fully covered in any single podcast or pair of podcasts. So there's a lot of great information that you should consider checking out over at the IEEE VIS website.

Who is Arathi Sethumadhavan?

00:00:54
Speaker
So on this week's episode of the show, we turn back to data and data visualization and the ethical use of data. And I'm happy to have Arathi Sethumadhavan on the show.
00:01:04
Speaker
She is the head of user research for Ethics and Society at Microsoft, and I found out about her through a very interesting LinkedIn article on the work that she and her team are doing over at Microsoft. Arathi has published over, I think, 40 articles on a range of topics:
00:01:22
Speaker
patient safety, human-robot interaction, affective computing. And she also has a book, Design for Health: Applications of Human Factors, that came out earlier this year. So in this week's episode, we talk about her work and her team's work,

What is the Role of Ethics in Data and AI at Microsoft?

00:01:38
Speaker
and what it means to think about ethics and society as it relates to data, as it relates to products at Microsoft, and as it relates to artificial intelligence, which clearly is becoming, and is going to be, a major force in all of our lives, especially for those of us who are working in data and in the field of data visualization. So I hope you'll enjoy this week's episode of the show. And here is my discussion with Arathi.
00:02:10
Speaker
Hi, Arathi. How are you? Thank you for coming on the show and taking time out of your day. Well, thank you so much. I'm really happy to be here. I am excited to chat with you. I have been reading about you and your work, and it sounds very exciting. And I want to learn more about it, because the ethics around data, product development, and communication is certainly important, especially in the moment that we're having here in the United States. So I thought we would start with having you maybe talk a little bit about yourself and your background.
00:02:39
Speaker
and the team that you're leading at Microsoft, and then we can get into talking about what the role is and what the team does and how you work with all sorts of folks over there at Microsoft.
00:02:50
Speaker
So my background: my undergrad is in computer science. I grew up in India and then I moved to the United States for grad school, and my PhD work is in engineering psychology. So essentially I study how people interact with complex systems. Most of my work in my grad school days was focused on aviation, and then, post-graduation, I moved from aviation to another safety-critical industry,
00:03:19
Speaker
which is healthcare. And I worked on medical device product development for the longest time. And fast forward to today, I'm on a team at Microsoft within Microsoft's cloud and artificial intelligence business. And the team is called Ethics and Society.

Why is Proactive Ethics Important in Tech?

00:03:36
Speaker
So, Arathi, when it comes to the work that you all are doing, what do you mean by ethics?
00:03:45
Speaker
Ah, I'm glad you asked that question. Ethics means a few different things to me. Ethics is a responsibility to understand and respect the values, needs, and concerns of end users and other impacted community members, including those who are not direct paying customers. Ethics is being proactive
00:04:06
Speaker
and not reactive, right? So this means considering potentially harmful consequences of the technology and mitigating those prior to release. And that can actually result in greater trust in our brand, too. Ethics for me is also translating principles to everyday work.
00:04:26
Speaker
So what I mean to say is that it's very important to have role-based training and tools and practices that engineering teams can use to translate these principles into practice. And lastly, I do want to say one thing: I view ethics as innovation. By embedding a multidisciplinary team and using multidisciplinary approaches,
00:04:50
Speaker
I think it's really, really possible to create exceptional products, services, and tools for our customers. So this can actually be a competitive advantage. Ethics doesn't have to be viewed as one compliance thing that you have to do; instead, it can actually be a competitive advantage for you. Yeah. I mean, it's fascinating. I guess it shouldn't really surprise me. But the group itself is not that old, right?
00:05:17
Speaker
That's right. The group started in its current state during April of 2018 or so. So we've been around a little over two years.

Exploring the Ethics and Society Team's Multidisciplinary Approach

00:05:28
Speaker
But our manager had been leading a similar sort of team, at that point called BusAI and Ethics, prior to this particular team being formed.
00:05:43
Speaker
Can you talk a little bit about the team? How big is it, and who are the folks on it? Are they computer scientists? That you have a PhD in engineering psychology is a whole other conversation we can have at some point; I'm fascinated. What are the backgrounds of the folks on your team?
00:05:58
Speaker
Yeah, that's a great question. We're actually quite a multidisciplinary team, and that's intentionally so, because we believe that we can innovate responsibly if we are able to bring in diverse perspectives, right? And so that we're able to challenge dominant views.
00:06:16
Speaker
So therefore our team comprises people like me. So I lead the user research discipline within the team. And my role is really to bring the perspectives of impacted community members into shaping products. We also have designers and project managers and engineers. So it's quite an interdisciplinary team. I would say about 30 or so is our team size at the moment.
00:06:42
Speaker
Wow, yeah, that's pretty sizable. So this movement of responsible development of technology has, I think, taken shape and hold at several organizations around the world. And I was hoping that you could talk about why this is important, especially right now, given the conversation that we're having around the country.
00:07:02
Speaker
Yeah, I mean, there are obviously really remarkable and well-intentioned applications of technology, right? Especially if you think of AI and other emerging technologies. I mean, AI is transforming a lot of major industries that you can think of from healthcare to agriculture to transportation, but there's a flip side, right? And that is that a lot of these technologies are being deployed with very little assessment of the impact that these can have on individuals and societies.
00:07:31
Speaker
I don't know whether you've seen this, Jon, but last year there was a news article that came out where a voice deepfake was used to scam a CEO based in the UK. Yeah, I remember that. Yeah. Then you may have seen the article that came out on an AI recruiting tool that automatically categorized male candidates as being superior to female candidates.
00:07:55
Speaker
Of course, you know about the role of social media in terms of misleading vulnerable voters. I mean, there was a great TED talk by Carole Cadwalladr on the role of social media in Brexit. You hear news about racial disparities in automatic speech recognition systems. We hear about facial recognition systems and how they discriminate against certain groups, and so on, right?
00:08:21
Speaker
So the point here is that it's very important to define the current and next generation of technological experiences with intention. And that's why organizations are starting to pay a lot of attention to it.
00:08:37
Speaker
Because essentially the argument that you're making, right, is that these processes and programs are good for the bottom line. I think that's an argument that more and more people are starting to make: you don't do these things just because you feel like you need to have more women on the board; you do these things because having more women on the board makes you a better, more profitable company. It's an argument that, I think, goes a long way.
00:09:02
Speaker
So how does this flow from your role at Microsoft? Is it just about the products that are going outside the organization, what I can buy at the Microsoft Store? Or is it also embedded within the internal work and also, I would think, spreading to the culture of the work inside the organization?
00:09:22
Speaker
Ah, that's a really interesting question. So here's the thing, right? Of course, it has to manifest in the products that you're building.

Why Must Tech Development Be Responsible?

00:09:33
Speaker
But in order to do that well, you've got to have the right processes in place. And you've got to acknowledge that it's people who build these technologies, right? So it's very important to have the right sort of organizational culture and mindset.
00:09:47
Speaker
And we do that in a few different ways. We do that through developing role-specific workshops for different disciplines. We try to obtain leadership sponsorship for ethical product development. We also have to do a lot of work in terms of incentivizing ethics and making that a core priority in how people think about products. So that's super important. The culture is really, really important because
00:10:15
Speaker
at the end of the day, people create these products. Then it's really about bringing the perspectives of diverse individuals into product development. And by that I mean talking to impacted groups and community members and really using that to challenge dominant views.
00:10:33
Speaker
And it's really about pursuing principled product development. And luckily for us, we are in a space where our leaders have created that for us. Microsoft published something called The Future Computed, in which our leaders, Harry Shum and Brad Smith, talk about six ethical principles for AI: fairness, inclusion, transparency, privacy and security, reliability and safety, and accountability.
00:11:01
Speaker
And we apply these principles when creating products. So these are all the things that you need to do within the organization. But you also have to realize that we are in
00:11:13
Speaker
an environment where things are constantly shifting, things are constantly changing, new regulations are emerging, you know, and national and international events, all of these can instill feelings of trust or feelings of fear amongst people or end users, right, towards technologies. So it's very important to take these into account and respond to new knowledge as it emerges as well.
00:11:38
Speaker
I'm curious, as you already mentioned, you have a pretty heterogeneous group of people on the team itself. And you mentioned also that you talk to stakeholders and members of the community. So I'm curious, for the more quantitative people on the team, is that hard to do? This is something that I've been curious about and talking about with people lately.
00:11:57
Speaker
For people who are trained in and tend to do quantitative methods and quantitative work, this idea of talking to actual people, talking to the people that we study and the people that we communicate with, is a pretty foreign idea.
00:12:12
Speaker
You know, we don't do that. We download data or we collect data and we analyze it, but we don't actually talk to people. So do you find that it's hard for some people to do that? And do you find that by having this broad team with all these different skill sets, you're able, in some sense, to kind of train people on how to be good at these stakeholder meetings and outreach efforts?
00:12:32
Speaker
Yeah, yeah. So luckily for us, we are a large company, right? So that's, I'd say, the privilege of having different disciplines. So we don't necessarily expect an engineer or a data scientist to go and engage directly with stakeholders. We do expect them to think through some of these questions and the benefits and harms that the technologies they're working on can have on human beings.
00:13:02
Speaker
But we don't necessarily expect them to do the direct engagement with the community, because that might not be their area of expertise. It's a totally different skill set. So at Microsoft, we have human-centered disciplines like user researchers and designers, and that kind of onus we put on those disciplines. Now, within the Ethics and Society team, I lead the user research discipline.
00:13:27
Speaker
And what my team does is just that, which is engage with the community. And we do that through a variety of qualitative and quantitative research techniques. So when it comes to this responsible product development, you bring together these different perspectives. How does that ultimately inform the final product? You've talked about this a little bit, but I'm just curious how you take those perspectives and let them inform the final product. And to expand on that a little bit,
00:13:57
Speaker
How do you convince the developers elsewhere at Microsoft that these are components that they should bring into their work? I'll give you a good example from my experience, which is obviously very different. Very early on in my tenure at Urban, I was trying to reduce the number of pie charts that people were creating at my organization. People were making pie charts with 12 slices in them, and it's not a good technique.
00:14:24
Speaker
But whenever I would try to argue that someone should not use that, they wanted evidence to support my argument. They're researchers, so they want that sort of hard evidence. So when it comes to responsible product development, how do you take the different perspectives of the people that you and your team have talked to and convince your other colleagues at Microsoft to embody and embrace these concepts and ideas? Yeah, so I think the answer sort of lies in your question itself.
00:14:54
Speaker
I think there's a lot of power in the actual perspectives of these different stakeholders. So like I said earlier, we use a lot of qualitative and quantitative research techniques to solicit feedback from end users and other community members. And what we do is conduct interviews or large public perception surveys.
00:15:20
Speaker
We do community juries, where we bring product teams together with the impacted community members so that they can hear the perspectives of these individuals directly. These quotes are extremely powerful. So what do you mean by bringing perspectives in from other people into the work that you all do?
00:15:44
Speaker
There are a few ways we do this. One is, as part of your product development process, ensure that you have considered a diverse pool of end users, and most importantly, include end users who are typically forgotten or excluded. So this means that you're intentionally going out and recruiting people from the LGBTQ+ community, racial minority groups, women, introverts, those with visual impairments, speech impairments, and so on.
00:16:14
Speaker
We believe that when we can address the needs and concerns of marginalized communities, we are able to better address the needs of a broader range of people. The second point I will make here is, think about your indirect stakeholders in addition to your end users and other direct stakeholders. So these could be individuals whose jobs could be impacted by the technology you're building, for example.
00:16:39
Speaker
Third point here is to seek advice from domain experts and human rights groups, especially as you work in novel complex domains.

How Does Community Feedback Shape Product Development?

00:16:48
Speaker
We have worked with experts on situation awareness, policy, law, and human rights on different projects, as we realize our expertise in some of these spaces may be limited and these individuals can actually help us.
00:17:03
Speaker
So when product teams hear firsthand the needs and the values and concerns of the community directly, that's really powerful. So that helps a lot. And the important thing to realize is that getting feedback is not a one-time thing that you do and then call it a day.
00:17:25
Speaker
I'm talking about getting feedback from the community throughout your product development lifecycle. If you think about it, right from envisioning to defining the problem space, to prototyping and building, to post deployment, throughout all of these phases, you've got to engage the community and learn. I mean, that's the only way you can create a superior product.
00:17:50
Speaker
Now, you asked about convincing. One, the data speaks for itself, so that helps immensely. Two, I like to think that most people have good intentions in mind, so when they see data, it's easy to persuade them. Three, I have to say that when we engage with different product teams, we have a formal handshake that happens with the leadership of that product team. People create products, and unless you change
00:18:20
Speaker
the organizational mindset, it's very difficult to do anything anywhere. We work very closely with product teams where there's strong interest and strong buy-in on the kind of work that we are bringing to the table. So that immensely helps as well.
00:18:39
Speaker
Yeah, no, it's interesting, because it seems like not only does it affect the product, it affects the culture of the people that you work with. And then that continues through the lifespan of the product and into the next product, or what have you. I do have to say, Jon, that the teams we partner with see the rigor and the processes that we bring to the table. You know, we do end-to-end engagement with them. And that includes
00:19:07
Speaker
everything from doing harms-modeling exercises, to anticipate what can go wrong with the technology and what the impact on different stakeholders could be, to the research with the community, right? It could be qualitative research activities or more quantitative sessions. I'm trying to get my head around some of this. Do you have an example of a product that
00:19:36
Speaker
could harm a customer or a stakeholder, and how your team would then come in, with evidence, and say, this is how we might go about addressing it? From what we've talked about already, it doesn't sound like you come in with a fix; you come in with suggestions and data. But I'm just curious, can you give us a concrete example so I can get my head around what a harmful product would look like?
00:20:04
Speaker
Certainly.

What are the Ethical Concerns with Custom Neural Voice?

00:20:07
Speaker
So there was a product that we worked on called Custom Neural Voice. The whole idea here is that you can take snippets of someone's voice, so you just need 500 to 1,000 utterances of somebody's voice, and this can result in a voice font.
00:20:22
Speaker
But the thing about that is it could then say things that you never actually uttered, leading to something like a voice deepfake, right? So if you think about it, there are huge repercussions if this is not developed right. So we did a lot of very interesting work on this front. We created a gating process around this particular technology so that it's not available to everyone freely in the market.
00:20:49
Speaker
We vet the enterprise customers that we would be providing the service to. We worked with voice actors, because they are a group of individuals whose jobs could be impacted by this particular technology, to understand their perspectives. And this resulted in a set of guidelines around how companies that use the service need to be transparent
00:21:13
Speaker
with this group of individuals and so that became a part of our terms and conditions. And this service actually has a lot of benefits too, right? If you think about it, it can be hugely advantageous for people who don't have a voice.
00:21:27
Speaker
or who have speech impediments. It can be a confidence booster. So we also did a lot of primary research with individuals with speech impairments to try to understand their unique needs, and that resulted in a set of guidelines around
00:21:44
Speaker
how to create this service so that it caters to the needs of this group of individuals. And lastly, I have to say that it's very important, when humans interact with different experiences, that they don't feel deceived. And it's very easy to feel deceived in a situation like this, where you're interacting with a synthetic voice, because it can be extremely realistic-sounding,
00:22:11
Speaker
if you don't know that you're actually interacting with an automated agent. So we also, in fact my team, did a bunch of studies to understand the right disclosure that's needed for consumers when interacting with a synthetic voice. So we approach mitigations from different angles.
00:22:33
Speaker
Right. So it's not necessarily about making the voice sound more computerized to get away from that problem; it can be about warnings on the product, so that the consumer is aware of it.
00:22:47
Speaker
That's right, because an extremely low-fidelity voice can actually be really disturbing. That hampers the user experience. How do you keep it high fidelity but, at the same time, make sure that it's an authentic experience and there is no deception happening? I want to add, though, Jon, that there are absolutely certain situations in which we recommend a high-fidelity synthetic voice not be used.
00:23:28
Speaker
Even if there's some sort of disclosure, it can give the consumer a false sense of confidence, because you tend to equate a high-fidelity-sounding voice with a high level of capability. And that may not be the case all the time, because it's still an artificial agent. So there are absolutely certain situations where you want to avoid using that fidelity.
00:23:40
Speaker
Oh, okay. Yeah.
00:23:52
Speaker
And we outline all of those in our guidelines for the responsible development of this tech. Right. That's really interesting. Okay, so I want to close by maybe taking a practical, concrete approach to this.

Techniques for Responsible Tech Development

00:24:08
Speaker
So are there specific tools, techniques, actions that you would recommend for responsible and ethical development of technology? And I might even
00:24:19
Speaker
throw in use of data as well. I think probably a lot of people listening to this podcast are working with data day in and day out. They may not be creating physical products, but they're working with data. And so I'm just curious about what sort of techniques and tools you and your team would recommend for those folks.
00:24:39
Speaker
Yeah, so I suggest that technologists actually do the following, and I'm going to talk about like 10 things. Okay. So the first one is really simple. It's pretty basic. If you think about it, it's really trying to determine what problem you are trying to solve.
00:24:57
Speaker
Is there actually a technology need for this problem? We actually discuss this a lot: is this a human problem, or is this a problem that can actually be solved through technology? So you want to understand that first. Two, who are your impacted stakeholders? And by that I mean end users as well as other stakeholders who can be indirectly impacted by the technology. For example, people
00:25:21
Speaker
whose jobs could be impacted by the technologies that you're building. The third step is really thinking through what the benefits of this technology are for each of the stakeholders that you just identified.
00:25:33
Speaker
And then what could be the potential harms? I suggest using some sort of analytical approach to systematically think through these benefits and harms. Internally, we have developed certain tools that help us do this in a systematic manner. But there are also publicly available frameworks, such as value-sensitive design frameworks, that can help think through
00:25:56
Speaker
what are the values and concerns and beliefs of different stakeholders and what could be the potential harms that these technologies can bring. Then I would really suggest that you think through some of the key ethical principles, such as fairness, reliability, privacy and security, inclusion, transparency, all of that. So by that, I mean asking yourself some key questions, which is
00:26:25
Speaker
Does your system treat all stakeholders equitably and prevent undesirable stereotypes? Does the system perform safely even in the worst case scenario? Is the data protected from misuse and unintentional access? Has the system been created in an inclusive manner to make sure that there are no barriers that could unintentionally exclude certain groups of people? Are the outputs of the system understandable to the end users?
00:26:51
Speaker
And, finally, are you taking accountability for how the systems are operating and scaling, and their impact on society? The sixth point I would make is really to include diverse disciplines as part of your product development process; that includes social scientists, human rights groups, designers. And that's really important, like I said earlier, to challenge dominant perspectives. Point seven is really to make sure that once you've identified these harms,
00:27:21
Speaker
you create the right sort of work streams to mitigate them, which includes involving diverse stakeholders throughout all stages of product development, from envisioning to post-deployment. Then, point eight, like I said earlier, is acknowledging that people develop technologies, so you've got to create structures where people are actually incentivized for making ethics a core priority.
00:27:50
Speaker
Point nine is making sure that you have developed role-based training, best practices, and tools that product teams can use, because principles will only go so far. Unless you have tools and best practices that teams can adopt and run with, you're not going to be successful. And point ten, of course: you also need processes that will hold product teams accountable.
00:28:17
Speaker
So those are sort of my 10 points.

Why Do Humility and Empathy Matter in Tech?

00:28:20
Speaker
But I want to close with saying that it's really important to recognize your domains of ignorance, right? Because this is a new space. For all of us, this is a new space. We are learning by doing. And so having that humility is very, very important.
00:28:39
Speaker
Yeah, I think that's a great point to end on. And I would probably add empathy to that, but the humility of saying that you don't know and you're willing to have these conversations and make these tough choices is such a key part of everything that you're doing.
00:28:54
Speaker
Well, Arathi, thank you so much for coming on the show. This is fascinating stuff. I hope others will learn from your experience at Microsoft and hopefully be able to take some of these tips (we've got 10 tips, which is great) and take them into account in their own work. Well, thank you so much.
00:29:16
Speaker
And thanks to everyone for tuning into this week's episode of the podcast. I hope you enjoyed it. I hope you will check out Arathi's work and the work of her team over at Microsoft. And I hope you will consider supporting the podcast. Please tell your friends and colleagues about it. Write a review on iTunes or wherever you listen to this podcast. Or head over to my Patreon page, where for just a few bucks a month you can help support the podcast, the transcription, the web hosting, the audio editing.
00:29:39
Speaker
All of that good stuff that allows me to bring the show to you. All right, well, until next time, this has been the PolicyViz Podcast. Thanks so much for listening. A number of people helped bring you the PolicyViz Podcast. Music is provided by the NRIs, audio editing is provided by Ken Skaggs, and each episode is transcribed by Jenny Transcription Services.
00:30:06
Speaker
If you would like to help support the podcast, please visit our Patreon page at patreon.com slash policyviz.