Kandrea Wade visits the podcast to talk about her work on algorithmic identity and the digital surveillance of marginalized groups. More information and links in the full show notes.
Welcome back to the PolicyViz podcast. I'm your host, Jon Schwabish. I hope you and your friends and families are well and healthy and safe in these strange and turbulent times. I've spent the last few months doing a lot of reading, spending a lot of time outside, and hanging out a lot with my kids and my wife.
00:00:32
Speaker
We are gearing up for a new, very different looking school year, where I'll be working side by side with my kids from our home in Northern Virginia. But I'm very excited to kick off season seven of the podcast, and I have a lot of great guests coming your way. You'll also notice a couple of new things about the show: no intro music and new outro music, for one, but the same high-quality sound editing. I'll also be providing transcriptions of the show and, of course, great guests.
00:00:55
Speaker
I also have a few new things on the blog. I have a new series called the On series where I write short, sometimes undeveloped thoughts or ideas about data visualization and presentation skills.
Upcoming Projects and Focus on Racial Equity
00:01:06
Speaker
I'm also entering the final stages of working on my next book, Better Data Visualizations, which is set to come out in January 2021.
00:01:14
Speaker
Personally, in response to the many protests about police brutality and inequality that have been rocking the United States, I'm trying to take a more racial equity lens to my research and to my data visualization work, and I'm trying to extend that perspective to the podcast. So, in that vein:
00:01:32
Speaker
In this new season of the podcast, you'll hear more from people of color and from people doing work to serve underrepresented groups and communities. I'll also be doing a number of talks in the next few weeks, one for the New York data visualization meetup and another one in October for the IEEE VIS conference. And I'll put those links on the show notes page if you would like to join me and learn more about the work that I've been doing in these areas.
00:01:58
Speaker
So to kick off this season of the podcast, I am excited to welcome Kandrea Wade to the show.
Introducing Kandrea Wade and Her Research
00:02:03
Speaker
Kandrea is a PhD student in the information science department at CU Boulder, and she focuses on algorithmic identity and the digital surveillance of marginalized groups. We talk about what brought her to this area of research, which is a really interesting story, and the different projects that she has underway in her lab at CU Boulder.
00:02:20
Speaker
So, I'm looking forward to bringing you a great set of guests this year, and I hope you'll continue to support the podcast by sharing it with your friends, rating and reviewing it on your favorite podcast provider, and, if you're able, supporting it financially by going over to my Patreon page. So, let's start season seven. Here's my conversation with Kandrea Wade. Hi, Kandrea. Thanks so much for coming on the show. How are you? I'm doing very well today, Jon. How are you? Thanks for having me.
00:02:45
Speaker
I'm doing great. I'm really excited to have you on the show and talk about the work that you're doing. I wanted to start by letting folks know how I found out about you. So earlier in the year, I signed up my kids for this Skype a Scientist program, which is a great free endeavor that folks are doing where
00:03:03
Speaker
scientists in all sorts of different fields get up and virtually talk about their work and talk with kids and answer questions. And so I had signed my kids up and they attended, you know, one on turtles and one on someone doing something with whales or something like that. And they were sort of moderately engaged. And so I saw yours come up and I was like, ooh, this one looks really good. And I said, you guys, you want to watch this? This woman's going to talk about, you know, bias and algorithms and machine learning. And of course they just rolled their eyes at me and walked away.
00:03:32
Speaker
But I watched the whole thing and found it really interesting. And so I'm really happy that you were able to chat with me, because you've got this new paper out and I just want to learn more about the work that you're doing. So maybe we can start by just having you talk a little bit about yourself and your background, and then we can talk about the work you're doing. Absolutely. Thank you so much. So, about me: I've always been in love with art and media and technology, which has led me to
00:04:00
Speaker
a super diverse path. And so I have a bachelor's in technical theater and I worked as a technical director and a lighting designer for over 12 years. And in that time I taught at a college, I worked at several concert venues, I held technical director positions. And then from there I started working in events and film and TV. I worked for South by Southwest and Viacom, Bravo, Disney, ABC, and several other production companies. And over that time,
00:04:27
Speaker
In addition to entertainment, I've also worked in education for 15 years. And so I worked as a senior admissions advisor for the Princeton Review. And I did that before I went to NYU for my master's in media culture and communications. And so at NYU, I wanted to study demographic programming for production companies and TV, film, streaming services. And as I moved through my program, I realized that in order to do demographic programming, you have to have user data, of course.
00:04:56
Speaker
And this made me start thinking about what exactly these companies can see and what they're doing with that user information. So this introduced me to the whole world of user data ethics and the bigger questions of how we as humans are sorted and categorized in systems. And so I started taking all of my electives in data science and applied statistics, and then made my master's program
00:05:18
Speaker
a combination of a media studies focus with a data ethics emphasis. And I started to focus on bias and ethical dilemmas in the usage of user data, specifically looking at tech, corporate systems, operations, and government. And this led me to CU Boulder, where I'm currently working on my PhD. And my focus is in algorithmic identity and the digital surveillance of marginalized groups. And I am a part of the Identity Lab, which is led by Jed Brubaker,
00:05:47
Speaker
and the Internet Rules Lab, where we focus on ethics and policy, and that lab is led by Casey Fiesler. So that's just a little bit of an overview of me. Yeah, you have really come to this from far afield. Absolutely, absolutely. But it's putting together both sides of my brain that I really love. It's art and technology, and it's computers and people, and so it's really a perfect match made in heaven. Well, and also you seem to have the practical background to see how
00:06:17
Speaker
the actual production work is done, and so how all this feeds together in that whole ecosystem of advertising and actually producing the media. Absolutely. And you know, that's a lot of what it came down to when I was looking at demographic programming. I was looking at streaming services a lot, and streaming services like Netflix are kind of a black box. You don't really know how they're curating what you're seeing on your page. Everyone has different
00:06:42
Speaker
sorts that come up whenever you look at your Netflix. And so
Understanding Algorithmic Identity
00:06:45
Speaker
I found that to be super fascinating. And so, in trying to target audiences for what media they wanted to watch, I was like, oh, audience targeting, people targeting. This is all very fascinating to me. And so, you know, trying to understand how people work and what they want, and how that's now being done through computers and algorithms, was just super fascinating to me.
00:07:05
Speaker
Right. Can you talk a little bit about this phrase that you just mentioned, algorithmic identity? What does that mean to you? What does that mean in the lab and where you work? Absolutely. So, you know, we all have our physical identity. You know, you have your race, your gender, your cultural identity, your ethnicity, you know, where you come from as a person.
00:07:24
Speaker
But all of that now, especially as we all move forward with these profiles and all the digital footprints that we create with everything we do online, especially in the times that we're in where everyone's on the computer, you now have a digital copy of yourself. And that copy of you is not necessarily completely representative of who you are as a person. And so your algorithmic identity is, I would say, kind of like a proxy of who you are as an actual physical person. So for every one of us that's physically walking around in the world,
00:07:54
Speaker
there's a digital copy of us existing inside of these systems that's being categorized and looked at and sorted all the time. And so in my lab, we look at algorithmic identity, and we're trying to figure out ways to define this, ways to explain this to other people, ways that we need to maybe have protections for what we now call the digital citizen. Because it's not just about your rights in the physical world anymore. It's about what rights we have, like GDPR's right to be forgotten,
00:08:21
Speaker
or what rights do we have to delete our data or what rights do we have to even pull that data to understand how many data points there are on us floating out there in the world. And so there aren't as many regulations and policies and laws as there are for us in the physical world. And so we look at the algorithmic identity as an extension of the self and what we need to be doing to make sure that that self is protected as well.
00:08:47
Speaker
I want to ask about the paper that you published in May, but maybe I'll get there through a quick segue question. When you are working on these algorithmic identity measures and profiles, and you mentioned earlier distinguishing between different demographic groups in the paper, and specifically you're talking about race and gender.
00:09:07
Speaker
How does that interplay with how our virtual avatar exists and how companies and governments are using that information differentially across these different racial, ethnic, gender, and other groups? Well, that's a great question. Something that happens a lot of the time is that as we create these profiles, we have some agency in that. We get to determine things for ourselves.
00:09:35
Speaker
For instance, on Facebook or Snapchat or whatever your social media profile is, you get to make the determinations and self-report your gender, your race, your identity, your age, if you want to, if you don't want to.
00:09:48
Speaker
A lot of times, that's a really great way that we can actually represent ourselves. But there are also other data points that are being tracked by healthcare companies, insurance companies, credit scoring, things like that, that we don't necessarily have as much control over. And so it's what maybe the government
00:10:08
Speaker
defines us as, and the markers we have to tick on boxes for census purposes or birth certificates. Those can be literal boxes that we're put into, that we have to define ourselves within. And those little data points get put into systems where there are basically assumptions made about us, or there's matching that's done for certain profiles. And so there are different implications that come from that. And what we see a lot in this field is that typically
00:10:33
Speaker
the same sort of socioeconomic, real-life physical bias that we see, and some of the discrimination that happens in the real physical world, is now unfortunately being transferred into these digital systems. And so the implications that we see a lot of the time really affect marginalized groups, which is what I study: they really affect people of color, LGBTQIA+ groups,
00:10:58
Speaker
people with disabilities. They're either not being considered, or, due to bad or biased historical data that's been collected on them, we're training these systems in a digital space to reflect the same bias and discrimination that happens in the physical world. That's a lot of what we look at, and how we can maybe mitigate and solve a lot of that, because
00:11:23
Speaker
You know, computer systems can do what humans can, but faster and at a bigger scale. And so we want to make sure that if we're going to be categorizing humans in this way in systems now, that we can account for the bias that's being input into those systems.
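To make that transfer of bias concrete, here is a minimal illustrative sketch. It is an editorial toy example rather than anything from Kandrea's work, and the neighborhoods, decisions, and the "model" itself are invented: a system that simply memorizes the majority historical decision for each group will reproduce whatever discrimination is baked into those records, only faster and at a larger scale.

```python
from collections import defaultdict, Counter

# Invented example data: past human loan decisions, keyed by neighborhood.
historical_decisions = [
    ("neighborhood_A", "approve"), ("neighborhood_A", "approve"),
    ("neighborhood_A", "approve"), ("neighborhood_B", "deny"),
    ("neighborhood_B", "deny"),    ("neighborhood_B", "approve"),
]

# "Training": memorize the most common historical outcome for each neighborhood.
by_group = defaultdict(Counter)
for group, decision in historical_decisions:
    by_group[group][decision] += 1
model = {group: counts.most_common(1)[0][0] for group, counts in by_group.items()}

# "Deployment": the learned rule now applies the old pattern to every new case,
# at whatever scale it is run.
print(model)  # {'neighborhood_A': 'approve', 'neighborhood_B': 'deny'}
```

Nothing in this sketch ever asks why the historical denials happened, which is exactly the problem described above.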
00:11:35
Speaker
Right, so I want to ask about the data collection part, but I want to give you a chance to talk about this paper that you've published because it goes hand in hand with what you just said.
Challenges in Facial Recognition and Image Databases
00:11:45
Speaker
So the title of this paper (and I'll post it to the show notes for folks who are interested) is "How We've Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis." So I thought maybe we could start by having you just give us the overview of the paper in general terms, and then we can dive into some of these details.
00:12:05
Speaker
a few questions for you about the paper itself. But yeah, maybe just give us an overview of what the hypothesis is, and then also maybe talk a little bit about how the data are collected in both this paper and just more generally. I mean, I certainly don't have a background in this, so I'm just curious how the analysis and the data are actually collected and used. Absolutely. So this was a study of 92 image databases that are utilized as training data for facial recognition systems.
00:12:35
Speaker
So we wanted to analyze how they expressed gender and race and how those decisions were made and annotated. I mean, if they even were annotated.
00:12:43
Speaker
and how diverse and robust those image databases are. So our findings showed that there are actually issues with image databases, as to be expected. But those issues specifically surround the lack of explanation and annotation when it comes to how race and gender are defined within those databases. And so we often found that, A, gender was only represented as a binary, that is, as male or female,
00:13:10
Speaker
except for a few instances that accounted for it in their reporting, but still only contained images that were listed in the binary. And then B, we came across issues of race either being defined as something insignificant or indisputable or apolitical when we know that in the physical world, there are many layers of sociopolitical factors like status, income, country of origin, parental lineage.
00:13:35
Speaker
You know, all of these things play into how someone's race or ethnicity is defined. But we also noted that the diversity of these databases was often lacking. So that, again, contributes to the problems that we see so often in facial recognition systems and their ability to recognize diverse faces, and especially those of color and individuals of trans identity.
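As a concrete illustration of those two findings, here is a minimal sketch of what an annotation record with a binary gender field and a coarse, unexplained race label might look like. The field names and values are assumptions for illustration only, not a schema from the paper or from any of the 92 databases studied.

```python
from collections import Counter

# Hypothetical annotation records with invented field names and labels.
annotations = [
    {"image_id": "0001.jpg", "gender": "female", "race": "white"},
    {"image_id": "0002.jpg", "gender": "male",   "race": "asian"},
    {"image_id": "0003.jpg", "gender": "male",   "race": "white"},
    # ...thousands more, with no record of how these labels were assigned.
]

# A model trained on these labels can only ever output the label set it was
# given: identities outside the schema (non-binary, Native American,
# multiracial, and so on) are simply not representable.
print("Gender categories the system can express:", sorted({a["gender"] for a in annotations}))
print("Race categories the system can express:", sorted({a["race"] for a in annotations}))

# Skewed composition compounds the problem: if most images come from one
# group, recognition accuracy for everyone else tends to suffer.
print("Label counts:", Counter(a["race"] for a in annotations))
```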
00:13:56
Speaker
And so when the facial databases have this information, I assume it's being collected in multiple ways. So one is, I as the individual: there's a picture of myself on Facebook and I can tag myself with gender, race, or whatever tool options Facebook gives me, and then that informs how an algorithm might assign those characteristics to that image as well. Is that correct? Yes, that's correct.
00:14:22
Speaker
Yeah. And, oh, go ahead. Go ahead. No, you're right. I was going to say, in this paper in particular, we were looking at databases that had already been built for, you know, public use, for corporate use, for things like that. And so these had already been built,
00:14:39
Speaker
specifically, let's say, from a lab that recruited people. They wanted to build a database, they wanted to give it to people or sell it to people, so they just went ahead and put out a call for faces, and then they collected those images and categorized them. A lot of these databases were already pre-built as a package to be given or shared with the world so people could train their own systems on the images.
00:15:06
Speaker
Interesting. I want to get more into the content of the paper, but I do want to start with the title, because I think the title is actually telling, along with the language in the rest of the paper. So the first part of the title of the paper is "how we've taught algorithms to see identity." And I was wondering if you could talk both about
00:15:25
Speaker
the connotation of that, of how algorithms don't just exist; someone has to build them and train them. And also how you and your co-authors, and also, I guess, the folks in your lab, view how these algorithms are constructed in terms of how they reinforce the stereotypes and the discrimination and the racism and prejudice that you've already mentioned. But yeah, I guess just that overall sense of how the title here and the language throughout the paper is more,
00:15:55
Speaker
I don't know, it's certainly active, but it also kind of takes responsibility for these algorithms, as opposed to just saying, yeah, they just kind of exist and off they go. Yeah. So I would say that the use of this language, the "how we've taught," could be considered the royal we, which accounts for all of us in the field. So that's researchers, practitioners, coders.
00:16:21
Speaker
Even the participants who provide the images that go into these databases, we're all responsible for teaching systems of AI and machine learning how to do the jobs that we're asking them to complete. So it's up to us to do a better job of ensuring that those systems are fair and equitable for all races, cultures, gender identities, you know, and these systems are really no smarter than a toddler, essentially.
00:16:44
Speaker
And they'll never do anything more than what they're told with the information that they're given. So when it comes to machine learning, well, that machine needs to be taught. And like I said, it's up to us as those teachers to give those algorithms their best shot at being as, you know, accurate and equitable, fair and representative as possible of all of the people they're trying to assess.
00:17:06
Speaker
There's a lot of talk in the data and data visualization field about being responsible consumers and users of data. And I'm curious if you've thought about how consumers of
00:17:20
Speaker
I guess this information, which is sort of a weird thing, because I think about a lot of the media that we talked about earlier; it's my ads on Gmail and Google being targeted toward me. But what can we do as consumers of information to, I guess, try to be responsible consumers of this information? Maybe that's the easiest way to say it.
00:17:37
Speaker
So, for just the everyday person who's using the computer or things like that, it depends on whether you want to be identified or not. I mean, there are two different ways to look at this. It depends on if you want to be identified, and if you do want to be identified, ensuring that it's accurate. And so from what I have gathered generally in my research, a lot of people are very uncomfortable with being identified, especially in marginalized
00:18:02
Speaker
communities or protected classes, groups, and others that may be at risk of surveillance, essentially. And so a lot of those people spend time obfuscating their identity. They would rather not be found in a system whatsoever, and if they are found, they don't want that information to be accurate. A lot of people don't know that there's a backend feature to Google (and I think you can do it in Facebook as well)
00:18:29
Speaker
where you can go in and see what they think of you for marketing and ads. You can go in and see what they think your political leaning is, what they have assessed your race to be, even if you've never entered that information. You can go look this up, and a lot of people would rather that information be inaccurate, because they don't want ads targeted to them. They don't want an online
00:18:52
Speaker
system that can make a determination about credit scores or things like that; they don't want it to find that information about them. But if you do want to be found online, which is completely reasonable too, or if you do want to leave this digital footprint, the most you can do as a consumer is just ensure that it's accurate. And so there are ways that you can go into your own profiles and edit your information to make sure that it is as in line as possible with who you are as a physical person. But also go to the backend of Google and see who they think you are.
00:19:21
Speaker
And then you can either change the settings in there manually, or you can change your user behavior to be more in line with who you are as a person. But it all just depends on how much you want to be involved in this digital space. And that's up to every individual to make that determination for themselves.
00:19:39
Speaker
Right. Before I ask a little bit more about the paper, I wanted to turn back to something you just mentioned, because I want to make clear, for folks who may not be thinking about this, that what we're talking about here is not just that you're scrolling through your newsfeed and an ad for the thing you just looked at on Amazon shows up in your app. It's not just about advertising.
00:20:02
Speaker
And I was just wondering, because you had just mentioned credit scores, which I think is a great example, credit scores, health, housing. I was just wondering if you could talk about a few of the things where these algorithms can reinforce these stereotypes and discrimination that you've already been talking about.
00:20:18
Speaker
Absolutely. I'll talk about credit scores, and I'll talk about insurance and maybe loan determinations. There are a lot of different data points that are used, like we've been talking about. They're all proxies of who you are as a person. They don't know exactly who you are, so they have to use other data points to make assumptions.
00:20:40
Speaker
And assumptions are a problem; anytime we make assumptions, that's not a good thing.
Socioeconomic Biases in Digital Systems
00:20:44
Speaker
But zip code, zip code is one that is used really widely to make determinations about who you are as a person. And this goes into bias. This is how we have, you know, gerrymandering. We have a lot of lines that are drawn that, you know, separate people from different resources, school systems, hospitals, and those lines are also put into these algorithms that make determinations about whether you are deemed worthy of receiving something like a loan, or whether you
00:21:10
Speaker
are deemed worthy of receiving a certain type of health care, or at a certain rate. And so these things are determined by where they think you live and what type of neighborhood that is. Any person can decide to buy a house in any neighborhood, but they make these assumptions based off of what they typically and historically have seen of those neighborhoods, whether it be a more disadvantaged neighborhood or one they see as being a very wealthy and lucrative neighborhood. Something as simple as your zip code can drive determinations about,
00:21:37
Speaker
you know, how worthy they deem you to be. When it comes to things like healthcare, there are, you know, different issues that come into play with the reporting that doctors have done. In the physical world, there's been a lot of discrimination in people not recognizing how symptoms may present differently in African Americans or in black people
00:22:04
Speaker
versus the symptoms that present, let's say, for a heart attack in white people. And so those same biases that were written into charts get input into these algorithms. And so there are misdiagnoses that happen even with the assistance of AI, based off of there not being fair reporting on what these symptoms look like and who deserves to have treatment for them or not.
00:22:28
Speaker
And then, like I was saying, you have finances that are being tracked as you make your purchases online. And it's not just about the ads that you see; it's about these algorithms also being able to see what you're buying and when you're buying it, and how much money you have in your bank account and how much credit you have. And with all of those things being put together, it's making a profile of who you are as a buyer, as a consumer.
00:22:56
Speaker
And so that's also leading into determinations about what you may be deemed worthy to receive or not, whether it comes to what ads you see, requests you make for loans, or even purchases that you make. So those are all things that are being tracked in systems all the time. They're not just trying to target you to sell you things; they're also trying to see what you're buying
00:23:17
Speaker
to make determinations about who you are as a person and where you're buying these things. Are you shopping at Walmart or are you shopping at Nordstrom? Are you shopping at Barneys New York? Those are all very different things. Right. Yeah, I think it's Virginia Eubanks who has this great book on this topic, and she, I think it's her, has described this as the virtual redlining of our society, where we've moved now from these physical maps of housing discrimination into the virtual world, which, as you mentioned,
00:23:47
Speaker
works a lot faster and a lot more broadly, because computers are doing it now, informed by the decisions that people are making when they build the algorithms. Absolutely.
00:24:00
Speaker
I wanted to ask one last question about the paper. So there's a sentence in the paper that really struck me, because I focus a lot on good annotation in data viz, and I thought this was really interesting. So in the paper, you and your co-authors say, "Further, when they are all annotated with race and gender information, database authors rarely describe the process of annotation." And I was hoping you could talk a little bit about what annotation means in the context of your field and your research.
00:24:28
Speaker
Absolutely. So in this particular study, we're looking at images, right? So when it comes to writing a description of an image of a person and giving that description to a computer, those algorithms need information like race and gender, at least in the systems as they're built at this moment. They need markers like race and gender to be able to start to sort and categorize and match similar images. Well, those descriptions are written by the people who are developing those databases of images.
00:24:58
Speaker
And we often found that this process was done in a very vague but determinate way. And so when the individuals who collected these images saw what they presumed to be a black person, they labeled the image black. The same when they saw someone they thought to be, you know, white, Asian, Indian, male, female.
00:25:16
Speaker
They made these determinations for the subject in the image and typically made this with no clear distinctions or justifications for why and how these assignments were made aside from, well, we just did it. So these distinctions and justifications would be the annotations. So it would be some sort of previously defined set of rules or guidelines that would inform exactly why the images were labeled as they were.
00:25:42
Speaker
There would need to be guidelines for what is defined as a woman visually, what is defined as a man visually, what is determined to be black or white, and so on. But without these clearly defined rules and guidelines, which could then be argued, disputed, iterated, and improved upon, we're just left with determinations on images of individuals that may not be accurate or true to what those people would see as their own identity.
00:26:08
Speaker
And then there's no way to argue them or refine them to be better and more accurate for these people. So it's basically just like someone said so. And that's why that's not a real annotation or justification; it's just "because I said so." One of the main issues with databases attempting to determine identity is that the identity of the subject is often reported for the subject instead of allowing the subject to self-identify.
00:26:36
Speaker
And then again, those in charge of creating the databases have now essentially defined race and gender for an entire set of individuals without considering their lived embodied experiences or their positionality. So in the paper, we talk about this as the visible versus invisible features of identity. And so think about it. If you don't have a diverse group of people making these determinations,
00:27:01
Speaker
And they're only using categorizations that they defined and do not explain. It's really easy to see how this leads to contextual collapse, and it removes a lot of potential variance and diversity that we could have in these databases. Not to mention that, again, the individuals who are typically pulled for these images are not representative of many diverse populations that we have in the world, and they often skew toward being white or white-appearing people. So if we take all of that into account and then we teach it to a system,
00:27:30
Speaker
and we tell it to read a face, and let's say it's a black Native American trans person's face, the system already cannot recognize it as a trans person due to it only being capable of reading gender in the binary, male, female. It also now has trouble reading a black face due to a lack of training images in the database, and then it's lost in finding distinctions between black and Native American because it was never told that Native American was something to look for.
00:27:56
Speaker
And so now we have a system that can't identify a person that it's attempting to read. And if it does, it will output incorrect information, just doing the best with what it's been told. So this leads to many issues in facial recognition that have, you know, really serious implications. Like right now, we have a lot of issues of misidentification
00:28:15
Speaker
with things like traffic cams and street surveillance being used, especially right now during the protests. There are also a lot of issues with identity verification being harder, especially for diverse people, when it comes to things like passports and IDs at airports. They use a lot of facial recognition for that, and it slows down the process of diverse groups of people being able to even get through, you know, security to get to another country.
00:28:41
Speaker
And systems, you know, just not identifying diverse subjects at all and giving back error responses. And so there's an entire world of issues that plays into this, but that's essentially what we were looking at in the paper, and that's the direction we were going with annotation.
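As a rough sketch of the kind of documentation and self-identification that the conversation says is missing, here is one hypothetical shape an annotation record could take. Every field name here is an assumption made for illustration; it is not a schema proposed in the paper or used by any existing database.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FaceAnnotation:
    image_id: str
    # Self-reported identity, if the subject chose to provide it.
    self_reported_gender: Optional[str] = None
    self_reported_race: Optional[str] = None
    # Labels assigned by annotators, kept separate so they are never
    # mistaken for self-identification.
    annotator_gender: Optional[str] = None
    annotator_race: Optional[str] = None
    # Provenance: who labeled the image and under which written guideline,
    # so the decision can be audited, disputed, and revised.
    annotator_id: Optional[str] = None
    guideline_version: Optional[str] = None
    notes: str = ""

record = FaceAnnotation(
    image_id="0001.jpg",
    self_reported_gender="non-binary",   # the subject's own description
    annotator_gender="female",           # what a labeler guessed from the image
    annotator_id="labeler_17",
    guideline_version="v0.3-draft",
    notes="Annotator label conflicts with self-report; self-report takes precedence.",
)
print(record)
```

The point is only that separating self-reported labels from annotator guesses, and recording which guideline was used, turns the labeling decision into something that can be argued with, audited, and improved.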
00:29:01
Speaker
Really interesting. Well, I'll post the paper or link to the paper on the show notes. Before we go, I wanted to ask what work you have on the horizon. What's the future bring for you in terms of your work and your dissertation and the lab?
00:29:19
Speaker
Oh, thank you for asking. Yeah, so I'm doing some work right now looking at how qualitative researchers conduct their work and their analysis, and trying to find ways that we can have a better understanding of that, and whether there are ways we can, you know, maybe build tools to help them do these things. I'm also looking at some issues of where,
00:29:45
Speaker
in the country, we see data literacy most lacking, and how we help those communities inform themselves and educate themselves on how to be, you know, smart consumers of data, like we were just talking about. I'm also looking right now at the things that are going on with the protests and protest surveillance. And then, moving forward, looking at a dissertation or, you know, work later down the road.
00:30:11
Speaker
Like I said, I have a background in media and entertainment, and I really have a love for theater arts, because I feel that it has a way to connect with people in a way that a lot of other things can't. And so entertainment is extremely powerful in being able to disseminate messages. And data literacy is a very huge subject to try to teach to people. So I do see, down the road, that I'd like to incorporate data literacy into
00:30:36
Speaker
messages of passive learning via entertainment and theater arts. It's a bigger goal, but, you know, I'll refine it as we go. But those are really, you know, what's on the horizon for
Future Directions and Data Literacy
00:30:47
Speaker
me. That's great. That sounds like great stuff, and sorely needed. And I think you have a lot of work to do; it sounds like great work. Thank you so much. Kandrea, thanks for coming on the show. This was really interesting. It was great to chat with you and I really appreciate it.
00:31:04
Speaker
Absolutely, thank you for having me.
00:31:13
Speaker
And thanks to everyone for tuning in to this week's episode of the PolicyViz Podcast. I hope you enjoyed that interview with Kandrea Wade, and I hope you'll check out the various links and resources that I have put up on the show notes page. You can go check out Kandrea's bio, you can check out her research and her work over at her lab at CU Boulder, and you can check out the various talks that I'll be giving over the next couple of weeks. So until next time, this has been the PolicyViz Podcast. Thanks so much for listening.
00:31:43
Speaker
A number of people go into helping make the PolicyViz podcast what it is. Music is provided by the NRIs. Audio editing is provided by Ken Skaggs. Transcription services are provided by Pranash Mehta. And the show and website are hosted on WP Engine. If you'd like to support the podcast and PolicyViz, please head over to our Patreon page, where for just a couple of dollars a month, you can help support all the elements that are needed to bring this podcast to you.
00:32:13
Speaker