
Imagine: What if we designed and built AI in an inclusive way?

Imagine A World

How does who is involved in the design of AI affect the possibilities for our future? Why isn’t the design of AI inclusive already? Can technology solve all our problems? Can human nature change? Do we want either of these things to happen?

Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

In this second episode of Imagine A World we explore the fictional worldbuild titled 'Crossing Points', a second-place entry in FLI's worldbuilding contest.

Joining Guillaume Riesen on the Imagine a World podcast this time are two members of the Crossing Points team, Elaine Czech and Vanessa Hanschke, both academics at the University of Bristol. Elaine has a background in art and design, and is studying the accessibility of technologies for the elderly. Vanessa is studying responsible AI practices of technologists, using methods like storytelling to promote diverse voices in AI research. Their teammates in the contest were Tashi Namgyal, a University of Bristol PhD studying the controllability of deep generative models, Dr. Susan Lechelt, who researches the applications and implications of emerging technologies at the University of Edinburgh, and Nicol Ogston, a British civil servant.

This world puts an emphasis on the unanticipated impacts of new technologies on those who weren't considered during their development. From urban families in Indonesia to anti-technology extremists in America, we're shown that there's something to learn from every human story. This world emphasizes the importance of broadening our lens and empowering marginalized voices in order to build a future that would be bright for more than just a privileged few.

The world of Crossing Points looks pretty different from our own, with advanced AIs debating philosophy on TV and hybrid 3D-printed meats in grocery stores. But the people in this world are still basically the same. Our hopes and dreams haven't fundamentally changed, and neither have our blind spots and shortcomings. Crossing Points embraces humanity in all its diversity and looks for the solutions that human nature presents alongside the problems. It shows that there's something to learn from everyone's experience and that even the most radical attitudes can offer insights that help to build a better world.

Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.

Explore this worldbuild: https://worldbuild.ai/crossing-points

The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected].

You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

Works referenced in this episode:

https://en.wikipedia.org/wiki/The_Legend_of_Zelda
https://en.wikipedia.org/wiki/Ainu_people
https://www.goodreads.com/book/show/34846958-radicals
http://www.historyofmasks.net/famous-masks/noh-mask/

Transcript

Critical Thinking in Technology

00:00:00
Speaker
On this episode of Imagine a World. I guess the hope is that people will think more critically about everything, about the systems, about the technology, but then also recognizing that, you know, everyone needs to be a part of this. And we don't want just the people that are hyper tech focused and working on technology involved. It's very much like voting. You think that you're one person, your vote doesn't matter, but
00:00:27
Speaker
In actuality, it's like, no, if you and everyone else thinks like that, then it does affect things. So it's very much getting everyone involved and making sure that everyone is thinking about this.

'Imagine a World' Mini-Series

00:00:46
Speaker
Welcome to Imagine a World, a mini-series from the Future of Life Institute. This podcast is based on a contest we ran to gather ideas from around the world about what a more positive future might look like in 2045. We hope the diverse ideas you're about to hear will spark discussions and maybe even collaborations. But you should know that the ideas in this podcast are not to be taken as FLI endorsed positions. And now, over to our host, Guillaume Riesen.

The World of 'Crossing Points'

00:01:26
Speaker
Welcome to the Imagine a World podcast by the Future of Life Institute. I'm your host, Guillaume Riesen. In this episode, we'll be exploring a world called Crossing Points, which was a third place winner of FLI's worldbuilding contest. This team put a lot of thought into how new technologies might work alongside human nature, rather than trying to change us directly. The most advanced AI systems in this world are extremely large and expensive to run, so they're mostly limited to political and corporate use.
00:01:52
Speaker
Crossing Points has a special focus on exploring how simpler tools are experienced by a range of diverse individuals. There's an emphasis on the unanticipated impacts of new technologies on those who weren't considered during their development. From urban families in Indonesia to anti-technology extremists in America, we're shown that there's something to learn from every human story. This world emphasizes the importance of broadening our lens and empowering marginalized voices in order to build a future that will be bright for more than just a privileged few.
00:02:21
Speaker
This world was created by a diverse team of five living in the UK. Our guests today are Elaine Czech and Vanessa Hanschke, both PhD candidates at the University of Bristol. Elaine has a background in art and design and is studying the accessibility of technologies for older adults. Vanessa is studying responsible AI practices of technologists, using methods like storytelling to promote diverse voices in AI research.
00:02:44
Speaker
Their third teammate, Tashi Namgyal, is also a University of Bristol PhD candidate who is studying the controllability of deep generative models. The remaining team members are Dr. Susan Lechelt, who researches the applications and implications of emerging technologies at the University of Edinburgh, and Nicol Ogston, a British civil servant delivering welfare policy to UK citizens.

The Team Behind 'Crossing Points'

00:03:06
Speaker
Well, hi Vanessa and Elaine, thanks for joining us. Hello. Hi, thank you for having us. Thanks for being here.
00:03:14
Speaker
So I'm curious, first of all, how the five of you on your team came to work together on this project. So I think Tashi was the one who first spotted the competition for the Future of Life Institute. And Tashi, Elaine, and I, we all do our research in the same lab, so the Bristol Interaction Group. So yeah, that's how we know each other. And then we were thinking about who else to pull in.
00:03:42
Speaker
And Susan and Nicol, they both did their undergrad at Edinburgh with me. So I knew them and I knew they'd be sort of interested. And Nicol's like more of the policy person. So we needed him. All the rest of us were kind of in the human computer interaction space. And yeah, Nicol, we pulled in for some politics knowledge.
00:04:10
Speaker
Can you say a little bit more about your particular skill sets? I guess, thinking about it, me and Susan are both more on the creative side of things. My background is in art and design. I know that Susan has been doing a lot of creative informatics. Tashi also is in the creative sphere. I think he's our
00:04:34
Speaker
musician in residence, I guess. Cool. Artist, yeah. Yeah, he does music and AI stuff. Yeah, I saw he's doing like music from people's doodles, which is very cool.
00:04:45
Speaker
Yeah, that was his like last project, but also like in the AI space. And I do kind of research in responsible AI. So thinking about making it more ethical, and thinking about how to take all the kind of responsible AI work that's based on guidelines and very abstract just now, and taking it into the more concrete. So yeah, that's how I fit in.
00:05:14
Speaker
How would you say that your personal perspectives, like where you live or your educational backgrounds, have influenced how you think about the future? So that's really, I think, part of why we came together to do this, is because we all were coming from sort of diverse backgrounds. So I, if you can't tell from my accent, am American. So I grew up in the US, but then I did my master's in Japan. I know Vanessa is very multicultural. She's from
00:05:44
Speaker
I'm going to let you talk about yourself. I'm going to talk about you right here. Well, yeah, I was born in Germany and I'm like half German, half Indonesian. That's why I like brought in that Indonesian story. But I've like lived in several places, like Elaine. Now I'm studying in the UK and yeah, I've lived in Cameroon. I've worked in Italy. So it was quite
00:06:06
Speaker
important for me to kind of see the little stories that actually happen all around the world and kind of think of things in a more decentralized way, as opposed to like humanity always going in one direction. But actually, there's many things going on that don't feature so often in the big media landscape. Yeah.

Technology and Fulfillment in 'Crossing Points'

00:06:40
Speaker
The world of Crossing Points looks pretty different from our own, with AGIs debating philosophy on TV and hybrid 3D printed meats in grocery stores. But the people in this world are still basically the same. There is still a dazzling array of languages and cultures which shape our attitudes toward technologies. Our hopes and dreams haven't fundamentally changed, and neither have our blind spots and shortcomings.
00:07:02
Speaker
I wanted to hear more about how the people of Crossing Points navigate their new world and how AI tools manage to help so many of them flourish. How do people find fulfillment in your world? Like, what's a good life like? Yeah, I think there's kind of like two aspects to that. So there's one that we kind of brought out, which was like the self-expression side. So people like to be creative and I guess
00:07:30
Speaker
It's something that kind of exists now as well. Like they try to improve themselves and become better. But then on the other hand, we also thought that like the social side was very important. So if technology takes over a lot of the burden of work, then ideally we would also have more time to engage with each other socially and help other people and volunteer maybe.
00:07:55
Speaker
Yeah, yeah. One big factor in your world you describe is the spread of personalized AI tutors that kind of help people do this personal growth and connection stuff. The most popular one you describe is Skill Jump.
00:08:06
Speaker
And it's this system that basically gets to know you and then recommends new things you should learn to try to reach your goals. I really like this one's origin story. It kind of started as a corporate employee training system, but then it gets taken over by an activist collective when the company goes under and kind of becomes like a tool for the people. And I'm curious to hear more about how people in your world actually use this. Like you're saying, things have really changed. You know, you have universal basic income and many jobs can now be automated.
00:08:31
Speaker
So what do people actually aspire to? What are they trying to do with their lives with these tools? Yeah, I think they try to go above and beyond what people could actually do before. So one of the ideas was, it was a bit like an immersive system with VR, and like the whole world is kind of open to you, kind of like, you know, the internet, but in a lot more immersive way.
00:08:54
Speaker
It also would take away your physical limitations that you might have. And so we were thinking of all kinds of hobbies and creative stuff that people would explore without the physical limitation of where you can actually go and what your body could actually do. Yeah. You mentioned people like rock climbing in VR with like muscle feedback and stuff like that. Yeah.
00:09:19
Speaker
So in your world, AGIs are fairly rare, mostly because they have this huge scale. They just, it turns out they require a lot of processing power in your world. And so there's a lot of associated costs. They're heavily regulated by a UN consortium. And so they're mostly used in industrial and governmental contexts. Like there are AGI oracles in the US and EU that help to predict policy outcomes. I'm curious how people outside of these contexts relate to those AGIs.

Accessibility and AI Limitations

00:09:44
Speaker
Like what do they know about them and what they're capable of?
00:09:47
Speaker
Yeah, I think what you described there is actually like quite an important part of how we view AGI, is like the fact that they're big resource-consuming things. Because I think a lot of the AI conversation right now imagines AI just being everywhere, because it's something that lives in the cloud and just like magically does things for you. But actually,
00:10:11
Speaker
this technology is all rooted in a material sense. It lives in a data center, which consumes a lot of energy. ChatGPT apparently took a hundred million dollars to just train. And yeah, ChatGPT is nowhere close to AGI yet.
00:10:28
Speaker
So, how much would anything else kind of like consume? It's kind of like the state or the economy. People like know about these things and interact with those AGIs in like a limited sense in very defined settings, but it's not like they're always interacting. Like they know them, but they don't necessarily have a specialist. Like, yeah.
00:10:53
Speaker
Another theme I liked in your world was how diets have shifted. So you have most meat products now being hybrids of lab-grown and plant-based materials. They use AI-assisted gene editing to develop new textures and flavors. So there's this stuff called chicken, which is this really popular combination of chicken and bacon. Sounds great to me. And you cite an outbreak of bovine diseases in the 2030s as kind of speeding along this transition. I'm curious how this shift in diets has affected the way people think about animals, if it has.
00:11:22
Speaker
I mean, I think something that is sort of a theme throughout our world is that we don't really see a whole lot changing in human behavior, because I think things are very gradual. And I think we kind of even thought about this in the shift towards animals, thinking about like whale meat that's consumed in Japan. Um, it's very common and normal in some areas. So especially amongst like the Ainu, an indigenous people of Japan,
00:11:51
Speaker
eating whale meat is cultural and very accepted. But, you know, the majority of the country is like, no, let's not do that. So I think there are always sort of these shifts that occur with our like attitudes toward animals. But like the people who are die-hard, like the vegans who really care about animals and animal cruelty, will always be there. But I don't think you're going to have like a mass change of people's minds. Interesting.
00:12:27
Speaker
Crossing Points embraces humanity in all its diversity, and looks for the solutions that human nature presents alongside the problems. It shows that there's something to learn from everyone's experience, and that even the most radical attitudes can offer insights that help to build a better world. I wanted to hear more about this human-centered, inclusive approach to problem solving, and how it allows the people of Crossing Points to create a world where so many of them can flourish.
00:12:52
Speaker
I really appreciated how your world focused so much on like kind of unconsidered perspectives, or under-considered perspectives. People who we don't often think about when we're developing new technologies and systems. There's still this ongoing struggle for accessibility and inclusion. Like in one of your short stories, there's this Indonesian grandpa who complains that this super advanced AI healthcare tool that he's using still doesn't understand his dialect. I'm curious to hear some of the other approaches that are being taken in your world to tackle these issues.
00:13:22
Speaker
So my research is working with accessibility and accessibility of technology. And I work with particularly older adults and older adults who have dementia. So I think I'm very hyper aware of the limitations that technology even currently has in terms of accessibility and inclusion. So I think knowing how the limitations are now, you know,
00:13:47
Speaker
there's this constant sort of need to create accessibility and accessibility is still sort of oftentimes a second thought or something that has to be built after the product is built. I think it's hard to see that going away in the future, which is really sad because if you make something accessible to one group, it tends to make it accessible to everyone. So there's really no reason to not have
00:14:17
Speaker
accessibility designed and built into these systems that we're creating, but we live in an ableist society. And so that really keeps this stuff from happening. And this sort of is why we continue to have these failures of accessibility and inclusion.
00:14:33
Speaker
Can you imagine a future where, you know, as you're saying that the rising tide lifts all boats in this accessibility area, like what if AI systems were able to give us that foresight and consideration? Could we imagine a world where there aren't as many accessibility issues or is this just something kind of inherent to any technology?
00:14:52
Speaker
That is a tough one. Actually, I think this one I'm going to kind of turn over to Vanessa because I think, Vanessa, you really work with sort of getting companies who are creating these AI systems to think about inclusion and think about the ethics. And I think that's part of where we need to go.
00:15:12
Speaker
if we want this to happen. I mean, the way that I see that AI is just now is that it's being trained on usually historical data. And so kind of being trained on the world as it is right now. So yeah, I find it, I struggled to imagine a world where
00:15:34
Speaker
it could overcome most of those challenges. I mean, it's the way, I guess, I would see it in my research, where I'm like working with developers to reflect on their technology. And then sometimes I'll hear them ask, oh, so are we, are we there yet? Are we like ethical now?
00:15:51
Speaker
And I think it's the same with like accessibility is like, are we there yet? Are we in that accessible world? Although it's like a continuous process where we are sometimes redefining also what we think and learning. So I can't imagine us arriving at that big goal. We probably have to.
00:16:11
Speaker
continuously be open to engaging and reflecting and questioning and what will the accessibility needs of the people of the future be? We don't know. Yeah. Yeah. I think you brought up an excellent point of like constantly needing to be questioning and aware and constantly changing our views. I mean, I think in any science, we're constantly reevaluating and
00:16:37
Speaker
renegotiating what our theories are, what our hypotheses are, that sort of thing. And I know from the accessibility standpoint, we have moved from sort of the medical model of how we approach technology design and how we approach even medical care.
00:16:54
Speaker
to more of like thinking about an independent approach to now even thinking about sort of this interdependent approach where it's people are interconnected and there are reliances that occur whether you're able bodied or not able bodied. Yeah. Interesting.
00:17:12
Speaker
And I guess even when things aren't designed with certain groups in mind, people sometimes just like appropriate technology in different ways, and then just make them their own and manage in ways I can't even imagine. I don't know if you have a good example of that. I can think of a great example. So recently someone did a research project, and the paper was called My Zelda Cane, and it was looking at
00:17:41
Speaker
You know, in video games, there's not a lot of accessible games created, or games that have those sorts of accessibility elements, but people would hack the games. So for instance, in The Legend of Zelda, they would use their sword and they would hack at things to kind of know where they were in space.
00:18:01
Speaker
So very much like how people with visual impairments use canes, they would use their sword like a cane. That's fascinating. Yeah. I love that. Great research. Yeah. Yeah. So sometimes people will just make it their own. You won't even have to have someone to imagine what it could be. Yeah, right.
00:18:21
Speaker
Although that's not to say that accessibility should not be designed for. They'll figure it out. But maybe let's actually make it more inclusive.
00:18:34
Speaker
So in your world, there's this huge impact on the labor market from AI and robotics technologies. One of your other short stories has this household robot that can do all these different types of labor, like cooking and cleaning, even harvesting these plants that are being grown in the attic. And it seems like the people in this house really just need to delegate. There's not much else for them to do physically in the house to take care of things. And the devaluation of manual labor in general that results from this in your world leads to the manual labor revolution.
00:19:03
Speaker
I'm curious to hear a little bit more about this revolution and its impacts on your world. So I want to shout out Susan, because she really made a great point about this. You know, technology has historically been partial to automation. So the creation of the microwave, the creation of the washing machine, sort of all of these home appliances in this home appliance boom that occurred in

Automation and Social Change

00:19:28
Speaker
the 20th century led to the decrease in household labor and so a decrease from almost 60 hours a week in the 1900s to 18 hours a week by 1975. So that led to this change in the workforce. So this is one reason why we see more women in the workforce is because they don't need to be spending all their time at home necessarily doing these domestic tasks. So I think
00:19:56
Speaker
being one of three women on this team. We do very much think about this feminist perspective on automation and the impact on society. But that then also means that we have to think about sort of the invisible labor that occurs. So oftentimes when we have technologies that are created to promote automation that creates sort of this invisible labor, which means that someone has to maintain it. Someone has to be the one
00:20:26
Speaker
you know, doing the updates. Someone has to be the one who, you know, make sure that it's plugged in and it's charged. And again, this labor tends to fall on women and carers.
00:20:39
Speaker
I think the idea we had there kind of came from asking the question, where are we going with all of this? When we're like automating different tasks that humans usually do, then we kind of need to rethink what kind of tasks are left for the humans and how we should reshape the way that people then work. Yeah. That reminded me. Yeah. We did think that like one of the major sort of
00:21:08
Speaker
results of having this manual labor revolution would be greater expansion of UBI. And I think we tried to look into, well, where are there already sort of cases of UBI working? And sort of thinking then, if this becomes that some people will not have jobs, how do we become a sustainable society? And how do we sort of think about how we can take care of each other?
00:21:38
Speaker
Yeah, I really like how your world kind of takes human nature and tries to work with it so much. There's a really big dose of kind of psychological inertia in there that I think is true for how we respond to things. And one example of that is how you approach reducing global conflicts. So you have this AGI system that decides where to put stockpiles and trade routes.
00:22:02
Speaker
And it kind of puts them in key locations to try to reduce geopolitical tensions. Of course, now controlling this AGI that decides where stuff goes then becomes a focus of conflict in itself. But the AGI sees this coming and it creates this virtual war space where countries can compete for shares that give control over how to update the AGI. And I thought this is a really interesting twist on how to curtail some of our worst instincts and kind of shift the game through AI. Can you say a little bit more about how this whole system works?
00:22:33
Speaker
So we have to give credit where credit is due. This is definitely Nicol's baby. So, our political person. It definitely was kind of, I'm not exactly sure where Nicol was coming from. But when I've read what he's talked about with this sort of virtual war space, it kind of is, we know that humans come with an aggressive nature. It's our evolution. It's just kind of, it's going to be there.
00:23:03
Speaker
So having sort of something that is going to give people stakes, where there are higher stakes, is some way that you can actually sort of maybe mitigate that aggression. Because like I was talking with Vanessa earlier, and we talked about how, you know, the Olympics was supposed to be sort of that kind of thing, where it's this global competition, not violent,
00:23:27
Speaker
as a way of kind of being like, ooh, who's the strongest country? I never thought about that. Can you say anything about what the virtual war activities involve? Is it like a game of some kind, or like? So we had Nicol give us like a rundown of what he was thinking. And it sounded very much to me like betting on sports. So if Russia and China were in a war, and it was, okay, everyone
00:23:54
Speaker
in this virtual space has a share. They can put it towards either China or Russia, whoever they back, and then that can give them sort of virtual assets that they can use to combat each other in this world. It sounds a bit like a big game of Risk. Is that sort of in the ballpark, you think? Yeah, he loves Risk. Maybe that's why. Oh, there you go. There you go. But I think you're right. I think you're right. I'm sure he was thinking about Risk and
00:24:24
Speaker
like other ways that we can kind of mitigate against, you know, our violent nature. Yeah. I'm curious kind of from this more philosophical level, you know, some of our other winning worlds have technologies that let people change themselves directly, like become different types of people, have different desires even than they had before. What are your thoughts on like what parts of human nature can or should be changed and which ones we should try to work around in this way?
00:24:50
Speaker
Yeah, I think, I think Elaine, you said it before, like, I don't think we really saw humans changing so much. For one, we have a very diverse world there, where there is no like one human nature, and not one kind of human that will be in the future. I think if anything, we wanted to preserve human nature, maybe. Yeah. To me, that almost seems dystopian, where it's like, if I was to change my will,
00:25:17
Speaker
if I was to be like, well, I don't want to be so altruistic, I want to be able to make a lot of money, and just turn off that part of my brain that is like, don't use people. It feels like it would be a bit, I don't know, inhumane. I really appreciated this angle that your team took, of like, let's change the things around us and see if we can make things better in that way, rather than trying to change ourselves. That's an interesting approach.
00:25:45
Speaker
I feel like this also gets into some conversations about ableism as you were saying earlier, this idea of like we should change the affordances of our environment and the way that our society is built to make people's lives easier the way they are instead of trying to like quote fix people. Yeah, for sure.
00:26:02
Speaker
Well, even though technologies in your world don't try to like change human nature so much in this way, not everyone is on board with these technologies.

Diversity and Human Nature in Tech

00:26:10
Speaker
So one particular group that you highlight calls themselves HUMen, that's H-U-Men, for Heavily Unplugged Men. And they're kind of an extremist, sometimes terroristic anti-technology group. They overlap with QAnon and anti-vax folks that we have today. And their most extreme members live off-grid in like these tiny homes and, as I said, sometimes do like terroristic acts.
00:26:30
Speaker
I'm just curious what the AGI systems in your world think of these folks. Is it a priority to try to de-radicalize them or bring them into the fold? I am crediting Susan for this because Susan is a big fan of this book, Radicals by Jamie Bartlett. In that book, he discusses that there are different radical groups that have existed all across time and space. Radicals can go from evangelicals to
00:26:59
Speaker
transhumanists. So these groups exist and have existed, and oftentimes they exist because they're demonstrating a symptom of where the status quo is failing and it's not working for everyone. So even though they can offer visions of the future, they are often flawed and dystopian, but they still are sort of telling us about what is going on and what we need to be critical of. So kind of getting people to think more critically, I mean,
00:27:30
Speaker
Obviously, I think there's something where we can compare it to today where we do have the anti-vaxxer folks. And so thinking about, well, why do they not feel that vaccines are safe? Where are we failing in society that these people don't feel like this is something that's for them? Yeah.
00:27:54
Speaker
kind of taking it as feedback rather than a challenge to neutralize. Yeah. Yeah. So it ties back to us being like, well, we don't want to take away free will. So, you know, that's sort of a trade off of giving people free will is that you will have these radical groups emerge. Interesting.
00:28:23
Speaker
This team has deep expertise in understanding human interactions with technology, and they've clearly spent a lot of time thinking about how new tools might impact individuals as well as society at large. I was curious to hear their thoughts on current discussions around issues like inclusive AI development, data privacy, and the potential for AI tools to replace human creative labor.
00:28:43
Speaker
A major theme in your world is this focus on the unpredictable kind of trickle down impacts of AI systems across society. And you show how AI tools don't need to be incredibly powerful or develop their own agency or anything in order to be really transformative. You have these really pretty simple innovations that have unexpected impacts on different people.
00:29:01
Speaker
This is especially true for minority groups that aren't being considered during development. So what are your thoughts on the current discussions around diversity and accessibility in AI? I'm sure you have many of them. I'd love to hear. Yeah.
00:29:14
Speaker
Yeah, so many. I think Tashi mentioned this in conversation: currently in AI, diversity often means "how can we get different people to be more like us" rather than "how can we learn from others to be more like them?" So there's a lot of diversity going on
00:29:38
Speaker
for the sake of selling more products, or just having more people on a diverse tech team. But when they're on that diverse tech team, they're not necessarily able to bring their whole, diverse self. They are just having to assimilate to what the tech world is like.
00:30:02
Speaker
I guess it kind of comes back to what we were saying before, that inclusivity is really hard. Accessibility is hard. And just sticking a rainbow flag on something is not really.
00:30:18
Speaker
Yeah, it's not really what we need to do. We need to constantly question ourselves and realize that sometimes there are groups which have opposing views and then trying to manage that conflict is also quite difficult. So I think it's also something that is becoming more of a discussion. I think particularly since the Black Lives Matter movement is thinking about equity and not equality. So really trying to move towards what is equity and understanding how to actually create that.
00:30:47
Speaker
I was recently reading an article about a nonprofit organization that was created by a Black woman and a white woman. And the Black woman eventually was like, I can't be a part of this organization, you're not treating me equitably. And it became, in the articles that I was reading, very much a she-said-she-said kind of thing. And to identify myself, I am a white woman from the US. And so I, in reading that article, was like,
00:31:17
Speaker
Huh, well, what? How would I react? What is the issue here? And I think what I recognize is that, you know, when there's someone who's traditionally not in a position of power and they ask for something to be heard, it needs to be honored sort of the first time they're asked. Because if I'm asking for equitable pay or I'm saying that like this maybe isn't great, even if I'm asking in a nice way and it seems like I'm asking for a discussion,
00:31:46
Speaker
if that gets dismissed, that's what's been allowing and perpetuating these microaggressions that people are constantly facing. So I think, when we're thinking about creating these tools and developing them to ensure better outcomes for more diverse users, it's about thinking how we can listen and create equity.
00:32:07
Speaker
So you're kind of saying like the barriers to being heard and to expressing yourself in a safe way for some people are just much higher. And so we should be kind of turning up the gain on that feedback from them. Yeah, yeah, for sure. Thinking about people who are or who have been historically in power or given positions of power, being more hyper aware of the work that they need to be doing.
00:32:31
Speaker
How would you challenge people who are developing AI tools to think differently about that work to ensure better outcomes for all their users?
00:32:38
Speaker
there are several aspects to this. So firstly, create a developer team that's diverse and doesn't only represent one type of person. Then the second part is also leveraging that and letting developers bring in those diverse views. I mean, obviously you can't always have a developer team that represents every single section of society, but you can also, like
00:33:08
Speaker
Like in our research, there are other ways of engaging with communities, like participatory design: just engage with the end users and the issues they might see with whatever you're designing. Yeah. Well, I think that's kind of a limitation that we faced being from academia. Both me and Vanessa are PhD students, so there's a lot of time and
00:33:38
Speaker
sort of energy that can be put into looking at users and doing user research. I think that, because we oftentimes are creating these technologies from a consumer standpoint, from a capitalist standpoint, there is less time given to user needs and understanding people. So in some ways we need to change that sort of culture that occurs in tech industries.
00:34:07
Speaker
But yeah, it's tough. So if I can try to summarize, I guess I'm hearing to listen more and to be prepared to truly be challenged and take these perspectives into account in their development work. Yeah. Did I listen good? Yeah, you did listen great. Awesome.
00:34:31
Speaker
Yeah. One other thing I would say is understanding the context that you're building something in, and questioning sometimes if AI is the right tool. So I think sometimes it's kind of like that hammer and nail story, you know: when you've got a hammer, everything looks like a nail, and
00:34:50
Speaker
sometimes that's what AI feels like right now. We've got AI and there's a problem, so how can we fix this problem with AI, instead of understanding where the problem comes from? Because the AI might just be a band-aid for a bigger structural
00:35:07
Speaker
issue that kind of lies underneath it. Or sometimes there might be an easier fix that doesn't require AI. So I think treating AI more preciously sometimes, instead of just thinking every problem needs AI as a fix. Yeah. Oh man. I want to be a total millennial and be like, ah, snaps for Vanessa. This is great. Because it is very much like, I think even
00:35:35
Speaker
just technology in general, people are like, ooh, technology will fix everything, technology is the answer. And then, I do a lot of community-based research, community-based participatory design, so I'm working with people, and oftentimes technology is not what they need. They need policy changes. They need better transportation. They need resources that governments need to be able to give them. But, you know, "technology will fix it."
00:36:07
Speaker
That brings me around to the Human group in your world. And again, I really appreciate how you're living what you're saying about taking these fringe groups as feedback. You know, this group is extreme, and sometimes they're terroristic, and that is obviously a huge problem. But nonetheless, their concerns are showing us something real about the world.
00:36:27
Speaker
And one of those concerns that people really take seriously is this conversation that develops around data ownership and consent, because a lot of the basic necessities in your world, like accessing goods and using a passport, are monitored by

Data Privacy Concerns

00:36:39
Speaker
a system. So this can make it really hard for people to operate with their privacy intact. You also talk about the other side of the coin, where people who don't have data in the system, who are attempting to immigrate, for example, are discriminated against for being dataless, because nothing is known about them.
00:36:55
Speaker
So I'm curious to hear more of your thoughts about current narratives around data privacy. Oh, I have feelings about data privacy. I mean, I think part of it is that I am extremely guilty of everything that is bad, where I'm like, oh yeah, whatever, my data, no one's spying on me, who cares, I don't have anything important. But I think
00:37:23
Speaker
It always, at least for me, comes back to my data is not concrete. I can't see it. So it means I don't know. It's hard to know what's at stake until you have a problem.
00:37:36
Speaker
until someone steals your data. Yeah. And my mom had her identity stolen. I mean, I think it's actually pretty common, but it's awful. You don't realize how much is at stake when you have your data stolen, and how much of your privacy is taken away, until that sort of negative thing happens.
00:38:01
Speaker
And I don't think in today's world, we really do a great job of explaining data privacy and what's being done with our data. Interesting.
00:38:10
Speaker
Yeah. And I think it's quite tiring for an end user to try to understand what is going on. I mean, we're in the school of computer science, but we don't know everything. I can't imagine how complicated it must be for a lay user. And that's kind of like the conflict that's in the Human group. On the one hand, you need the services to do things.
00:38:33
Speaker
But on the other hand, you don't have the time to engage with all the privacy fine print to understand if this is really what you want to give your data up for these things. So there's a lot of questions around the accessibility of the language. And I don't think we've got all the answers yet. Not personally or in the computer science world.
00:38:57
Speaker
Yeah. Well, on the sort of positive end of it in your world, there is this trust index that a lot of democracies start using, which I think is loosely based on China's social credit system and is portrayed as this generally good thing that decreases isolation, increases trust. I'm curious how that relates to this broader discussion about data privacy. It's just, it's just like the good end of the spectrum. Yeah. I mean, I think it's sort of one of those things that brings up
00:39:23
Speaker
the question of who owns data, and who you trust with your data. And I think that's something that is still really underexplored. This is something that, as a researcher, I've become more aware of. Well, being a researcher and being originally from the US, it's being aware of where server farms are, and how that actually means someone else controls your data.
00:39:51
Speaker
Oftentimes, I think, especially through universities, we're very strict about where you can store certain files or certain information. And I think especially in the UK, things need to be on servers that are in Europe or in the UK. They can't be on servers in the US because the US government can potentially tap into those servers and see that data.
00:40:15
Speaker
And I think that's something that I wouldn't have known until I got into this because I would have thought, oh yeah, if I use this website and the company's based in the US, it doesn't matter. My data's saved in the cloud. It's saved wherever. And I think that there's definitely this disconnect that is happening and potentially
00:40:38
Speaker
needs to be remedied in some way. It sounds like, in general, you're calling for more transparency and concreteness in these discussions: what is the data, where is the data, and what's at stake if it's taken? It does seem very abstract to me as well when I think about these issues. I think that's definitely my point of view. I don't want to speak necessarily for everyone else on the team.
00:41:03
Speaker
Another theme you explored that I really appreciated was AI's relationship with the arts.

AI Impact on Creativity

00:41:08
Speaker
You have a really nuanced portrayal of this. So things don't go well at first. The creative sector gets devalued and AI starts being criticized for kind of just making cookie cutter predictive media that people will like.
00:41:20
Speaker
But eventually systems get better and they start to make genuine contributions to culture like new types of creative work. And human creative work also starts regaining value because handmade items start coming back into vogue so there's a strong demand for what you call slow creative labor.
00:41:35
Speaker
And AI-facilitated tracking tools also help people get credit for their work when they have bit parts in different forms of media. So overall, at the end of this arc, it feels to me like the creative sector actually does better than a lot of other parts of the labor market in your world. I'm curious if this is meant as kind of a rebuttal to the current sense of pessimism about AI's impacts on creative work. I think for me, yes, I want to say yes. I know that Susan's work has engaged with these sorts of questions over the past year,
00:42:03
Speaker
looking at this program called Creative Informatics and sort of investigating how data driven innovation can broadly support the creative industries and support creative practitioners. And I think Tashi also does that too with the music realm. And I think me, so my background is art and design and I studied traditional arts of Japan. And so I spent two years in Japan
00:42:30
Speaker
working on Noh masks and learning the art of Noh mask creation. And what is a Noh mask? Noh is a Japanese theater form that, basically, if you've heard of Kabuki, it's similar to Kabuki, but with masks.
00:42:47
Speaker
It is not for everyone. I particularly liked the masks for their sort of creepy elements. And they tend to be kind of white-painted? Is that imagining the right thing? Yes, exactly. That's exactly it. But I think part of that, and part of what we can learn from how the Japanese preserve their arts and culture, is something that really influenced our thinking about the future of art and craft.
00:43:16
Speaker
I think even though a lot of handicrafts seem to be dying off and being replaced by automation and mass production, there still really is this love of keeping these crafts alive, and there is this need for governments to support handicrafts.
00:43:37
Speaker
Yeah, interesting. So that kind of cultural value gives it more lasting power than manual labor; there isn't so much of that for hand-washing your dishes and your clothes. Yeah, interesting. And even to think of the arts and crafts movement that occurred, there has been over time, through art, sort of this
00:43:59
Speaker
supporting and need to build up and support the handmade and hand-ground. So there's always this sort of dialogue and conversation that goes on in art. Very cool.
00:44:22
Speaker
The process of world building has great potential to make a positive future feel more attainable. This can be incredibly powerful, whether you're a creative person looking to produce rich works of fiction or have a more technical focus and are looking to reach policymakers or the public. I asked Elaine and Vanessa what kind of impact they hoped their work would have on the world.

Representation in Media

00:44:41
Speaker
Which aspects of your world would you most like to see popular media take on?
00:44:46
Speaker
Yeah, I think what we were continuously saying throughout today was kind of the diversity part. So not focusing on the tech bro story, or on what the rich people will do once they can fly from one building to the other in their autonomous drones, but also thinking about the stories that seem to be
00:45:14
Speaker
smaller, but actually maybe sometimes more common, like the day-to-day in all the corners of the world. Yeah. The Indonesian grandpa. Yeah. Yeah. The older people, the younger people, and everyone in between. Yeah. Well, and I think this is something that Nicol pointed out, is thinking about, you know,
00:45:37
Speaker
maybe the more cooperative opportunities that can arise with new tech. So instead of thinking about technology as competition. And I think this is very relevant to how we're looking at AI right now: this competition over who's going to have the strongest AI first, because it's almost become an arms race. But thinking of, well, how can we collaborate and sort of build better technology through cooperation?
00:46:03
Speaker
Yeah, and I guess kind of seeing AI as like the side character of what we're creating and as the character that is interacting with humans. Yeah, the human-centered. Yeah. I like that. What kinds of expertise would you be most interested in having people bring to these discussions?
00:46:25
Speaker
None of us are economists. So the feasibility of our world is something we would definitely need some economists to look at, especially with UBI, universal basic income. That's definitely something where it's like,
00:46:44
Speaker
well, we read some articles and they were really good, and people have said it's good. But there are also a lot of political and economic interdependencies that occur. I mean, any time we have a depression in any economy, there definitely are global impacts from that. So I think we need more people thinking about
00:47:11
Speaker
that aspect also of technology. Yeah. And what about in discussions about like how to develop AI systems, what sorts of voices or backgrounds would you want to kind of bolster in those areas? I guess basically the non-tech obsessed people, the people who are used and affected more than the people who just want to create and want to sell it.
00:47:39
Speaker
Yeah, it's kind of a catch-22. It's like that thing from Douglas Adams: the only person who should be allowed to be a leader is the one who doesn't want to be. You need people who don't really care about AI development. Yeah, it is difficult, because, I guess that's a difficulty in user research as well, that you're
00:47:56
Speaker
taking, in our accessibility research for example, the time of people who have other things to do and other things to worry about. And so, yeah, it definitely needs to be done in a way that benefits them as well.
00:48:14
Speaker
Yeah. I mean, that's tough. I think you're raising really interesting and more fundamental problems than I'm asking about. I'm like, which voices do we need added into this? And you're kind of saying, well, there are a lot of structural issues that make it hard for those voices to be found and included. So maybe the feedback is more like: we should be using some of these design research and audience research approaches to actually find ways to co-develop technologies that work for the people we're trying to hear from.

Fair Compensation in Research

00:48:42
Speaker
Yeah. But also I think.
00:48:44
Speaker
a point that Vanessa is making is compensation. So I think we often go, and I know academics are guilty of this, we often go into these populations and are like, tell us about you, tell us about what you want and what you need. And then we go, and here's a five pound voucher to some random place that you've never been to. And it's like, that's not helpful. So I think we need to get these voices heard, but we also need to recognize that there is
00:49:13
Speaker
this sort of equity that needs to be created, where it's like, you know, we're compensating you for your time, we're compensating you for your opinions, all of these sorts of things. And not just monetarily; also thinking about other ways that we can compensate. Yeah. And some of the hidden costs, probably, of sharing and taking the time. What do you hope that your world leaves people thinking about, long after they've read it?

Encouraging Critical Thinking for the Future

00:49:39
Speaker
I guess the hope is that people will think more critically about
00:49:43
Speaker
everything: about the systems, about the technology, but then also recognizing that, you know, everyone needs to be a part of this. So we don't want just the people that are hyper tech-focused and working on technology involved. It's very much like voting: you think that you're one person, your vote doesn't matter. But in actuality, if you and everyone else thinks like that, then it does
00:50:13
Speaker
affect things. So it's very much getting everyone involved and making sure that everyone is thinking about this. Yeah. And if they read our thing and they didn't expect to hear the stories that we told, about the places we focused on, then maybe to question: when you're thinking about the future and what the future looks like, whose voice is narrating this story?
00:50:43
Speaker
who is telling this story and who is included in that future. And I can see that even some people who are imagining a futuristic scene wouldn't even think about what their own story in this whole thing is. But I think, yeah, that's what we'd like people to think about. This has been a really great conversation. I really appreciate all the perspectives and insights that you guys have shared. Thanks so much for joining us. Yeah, thank you for having us. Thank you. It was fun talking.
00:51:21
Speaker
If this podcast has got you thinking about the future, you can find out more about this world and explore the ideas contained in the other worlds at www.worldbuild.ai. We want to hear your thoughts. Are these worlds you'd want to live in?
00:51:35
Speaker
If you've enjoyed this episode and would like to help more people discover and discuss these ideas, you can give us a rating or leave a comment wherever you're listening to this podcast. We read all the comments and appreciate every rating. This podcast is produced and edited by Worldview Studio and the Future of Life Institute. FLI is a nonprofit that works to reduce large-scale risks from transformative technologies and promote the development and use of these technologies to benefit all life on Earth.
00:51:57
Speaker
We run educational outreach and grants programs and advocate for better policymaking in the United Nations, US government, and European Union institutions. If you're a storyteller working on films or other creative projects about the future, we can also help you understand the science and storytelling potential of transformative technologies.
00:52:14
Speaker
If you'd like to get in touch with us or any of the teams featured on the podcast to collaborate, you can email worldbuild at futureoflife.org. A reminder, this podcast explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we all want. The ideas we discuss here are not to be taken as FLI positions. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
00:52:42
Speaker
Thanks for listening to Imagine a World. Stay tuned to explore more positive futures.