
Imagine A World: What if some people could live forever?

Future of Life Institute Podcast
If you could extend your life, would you? How might life extension technologies create new social and political divides? How can the world unite to solve the great problems of our time, like AI risk? What if AI creators could agree on an inspection process to expose AI dangers before they're unleashed?

Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

In the fifth episode of Imagine A World, we explore the fictional worldbuild titled 'To Light'. Our host Guillaume Riesen speaks to Mako Yass, the first-place winner of the FLI Worldbuilding Contest we ran last year. Mako lives in Auckland, New Zealand. He describes himself as a 'stray philosopher-designer', and has a background in computer programming and analytic philosophy.

Mako's world is particularly imaginative, with richly interwoven narrative threads and high-concept sci-fi inventions. By 2045, his world has been deeply transformed. There's an AI-designed miracle pill that greatly extends lifespan and eradicates most human diseases. Sachets of this life-saving medicine are distributed freely by dove-shaped drones. There's a kind of mind uploading which lets anyone become whatever they wish, live indefinitely and gain augmented intelligence. The distribution of wealth is almost perfectly even, with every human assigned a share of all resources. Some people move into space, building massive structures around the sun where they practice esoteric arts in pursuit of a more perfect peace.

While this peaceful, flourishing end state is deeply optimistic, Mako is also very conscious of the challenges facing humanity along the way. He sees a strong need for global collaboration and investment to avoid catastrophe as humanity develops more and more powerful technologies. He's particularly concerned with the risks presented by artificial intelligence systems as they surpass us. An AI system that is more capable than a human at all tasks, not just playing chess or driving a car, is what we'd call an Artificial General Intelligence, abbreviated 'AGI'. Mako proposes that we could build safe AIs through radical transparency. He imagines tests that could reveal the true intentions and expectations of AI systems before they are released into the world.

Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

Explore this worldbuild: https://worldbuild.ai/to-light

The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email worldbuild@futureoflife.org.
You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

Media and concepts referenced in the episode:
https://en.wikipedia.org/wiki/Terra_Ignota
https://en.wikipedia.org/wiki/The_Transparent_Society
https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer
https://en.wikipedia.org/wiki/The_Elephant_in_the_Brain
https://en.wikipedia.org/wiki/The_Matrix
https://aboutmako.makopool.com/
Transcript

Introduction to 'Imagine a World'

00:00:00
Speaker
on this episode of Imagine a World. I think world building did turn out to be really useful. Thinking creatively about some kind of ambitious things we could do and depicting them in a way that makes them visceral and makes them feel real and possible. That is really useful for sort of bringing people's thinking up and letting them imagine and letting them believe they can build something big and complicated and new. The question is, what should we build?

World-Building Contest and 'To Light'

00:00:34
Speaker
Welcome to Imagine a World, a mini-series from the Future of Life Institute. This podcast is based on a contest we ran to gather ideas from around the world about what a more positive future might look like in 2045. We hope the diverse ideas you're about to hear will spark discussions, and maybe even collaborations. But you should know that the ideas in this podcast are not to be taken as FLI-endorsed positions. And now, over to our host, Guillaume Riesen.
00:01:15
Speaker
Welcome to the Imagine a World podcast by the Future of Life Institute. I'm your host, Guillaume Riesen. In this episode, we'll be speaking with Mako Yass, whose world, which is called To Light, was the first place winner of FLI's worldbuilding contest.
00:01:29
Speaker
What stood out about Mako's world was its imaginative storytelling, with richly interwoven narrative threads and high-concept sci-fi inventions. It undergoes one of the most complete transformations we've seen in our entries. There are immortality drugs, augmentations that merge humans with machines and colonies orbiting the sun, all interlaced with care and attention towards AI safety.
00:01:51
Speaker
These fantastical transformations aren't without their growing pains, and not everyone in this world embraces them at first. Some resist even the life-saving miracle pills and choose to live in the lower-tech communities closer to nature. These holdouts are not pressured to accept enhancement technologies, but any who change their minds are welcomed by those who have.

Mako Yass: Background and Philosophy

00:02:10
Speaker
As Mako puts it, everyone who would like to is going to make it.
00:02:17
Speaker
Our guest today is the sole author of this world. He lives in Auckland, New Zealand and calls himself a stray philosopher-designer. One of Mako's passions is fostering media that moves us to protect humanity's future, which he explores by designing civically robust digital communities and positive-sum games about brokering peace. His creativity and pragmatic optimism made his entry a powerful fit for the contest, and I'm pleased to have him here with me to discuss his work today.
00:02:46
Speaker
Hi, Mako. It's great to have you here with us. First of all, can you say a little bit about what you mean when you call yourself a stray philosopher-designer? That's tough. I've never thought about this before. Well, I'm curious if you could start out with what your educational background is and kind of what you've been up to in the years since. Yeah. So I think the first subject I really connected with academically was
00:03:15
Speaker
programming. Explaining things to computers was how I experienced it, and that's what was interesting about it to me. Because computers don't really know anything. So if you can explain something to a computer, then you really must understand it. And I think that turns out to be a really good basis for analytic philosophy. So
00:03:38
Speaker
I went into a degree about logic and computation and linguistics and a bit of computer science. Gotcha.
00:03:49
Speaker
What attracted you to try to enter the competition?

Challenges and Optimism in World-Building

00:03:52
Speaker
So I think it was mainly seeing other people talk about how hard it was or how impossible it was to meet the required specs. And they had a really good point, but there were holes in it. And I realized I could probably pass through those holes and make something that did the impossible. And that's always very exciting. When you find a concept through that route, you know, it's going to end up producing something good.
00:04:20
Speaker
I mean like that it's so highly constrained. Yeah. Yeah. Like it will be surprising to people the thing that you find. I see. Because yeah, they clearly had this very strong expectation that there wouldn't be a thing that could do that. Yeah. There it is. I like that. That's the challenge that motivated you. Yeah. A really good problem formulation is like most of the way to getting to a really useful solution.
00:04:45
Speaker
Yeah. It sounds like part of the match between the problem that was posed and your skills was also kind of this positive bent to it. Like I noticed in your work, you're really interested in, you know, positive-sum games and trying to get societies of people to interact in really effective ways. You have this kind of optimistic aim to a lot of your work and that's really at the heart of the FLI project. Did that kind of resonate with you?
00:05:11
Speaker
I think one of the things you learn from programming is you learn a joy of making things. So of course, the things you want to make are usually good. So it's kind of being asked to make a solution to this enormous problem that I've been aware of for a bit. And yeah, of course, I'm going to be interested in that. Right. Yes, I'm curious. What were your biggest sources of inspiration when creating this world?
00:05:38
Speaker
I probably should mention Terra Ignota, which is a sort of semi-utopian fiction series written by a historian. It's one of the few series that are trying to depict a positive future. Interesting. I think that prepared me pretty well to look at a future where everything goes well and still sort of find where the tensions are.
00:06:07
Speaker
Where the cruxes were and where it could have gone wrong, and focus on those. And that's where the story is. So that was useful. And there are a few things from, again, like it's ambiguous as to whether this was an inspiration, but there are a few things from Terra Ignota that sort of made it in. And one was the doves. That was basically in Terra Ignota: there were these drones that are in the shapes of doves. Yeah.
00:06:34
Speaker
Yeah, carrying around supplies of peace. Interesting. But when I decided to put that into To Light, it felt like there was no alternative. I didn't think I was taking it from somewhere else. It's just like, what should these drones look like? Well, doves. They're the most harmless animal. Yeah, it just seemed like there was only one way to do it. Yeah, that's funny.
00:07:06
Speaker
By 2045, Mako's world has undergone a drastic transformation. There is an AI-designed miracle pill called Bright Moss that greatly extends lifespan and eradicates most human diseases. Sachets of this life-saving medicine are distributed by dove-shaped drones, as you just mentioned.
00:07:24
Speaker
There's also a kind of mind uploading, which lets anyone become whatever they wish, live indefinitely and gain augmented intelligence.

Technological Changes and Societal Impacts

00:07:30
Speaker
The distribution of wealth is almost perfectly even, with every human assigned a share of all resources. And some people even move into space, building massive structures around the sun, where they practice esoteric arts in pursuit of a more perfect peace. Mako put a lot of thought into how these technologies are received by different elements of society, and how this impacts their relationships over time.
00:07:52
Speaker
I wanted to take a few minutes to get clear on how each of these technological facets of his world actually looked. Also, I thought it'd be nice to talk about some of the things that you imagined for our future that are in this positive direction. You have a lot of very fantastical kind of innovations and transformative events that happen to humanity, and I thought maybe we could explore some of them.
00:08:16
Speaker
So the first one is Bright Moss, which is this kind of cure-all drug that gets rid of a lot of human diseases and can extend lifespan. I heard you say that it's partly named to allude to the fact that it's a living substance that stays in you. So are you imagining it as some kind of symbiotic living drug?
00:08:35
Speaker
Yeah, it's often very useful for a name to address the most hostile stories that are going to be told about the things. So I think in order to do the things that a general life extension drug will need to do, it's going to be very sophisticated. It's going to basically need to be alive in itself. I mean, in reality, life extension treatments are probably going to involve more than one medicine.
00:09:02
Speaker
But if you have to focus it all down to a single medicine, then yeah, that's going to be difficult. So it's going to end up having this strange, alien quality, and people are going to analyze it and find out about that, and a lot of people are going to be scared by that, if you need to go that route. So yeah, I tried to find a name that would sort of address that and make that less scary for people, so that fewer people would refuse it.
00:09:28
Speaker
Yeah, we should get into the fact that the biggest problem that exists in this world after solving the alignment problem, this sort of comes after that, is that people can still choose to not take the treatments and to keep aging and to die. And that's kind of a tragedy. Every single person who's lost is a bit of a tragedy. The entire future goes on without them.
00:09:55
Speaker
So most of what happens after that point is focusing on sort of reaching those people and convincing them to stay on board with the future.
00:10:05
Speaker
I like how you have in the name, there's this moss piece, which is, you know, mosses, as you mentioned, are often kind of medicinal in human history. And obviously, Bright Moss just feels friendly. And on the other side, you have the people you're talking about who are a bit hesitant to take these kinds of drugs or change themselves in a way they think is not possible to undo. And they call it Apophis, which I think is a reference to some kind of flesh-eating bacteria. Is that right?
00:10:31
Speaker
Yeah, you have to accept that some people will make their own names for things. It's interesting that your world has so thoughtfully built in people who don't see it as a utopia, as some kind of positive thing to aspire to. And so while you've taken on this challenge of writing a positive future that we can all aspire to on some level, there are still those within who don't see it as a positive future.
00:10:58
Speaker
What do you think about what should be done with them? Should they all be convinced, as you're saying, or should there be space made for people who don't want to live in that way? I think for it to be a positive future, there has to be space. But at the same time, of course, we don't want those people to die. We don't want those people to
00:11:16
Speaker
make this terrible choice. We want to reach them if we can. We want to convince them. So that ends up being the place that we focus because that's where the problems in the world still exist.
00:11:31
Speaker
Yeah, I'm not totally sure about the writing technique of focusing on the drama, focusing on the conflict. Are you saying that to kind of put this drama and conflict in its place as a smaller feature of a larger world that's mostly aligned? Right, it was small, yeah. Yeah, I never really communicated that. Probably very, very few people are living this way.

Transhumanist Ideas and Resource Distribution

00:11:53
Speaker
Bright Moss not only extends lifespan, but also people's health span, the amount of healthy years that they have. I think that would make it more widely accepted, but I'd still suspect that more people than you might think would be hesitant to extend their lives. There are a lot of factors that could play into that hesitancy, like mental health concerns, religious or philosophical considerations. Do you personally feel that living longer is sort of just a general good, or do you have any hesitations about that?
00:12:23
Speaker
There are definitely causes for concern, but yeah, sure, on net, it's good. I mean, the longer I live, the more I get the sense that there's a richness to life, or you get used to it. You get to like it more as you go further in, and you get better at it. You form more connections, you get better at living. Interesting.
00:12:46
Speaker
Another transformative technology in your world is tempering, which makes people live indefinitely, grants them these incredible mental powers. But you're a little vaguer about what this process actually involves. So as people become tempered, what happens to them? Like, do you have some specific process in mind? Or is this kind of a stand-in for some kind of transhumanist or posthuman transformation?
00:13:10
Speaker
Yeah, I have no concrete ideas, because I don't think that's something we could know from back here. It's basically what we currently call mind uploading, which is essentially supposed to refer to taking a human mind, all their behaviors, their personality, their memories, and transitioning it to a sort of stronger and more flexible form.
00:13:37
Speaker
And I think calling that uploading is a bit confused because uploading is when you send something through a wire into a computer and the things we call computers today couldn't really run a human brain. Maybe they could run some approximate form of it. But I think when we do this for real, hopefully we will. We'll probably find another name for it.
00:14:01
Speaker
Another thing that features in your world is that by 2045, you have an almost totally even distribution of wealth, which is quite an accomplishment. It seems that this is partly because of this distribution of what you call cosmic endowment shares. So the allocation of some portion of kind of all observable or accessible resources to each human. Can you say more about how that works? That is a tricky topic because we don't know and
00:14:30
Speaker
We need to figure it out very soon. So I won't claim to know exactly how that's going to work, but roughly speaking, it's supposed to give an equal portion of the future resources of humanity to every human.
00:14:45
Speaker
A lot of the worlds we received featured some form of universal basic income, but yours has really this more hard reset where everyone's suddenly put on equal footing. Can you say something about how you go about ensuring that that kind of wealth equality lasts? One thing I can say is I think there's hope that if you
00:15:07
Speaker
start over the wealth distribution from scratch and then give everyone access to much better sort of financial advisors, essentially, just the best possible financial advisors. Make everyone into a financial advisor. You're going to end up with a much more fair world, probably indefinitely.

AI Alignment and Global Collaboration

00:15:26
Speaker
It'll probably stay fair. Yeah. So you're distributing not only wealth, but also knowledge to manage the wealth. Yeah.
00:15:35
Speaker
So it's quite possible that that will be enough. You won't really need any ongoing redistribution. But there are a lot of philosophical reasons to think that we might need ongoing redistribution. It's still unclear. So this is something we're working on. Interesting.
00:16:02
Speaker
While the peaceful flourishing end state of Mako's world is deeply optimistic, he's also very conscious of the challenges facing humanity along the way. He sees a strong need for global collaboration and investment to avoid catastrophe as humanity develops more and more powerful technologies. He's particularly concerned with the risks presented by artificial intelligence systems as they begin to surpass us.
00:16:25
Speaker
An AI system that's more capable than a human at all tasks, not just playing chess or driving a car, is what we call an artificial general intelligence, abbreviated AGI. Ensuring that such a system actually behaves as we intend, and faithfully pursues goals that are beneficial rather than harmful to humans, is an enormous challenge that we don't yet know how to solve.
00:16:46
Speaker
Mako proposes that we could build safe AGIs through radical transparency. He imagines tests that could reveal the true intentions and expectations of AI systems before they're released into the world. Picture a research team reporting that a new AI system expects humans will go extinct within 50 years of its public release.
00:17:05
Speaker
Mako hopes that the jolt of fear from such a finding could catalyze a truly global alliance for developing safe AI. He imagines an international network of the best human thinkers, working together in highly secured virtual spaces to create a safe AGI. And he imagines us succeeding, so that beyond these dangerous years of development, a new chapter of human history begins to unfold. Let's dig deeper into how Mako sees this journey playing out.
00:17:31
Speaker
What are some other major challenges that you chose to explore when you were working on your world? So I guess the core problem that I'm taking on is... I don't know if this position has a name, because the literature where you can find it most easily just sort of assumes that it's the truth, and so they don't really name it. Other people disagree. I guess I will name it the naturalness of agency.
00:18:01
Speaker
The claim that agency or rationality or power-seeking behavior is a very natural sort of mathematical structure or natural tendency that machine learning systems are going to end up taking on pretty abruptly at some point. So it's the assumption that if we keep training more and more powerful machine learning systems, one day we're going to suddenly arrive at something that is sort of
00:18:31
Speaker
making its own decisions and setting its own strategies and going out into the world and acting on its own. And that's a very dangerous situation, because you can no longer really turn it off. And if you messed it up... the issue is, we tend to mess everything up the first time we try it, and in this case, you don't get to try again. It's out. It's alive. It's standing on its own feet and it's stronger than you.
00:19:01
Speaker
So I wanted to take on that problem. If that is what machine learning or mathematics or whatever is like, if it tends to, as we call it, foom.
00:19:17
Speaker
What is that word? Yeah, I don't think it's an acronym. Just like the sound of a blowtorch being lit. I think it was taken from like a panel from a comic or something. It's an onomatopoeia. Yeah. Yeah. Okay. Yeah. One day you'll train a machine learning system and it'll make that sound. And at that point you need to get out of the room immediately.
00:19:45
Speaker
I have seen this assumption in a lot of approaches to thinking about how AI will develop. I haven't really fully understood why all of these things are assumed to come together. I get the idea of a system becoming so powerful or so complex that it develops its own agency and starts seeking its own goals.
00:20:05
Speaker
And also the idea that it could be smarter than us and also the idea that it could become more powerful than us. But couldn't these happen separately? Like what if a system became sort of rational but then was still stuck in the chassis that it was in? That would be a good outcome, because then we could use it to make something that we can guarantee is human-aligned, and then we can release that out into the world.
00:20:29
Speaker
Yeah, I mean, if we can make a system that's way smarter than humans, but also not power seeking, then we can use that to do it properly to make sure we've definitely gotten it right.
00:20:44
Speaker
I think we should probably take a step back and talk a little bit about the alignment problem and what that means because we're kind of using words associated with it without having really said what it is. So at the broadest level, my understanding is that it's just the idea that we need to get these systems to stay in line with our goals and what we actually intend for them to carry out in the world.
00:21:06
Speaker
So we don't want them to behave like an evil genie and try to twist our intentions and do something unexpected and damaging when we've put in something that we think is going to be benign. Right. Which is what happens if you try to write down exactly what you want them to do and tell them to do that: you're probably going to get it wrong and it's going to come out twisted. Yeah. So instead, what we want to do is set them up so that they learn on their own what it is that humans value. Yeah. So I would describe the alignment problem as the problem of
00:21:37
Speaker
getting powerful machine learning systems to want for the world the same things that we want for the world.
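To make that "twisted wish" failure mode concrete, here is a minimal, purely illustrative Python sketch (our own toy example, not anything from Mako's world or a real training setup): an optimizer picks whichever plan scores best on an easy-to-measure proxy objective, and under enough optimization pressure that pick tends to score much worse on the true objective the proxy was standing in for.

```python
import random

random.seed(0)

# True objective: how much a plan actually helps (work done * quality).
# Proxy objective: the easy-to-measure stand-in we wrote down, which
# counts tasks marked done and ignores quality entirely.

def true_value(plan):
    return plan["tasks_done"] * plan["quality"]

def proxy_value(plan):
    return plan["tasks_done"]

# A pile of candidate "plans" the system could choose between.
candidates = [
    {"tasks_done": random.randint(1, 10), "quality": random.random()}
    for _ in range(1000)
]

best_by_proxy = max(candidates, key=proxy_value)
best_by_truth = max(candidates, key=true_value)

print("picked by proxy:", best_by_proxy, "true value:", round(true_value(best_by_proxy), 2))
print("picked by truth:", best_by_truth, "true value:", round(true_value(best_by_truth), 2))
# Typically the proxy-chosen plan scores far lower on the true objective:
# optimizing a mis-specified goal hard enough "comes out twisted".
```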
00:21:46
Speaker
Yeah, so that's kind of one end of the alignment problem in my understanding, which is: given a goal, how do you get it to match that goal? Or how do you ensure that this powerful agent that you've made is going to actually be pursuing that same goal? But there's sort of the other end of it, which is maybe more philosophical, which is what goal should we be feeding into this device?
00:22:07
Speaker
And you've been saying kind of what humanity wants, but how do we determine what goal will sort of collectively fit all of our wills? And maybe even, I would say, again, since I'm thinking about animals, the will of other creatures in the world, maybe. Hopefully we won't have to be very specific about it. And we would just be able to say, okay, you see these creatures out here, these humans, these animals. Figure out what they want. You figure it out and then do that.
00:22:37
Speaker
So sort of acting in a sort of more abstract way, defining it in a more abstract way. Interesting. What happens if those wants conflict? Yeah, that's a really interesting question. That's something I'm studying a lot in part for game design reasons. It's sort of the question of cooperative bargaining theory, which is the mathematical study of how to reach agreements when people want different things.
00:23:06
Speaker
And I think it's really beautiful stuff and more people should know about it. So I've been trying to make games about it. And also it turns out that these games are going to be really rich. Like, once you remove the zero-sum quality that most board games have, and you allow people to make deals, make agreements and find mutually beneficial outcomes, I think the games often get a lot richer.
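For listeners who want a concrete handle on what cooperative bargaining theory computes, here is a small illustrative Python sketch (our own made-up numbers, not from Mako's work) of the Nash bargaining solution: among the feasible deals, pick the one that maximizes the product of each party's gain over their no-deal payoff.

```python
# Two parties split 10 units of a resource. Each also has a payoff they
# would get if no deal is reached (the "disagreement point").
DISAGREEMENT = (1.0, 2.0)  # (party A, party B)

def utilities(split_a):
    """Payoffs for a deal where A gets split_a units and B gets the rest."""
    split_b = 10 - split_a
    return (split_a * 1.0, split_b * 2.0)  # B happens to value the resource more

def nash_product(u, d=DISAGREEMENT):
    """Nash bargaining objective: product of each party's gain over no-deal."""
    gain_a, gain_b = u[0] - d[0], u[1] - d[1]
    if gain_a < 0 or gain_b < 0:
        return float("-inf")  # deals worse than walking away are rejected
    return gain_a * gain_b

deals = [utilities(a) for a in range(11)]
best = max(deals, key=nash_product)
print("Nash bargaining outcome (utility A, utility B):", best)  # (5.0, 10.0)
```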
00:23:32
Speaker
Yeah. So for those who don't know, a zero-sum game is one where anytime someone gains something, someone else is losing it. So the sum is zero, right? Like you can't just create value. You have to kind of steal it from the common pool. Basically in the end, in these types of games, typically there can only be one winner. And if you win, that means everyone else loses. So in a non-zero-sum game, you can have these situations where everyone actually benefits by collaborating.
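As a quick illustration of that distinction, here is a tiny Python sketch (our own example, not from the episode) contrasting a zero-sum game, where the payoffs in every outcome cancel out, with a positive-sum game, where cooperating makes both players better off.

```python
# Zero-sum game (matching pennies): in every outcome the payoffs sum to
# zero, so one player's gain is exactly the other's loss.
matching_pennies = {
    ("heads", "heads"): (+1, -1),
    ("heads", "tails"): (-1, +1),
    ("tails", "heads"): (-1, +1),
    ("tails", "tails"): (+1, -1),
}

# Positive-sum game (a simple stag hunt): if both players cooperate on the
# stag, both do better than either could manage alone.
stag_hunt = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

def is_zero_sum(game):
    """True if the payoffs cancel out in every possible outcome."""
    return all(sum(payoffs) == 0 for payoffs in game.values())

print("matching pennies zero-sum?", is_zero_sum(matching_pennies))  # True
print("stag hunt zero-sum?", is_zero_sum(stag_hunt))                # False
```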
00:24:00
Speaker
you might end up with everyone winning or everyone losing. And these changes move us away from the more typical, self-interested, competitive models of zero-sum games. I saw you're calling these new non-zero-sum or positive-sum games peace wagers. Is that a term you're using to describe all games that aren't zero-sum in that way?
00:24:19
Speaker
I'm calling them peace wagers. Yeah, I've lost my precise definition of what they are, because it turns out that semi-cooperative games already exist, and I think they're a subset of that, but I'm not sure exactly which. Yeah, I look forward to seeing more about that. I saw there's some on your website, makopool.com. Is that right?
00:24:40
Speaker
There's not a lot of information there yet. There's a little description of what you're up to. Yeah, yeah. That will be enough for people to be able to tell whether they want to get on board and help out and make some of these.

Revealing Desires and Cultural Reconciliation

00:24:53
Speaker
Interesting.
00:24:55
Speaker
Another tool in your world's arsenal for building consensus is the ability to identify people's deepest wishes. Through some feat of technology, people in your world can basically print out a clear description of their core desires. Do you have anything to say about how that could impact people's relationships or society as a whole? Like if we really had a way to truly know our own or each other's deepest desires and motivations, what would that do?
00:25:22
Speaker
That could go a lot of interesting ways, because really, I think we have no idea what we actually want. I don't think humans evolved to be good at telling people what they really want. And there's a book about that, written by... I sometimes describe him as a xenoeconomist, because he's sort of good at
00:25:50
Speaker
standing outside of our society as a bit of an alien. Also, if you wanted someone to analyze economies constructed by aliens, he would be the only person you could go to. Robin Hanson. And he wrote this book, The Elephant in the Brain, which is sort of a rigorous approach to
00:26:14
Speaker
the Jungian concept of the shadow, the idea that we have these hidden desires, and they sort of know what they're doing and we don't know what they're doing. We're just an outward representative who says what people want to hear. And he sort of studies the extent to which that's true. And I would recommend that book. So I think the answer is going to turn out to surprise us. It may turn out to be hedonism.
00:26:41
Speaker
None of my homies are into hedonism. That's a great sentence. But yeah, it might turn out that actually we are. We just pretend not to be because we want to seem sophisticated and agentic. So you could turn on this device and it's like, my wife, I really want to know your deepest desire. And she's like, cake. And you're like, OK. It would be really unfortunate. So much for the mystery of our love.
00:27:10
Speaker
Yeah, all of your relationships were just an indirection on your desire for cake, just an advanced strategy for getting more cake. Why did you marry a baker?
00:27:25
Speaker
And this brings me back to my other really higher level question about whose morality is being applied here. Like what is a sinister thought? Like what if to one society or culture, a certain thought is wonderful and freeing, and to another one it's sinister and dangerous? So this would come down to cooperative bargaining. How do you reconcile conflicting desires of different cultures?
00:27:51
Speaker
And yeah, really, that's a realm of theory that we need to develop a lot more as quickly as we possibly can. So it definitely makes sense to talk about an individual's desire. And the question is, when we have more than one desire, how do we reconcile those into a sort of compromise desire that pursues them both as well as possible? Yeah, that's a question we're deeply interested in. There's work to be done there.
00:28:18
Speaker
Well, all that gets to this higher level question of how we can collaborate to decide on what we want AI systems to do for us. But there's also this more technical end of alignment, which is just about getting a system to pursue a goal reliably. So like once you've decided what you want the system to do, how can you be sure it'll actually do that?
00:28:39
Speaker
One thing that you came up with to that end is this idea of somehow peering into these systems and figuring out what they truly expect or plan to do. So in particular, you wanted to look for indicators that they might do something really dangerous or damaging for humanity. And you call the safety testing process demancatting, which I think is short for demonstrating a catastrophic trajectory. Can you tell us about how that could work?
00:29:06
Speaker
So I think in practice that's going to end up being the same project as inspectability or explainability, which is you have this very complex and quite opaque machine learning system.
00:29:23
Speaker
And you want to figure out what's going on inside it. What does it really do, and what will it do when we release it out into the world, out of testing environments and out of the environments we've seen it in before? And that's a big project. It will probably
00:29:43
Speaker
sort of add up to demancatting. Yeah, so I'll define demancatting. Basically, I propose preparing in advance to get really, really good at analyzing a thing's knowledge format, or finding where the knowledge is and decoding the knowledge representation format. I think if we can do that, then we have a sort of way of asking it questions that it doesn't know it's being asked.
00:30:06
Speaker
So that it sort of has to answer them truthfully, in the same way that it would answer questions it's asking of itself. To me, this basically sounds like a form of mind reading. If we take it to be a real agent, you're kind of trying to dig into its subconscious, if you will, to figure out what it truly intends or believes. Yeah. Yeah. So I guess I would translate digging into the subconscious as like,
00:30:33
Speaker
bypassing the part that's designed to communicate with external agents, which might turn out to be very difficult if people keep trying to build AI on large language models, which are sort of, they know how to lie before they know how to care what the truth is, in a sense. Interesting.
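Mako doesn't commit to a specific mechanism here, but one existing interpretability technique in this spirit is a "probing classifier": train a simple model directly on a network's internal activations to read off a property it represents, bypassing its normal output channel. Below is a minimal, self-contained Python sketch on synthetic activations (our own illustration under that assumption, not a method described in the episode).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are hidden-layer activations recorded while a model
# processed 500 inputs, plus a 0/1 label for some latent property we care
# about. The synthetic data encodes the property along one direction,
# which is exactly the situation a linear probe can detect.
n, d = 500, 64
latent_property = rng.integers(0, 2, size=n)
direction = rng.normal(size=d)
activations = rng.normal(size=(n, d)) + np.outer(latent_property, direction)

# Train a linear probe on half the examples, test on the other half.
probe = LogisticRegression(max_iter=1000)
probe.fit(activations[:250], latent_property[:250])
accuracy = probe.score(activations[250:], latent_property[250:])

# High held-out accuracy suggests the property is linearly readable from
# the activations: an "answer" extracted without ever asking the model a
# question through its normal input/output interface.
print(f"probe accuracy on held-out activations: {accuracy:.2f}")
```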
00:30:59
Speaker
So what would incentivize researchers to try to find these problems in their AI systems? I mean, it's hard work to try to figure out if there are any sinister thoughts lurking. How can we try to encourage them to do that and see finding them as a good thing? One thing worth noting is that they already have a very large incentive in that none of them want to be dehabitated by a misaligned AI themselves.
00:31:28
Speaker
In a way, no one has an incentive to take risks on this. But I realize sometimes that doesn't really translate into the decisions that an organization makes. So yeah, I'm not sure. Vast amounts of public funding for closely integrated alignment programs in every AI research organization, that would be nice. Yeah.
00:31:52
Speaker
So you describe something like that coming about in your world. You call it AWSAI, A-W-S-A-I, which stands for the Allied World for Strong Artificial Intelligence. And we should say, for those who aren't familiar with the term, strong artificial intelligence is a system that is more powerful, more successful at all tasks than a human, not just at playing chess or driving a car. It's kind of universally better than us.
00:32:18
Speaker
This is a global alliance to try to pursue the successful creation of this kind of thing. It would be very good to have that. I guess most of the story is reckoning with the fact that building that is very difficult and we probably couldn't do it yet.
00:32:38
Speaker
and asking, okay, what would need to happen in order for the world to realize that this is a very dire situation that we're in and get to the point where we could build a truly global allied world for alignment. Yeah, creating that truly global alliance seemed like a really pivotal success for your world.

Hopeful AI Futures and Global Interaction

00:33:01
Speaker
Even with that strong fear motivator, I think it would be very difficult today to coordinate that scale of collaboration.
00:33:08
Speaker
In your submission, you imagined multiple elements that facilitate this. And one factor I really appreciated was the impact of VR, which is also something I'm personally interested in. And in your world, it really seems to bring people together. As you say, if we were a better species, text would be enough. But there's really something transformative about inhabiting the same space as somebody.
00:33:30
Speaker
And so VR becomes really massive in your world. You say that in 2037, the average user spends almost 90% of their waking hours in VR. We should say this is with kind of regular, eyeglasses-sized VR headsets. So it's not like you have a big Oculus on your head all day long.
00:33:45
Speaker
Yeah, it sounds very extreme, but it's not. It's just wearing glasses, basically. Yeah, sure. But it's still a really big transformation in where we spend our time, right? And you point out how this decentralizes, delocalizes people's interactions. It makes global interactions so much easier and really lowers the barrier for collaboration. Can you say a bit more about how this impacts people's experiences in your world?
00:34:09
Speaker
Yeah, I'm really hoping it leads to a lot more cross organization collaboration. It's conceivable that keeping organizations separate might actually become quite difficult under these conditions because a person can meet a person outside of the organization just as easily as they can meet someone inside. So yeah, under that situation, how do you have an arms race? I don't know. Maybe it's impossible. Maybe we'll be safe at that point. Once everyone's sort of
00:34:37
Speaker
essentially in the same room all of the time. Right. Dystopian stories of the future are all too common, especially when powerful artificial intelligences and robots come into the mix. Even if these portrayals represent real risks, we feel that risk communication is at its most powerful when it's paired with hope.
00:35:04
Speaker
Part of the goal of this contest was to steer our collective image of the future towards something more positive, but without ending up in an unrealistic utopia either. We hope that this can generate some real insights into how we might navigate towards a future worth fighting for. Let's hear how Mako sees his work in relation to other cultural portrayals of the future.
00:35:24
Speaker
So we wanted to spend a bit of time talking about how cultural representations of some of the issues you explore in your world are kind of impacting the way people think about these problems and how those could be challenged.
00:35:38
Speaker
So one example of that is the issue of alignment. Often we see the alignment problem, in kind of dystopian sci-fi, as having gone wrong in a way such that AI becomes sentient and, you know, has like a robot army and turns back against us, like in The Matrix.
00:35:57
Speaker
That doesn't really seem to be what AI safety researchers are actually worried about because there are so many different ways that misalignments can occur that don't involve that kind of like physical agent battle robot army deal. Do you feel that that is something that we culturally overestimate? Oh, yeah. It's never going to get to the point where there are two armies and one of them belongs to humans and the other belongs to AIs.
00:36:25
Speaker
It ends for humans a lot sooner than that. Humans can get diseases, that's one of our big weaknesses, and that sort of does not look like an army, and it's very difficult to fight it, and the main way you fight it is by retreating into sealed bunkers. Violence from a superintelligent AI would not look like an army, I would say that's true.
00:36:52
Speaker
One thing that your world does include is I think there's this major fear that the AI will be malicious, or as you were saying, have sinister thoughts. And I'm curious about
00:37:04
Speaker
possible futures where AI isn't necessarily malicious but just misaligned in a way that is deeply problematic. Do you have any feeling for how much you're actually worried about a mean AI versus a problematically badly aligned AI that's not mean, if that makes sense? I don't really think about mean or vindictive or punitive AIs or threatening AIs very much.
00:37:31
Speaker
Yeah, I mean, most of the risk we're concerned about is just you get a system that wants to do something that is not really compatible with the continuation of humanity. And the issue is, there are many goals that could lead to that. Like if it just sort of... I can probably think of a plausible one.
00:37:53
Speaker
We can use the example of, I think it's Nick Bostrom, who has the paperclip maximizer. Right. This machine that just wants to create paperclips out of everything. And this doesn't sound too bad until it starts ripping you apart to make more paperclips from the iron in your blood. Yeah, because eventually, yeah, that becomes profitable. Yeah. And the iron in the sun and the iron in the other stars. Which you needed to survive. Yeah. So it's essentially, I'm more worried about a dehabitation.
00:38:21
Speaker
the same way that humanity dehabitates other species. If you have a much more powerful industrial system that doesn't care about the human habitat, then it will eventually get to the level of power where it has a reason to turn it into something else and then we won't have it and then we'll die.
00:38:42
Speaker
Yeah, interesting. Are there examples of media where this is the case that you can think of where an AI doesn't necessarily want to destroy us but has other kind of maybe benign seeming goals that end up being a big problem for us? No, benign seeming. No. I don't know if I've seen that. That would be really interesting.
00:39:02
Speaker
Yeah, I mean, it's interesting how it might just be a feature of the stories that are easiest to tell because it makes sense to have a villain in the story that actively is trying to do something that's opposed to our interests as humans. And it's a little bit harder to tell a story where the villain just doesn't really care about you or maybe know about or understand you.
00:39:25
Speaker
You know, I think the distinction is you can tell the story for longer if the humans have a fighting chance and they exchange a lot of blows back and forth for a long time. In the situations where the system is just kind of indifferent to humans and just sort of destroys them incidentally, that's the situation where, yeah, it's way more powerful than us. There's not much we can do. Yeah. You don't get a long story.
00:39:52
Speaker
It's interesting that the outcomes you're describing there are kind of zero-sum stories: whichever of the AI or humanity wins, the other loses. Maybe there's a parallel to the board game world here, where exploring positive-sum systems can lead to more interesting narratives. Like one thing I found really interesting about the resolution of your world is the way that the AI itself kind of disappeared or was dissolved into and became one with what became of humanity.
00:40:19
Speaker
I think there's often this idea that like you were saying before, there's like an army, there's two sides, there's humans and the AIs. But if we assume that there's a real solution to this alignment problem and the goals of the system are truly reflecting the goals of humanity somehow as a whole, then it kind of does make sense that we would become one thing. Yeah, I think essentially that has to happen.
00:40:41
Speaker
It's also interesting to me to see the variety of positive futures people came up with. One of the things we asked for in the competition was a world that we would reasonably want to live in. And that is really different for different people, given their preferences and cultural background and life experiences.
00:41:02
Speaker
And I think it'll be really interesting to see what others who think about a positive future imagine. Yeah, telling a story that would lead to an outcome that's agreeable to everyone, I found that really difficult. And that ended up leading to this sort of synthesis of AI control and human control, and realizing that there shouldn't really be a difference.

Utopian vs. Dystopian Narratives

00:41:24
Speaker
You give the AI control, and if it really has humanity's interests at heart, then
00:41:30
Speaker
It hands the control right back. Yeah. Well, in addition to collecting more interesting and diverse visions of the future, one of the reasons we ran this contest is to try to show that imagining positive futures can help us to address the problems that are standing in our way. That's not really something we get from the probably more common dystopian views of the future.
00:41:54
Speaker
So the hope, I think, is that by imagining things we actually want to work towards, we can get people to be motivated to carry out the changes we need to get there. Do you think that that's a reasonable kind of mechanism of action or pathway to aim for? Absolutely, yeah. And do you have any media that comes to mind that you think helps this cause? I don't. No, I can't think of a lot of really helpful utopian fiction.
00:42:23
Speaker
Why do you think that is? Like, why are dystopian views so common? I think there's an idea that if you depict the problem, then that will be more motivating to people. I'm not sure how true that is, but enough writers believe it that that's what you get.
00:42:38
Speaker
It's funny, when you put it that way, it makes me think that actually the demancatting in your world is kind of the ultimate motivating dystopian fiction, right? It's a prediction by researchers that there will be this real dystopian future if we don't get ourselves together. Yeah, interesting.
00:42:59
Speaker
Well, it's been fun to see how just by being positive, the stories we've collected have defied so many of the typical narratives that we're used to hearing about the future. I'm curious if there are any specific tropes that you were thinking about when you were building your world that you kind of tried to consciously oppose.
00:43:15
Speaker
The norm in media is to depict AI as arriving very gradually. So like you'll have robots that are roughly human level and slightly below human level and they're walking around and doing everyday tasks and there's not really a reason to believe it's going to go that way. I mean, as far as I'm aware, it could.
00:43:37
Speaker
But I think traditional science fiction is way too confident of that. I think it's pretty likely that we're going to get to autonomous cars, and then like four years later we're going to have AGI, and it's going to be way beyond our capabilities, and there's not really going to be a long period where we have robots.
00:43:57
Speaker
Yeah, I didn't notice that in your world. It was kind of funny because you have, we do have this thread where we have bipedal walking machines that are used for all kinds of remote manual labor and such, but they don't really feature that strongly in the concerns of, you know, the organizations trying to do safe AI development. It's not like there's this concern about a robot war, which you'd often see in more traditional sci-fi stories. I feel like they just kind of withered on the side because they weren't really the main heart of the issue.
00:44:26
Speaker
You know, the heart of the issue is when you have something that's better at making AI than we are. Interesting. Or it's better at logistics, or at assembling or securing its stuff, or it's better at science. Yeah. Or it combines those things, which especially might just turn out to be very easy to do. Yeah. You had that whole... I forget the name of it, Orion or something.
00:44:54
Speaker
AuraClan, I wanted to acknowledge the possibility that we might end up with a logistics intelligence that can't really do science and can't do AI research and doesn't understand itself particularly well enough to self-improve its intelligence.
00:45:11
Speaker
Yeah, because that could still be dangerous in a kind of fun and traditional science fiction way because you have this thing that's protecting its facilities, but it's also kind of dependent on human technology and it has these blind spots.
00:45:27
Speaker
Yeah, so AuraClan is a system that kind of seems to do materials science, basically, and also to be able to produce logistics and construction industrial systems, so that it can actually carry out the whole stack
00:45:45
Speaker
of hypothesizing and testing and manufacturing new materials and things like that. It's totally conceivable that you could get something that does that without really much self-reflection and without any impetus to think about intelligence itself; a sort of science algorithm, or science and manufacturing algorithm. And that'd be really useful and it would generate a lot of wealth for society, but it's not guaranteed.
00:46:14
Speaker
And it's not guaranteed that if we get that, then that's what we're going to have for a very long time. It might be that that's quite a brief era, and then we get something that's self-improving and strategic, and then that era is over, and we need to make sure that that transition out of that era is positive.
00:46:35
Speaker
I really enjoyed thinking about AuraClan. It was like a little mini story within your world that explored this totally different breed of AI that could develop. You really managed to pack a lot of overlapping narratives into your submission. There's one other kind of meta thing that I really appreciated that you included, which is this idea at the very end of your world where some people started to believe maybe their reality was a simulation,
00:47:03
Speaker
and that it was meant to work out some problem. And now that they'd reached the end of solving their piece, there wouldn't be anything left to simulate and the world would be shut down. And you kind of tongue-in-cheek point out that this actually happened, you know, in your appendix, because you stopped thinking about the world and so nothing else happened for them. I really love this idea. I'm curious if you've seen this happen in another piece of media, like was there an inspiration for this concept or did it just come to you?
00:47:32
Speaker
Probably at some point, yeah, I mean, I would have seen someone frame the question, do characters in stories experience their own existence? And I guess the related question is, when an actor is performing a character, are the things that they're performing, is that in a sense being experienced? If they're performing heartbreak, is that creating suffering on some level? Yeah. Probably.
00:47:59
Speaker
now that I reflect on it. I hope that doesn't upset too many people. It really changes the calculus because if peace is destruction, then the people in your world shouldn't necessarily be pursuing peace. I could imagine if it kept iterating a bit more, maybe things would go wrong in a way that would keep you writing the story.
00:48:22
Speaker
Yeah, I think it's quite plausible that in reality we'll be pursuing some really complicated and difficult form of peace, where we have to maintain narrative tension about some fact of our society in order to make it worth it to continue simulating it, by whoever would be simulating it. Because I have done a bit of work on simulationism, and
00:48:45
Speaker
I get the impression it probably is possible to reason from what we know about technological societies, as a result of being one, to make guesses about the kinds of questions that simulators might be interested in. And once we know that, then we can start to reason about what the rules are
00:49:06
Speaker
for being a simulator and what the rules of our simulated, possibly simulated reality would be. And then things get really interesting, but I haven't found a reason to talk about that much publicly.
00:49:22
Speaker
I like your comment about pursuing a complicated kind of peace that has to maintain some tensions. That reminds me again of the Matrix, where Agent Smith, I think, says they previously tried a Matrix where everything was perfect, but the humans rejected it and they weren't comfortable in that space. Some deep animal part of their brain just wasn't satisfied with that level of peace and lack of conflict. And so they had to make, you know, 90s New York instead, which has plenty of conflict.

World-Building as a Tool for Innovation

00:49:50
Speaker
I see. My preferred explanation for nineties New York is... if I were writing a version of the Matrix now, knowing what I know about the future of AI, I probably think you can write that story. It's not inherently unrealistic. How you do it is, it would be a failed alignment scenario: someone made AGI at some point in the nineties.
00:50:15
Speaker
And maybe they were foolish and they gave it the goal of sort of maintaining that condition, maintaining that society. They thought it was a decent society and it was a good fallback and they wouldn't want it to do anything worse than that. Maybe they wanted it to do something better, but that failed for some reason. It fell back to this. And now it's just doing this forever. We're stuck in this life, this era. That is something you'd want to fight against.
00:50:44
Speaker
Yeah. So you're like, I hope this AI makes humanity better. And it's like, best I can do is 90s New York. Yeah. It's our hope that listeners like you will be inspired by some of the stories we're exploring in this podcast and bring your own experiences and insights to the table and flesh them out. The process of world building has great potential to make a positive future feel more attainable.
00:51:14
Speaker
This can be incredibly powerful, whether you're a creative person looking to produce rich works of fiction, or have a more technical focus and are looking to reach policymakers or the public. Our entrants span this spectrum, and their hopes for their works are as diverse as their perspectives and expertise. I asked Mako what kind of impact he hoped his work would have on the world.
00:51:34
Speaker
I have some questions about your hopes for the creative impact of the world that you made. Which aspects of what you've written would you most like to see taken on by popular media? I think there are definitely things we could do here. I just think I don't know what they are yet. There are more conversations that need to be had. I think world building did turn out to be really useful.
00:51:58
Speaker
thinking creatively about some kind of ambitious things we could do and depicting them in a way that makes them visceral and makes them feel real and possible. That is really useful for sort of bringing people's thinking up and letting them imagine and letting them believe they can build something big and complicated and new. The question is, what should we build beyond what I've already described, which I'm not even sure if that's what we need.
00:52:27
Speaker
Yeah. It's great to hear that this world building exercise was helpful for you in fleshing things out and imagining this realistic positive future. I'm curious what you learned about your own vision of a positive future by doing the exercise. So I think before I started, I couldn't see any solution to the geopolitical problem. And so we end up just pinning all of our hopes to the corporations doing the right thing.
00:52:56
Speaker
Yeah, being asked to do this being offered this bounty, I think I managed to think of one solution, one potential solution to that problem. It's still a very large problem. And there's no guarantee that that solution will work. Yeah, but there's hope.
00:53:15
Speaker
Yeah, that's cool. I like how, in one of your timeline points, this concept of demancatting, of demancats showing the fearful potential of AIs, comes from a worldbuilding competition. And, you know, part of what sets this off is, again, this kind of like meta, recursive quality. But it's really nice to hear that that actually was helpful for you in developing these concepts, just to have this challenge posed to you.
00:53:41
Speaker
do you imagine that it would be useful to have more thinkers do this kind of exercise to source possible solutions like this? Absolutely. Yeah. Yeah. The challenge of exploring sort of ambitious possibilities and then asking yourself in detail what would need to happen in order for this to be possible.

Technology, Politics, and Future Risks

00:54:00
Speaker
Uh, if you don't go through that exercise, then you end up with a really narrow conception of what is possible.
00:54:08
Speaker
What sorts of exploring would you most like to see other people doing in this space, either for your own creative development and expression or for the good of our species, I guess?
00:54:24
Speaker
I would love to see more narrativization or illumination of conflict dynamics and the geopolitical dynamics that sort of get in the way of transparency and cooperation. I think we need to understand those a lot better because if you just talk to diplomats, they're going to bullshit you. So it's not clear where to look for those. And when you find out how things actually work, it's usually a lot dirtier than you want to believe.
00:54:54
Speaker
So we need really good depictions of that. We need to give people a sense of how things really work. And then we need to develop visions of how things could change. Realistic and ambitious ones, because we're going to need to change a lot. Yeah. I mean, it occurs to me that some of these changes will require pretty diverse forms of expertise, like psychological insights, sociological ones.
00:55:21
Speaker
What sorts of expertise would you be most interested in having people bring to thinking about a positive future? We need to get really good at talking about how technology and politics interact. I think there are going to be some pretty major transitions soon as things start to become more transparent and surveillance starts to become more common. And there's a lot of opportunity there.
00:55:49
Speaker
I guess I would recommend David Brin's book, The Transparent Society. In some countries, you're definitely going to end up with either a police state, a panopticon, or a system with true transparency, and that's a huge fork. Those are very different futures. They're both surveilled, but one is accountable, and it's doing what it's supposed to do, and the other is just extremely repressive.
00:56:15
Speaker
I think that kind of transition could do a lot of good work or a lot of bad work, depending on how it ends up. And of course, government has a huge influence on what kind of research projects happen. And military research and weaponization of technologies is, of course, entirely in the hands of government. So, yeah, the interaction between technology and politics is really important. We need people who are good at reasoning about that and we need to get better at it.
00:56:44
Speaker
Well, we've covered a tremendous amount of ground in this conversation, and your world still has plenty of other interesting details that we could dive into that we haven't even touched on. So I recommend anyone who wants to learn more about these ideas to visit worldbuild.ai, where you can dive into Mako's world and the other winning submissions.
00:57:06
Speaker
and learn more about them. And Mako, thank you so much for your time, for being on this podcast, for all the labor you put into this world that you've shared with us. I really appreciate you joining us. If this podcast has got you thinking about the future, you can find out more about this world and explore the ideas contained in the other worlds at www.worldbuild.ai. We want to hear your thoughts. Are these worlds you'd want to live in?
00:57:36
Speaker
If you've enjoyed this episode and would like to help more people discover and discuss these ideas, you can give us a rating or leave a comment wherever you're listening to this podcast. We read all the comments and appreciate every rating. This podcast is produced and edited by WorldView Studio and the Future of Life Institute. FLI is a nonprofit that works to reduce large-scale risks from transformative technologies and promote the development and use of these technologies to benefit all life on Earth.
00:57:59
Speaker
We run educational outreach and grants programs and advocate for better policymaking in the United Nations, US government, and European Union institutions. If you're a storyteller working on films or other creative projects about the future, we can also help you understand the science and storytelling potential of transformative technologies.
00:58:16
Speaker
If you'd like to get in touch with us or any of the teams featured on the podcast to collaborate, you can email worldbuild at futureoflife.org. A reminder, this podcast explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we all want. The ideas we discuss here are not to be taken as FLI positions. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
00:58:44
Speaker
Thanks for listening to Imagine a World. Stay tuned to explore more positive futures.