Imagine A World: What if new governance mechanisms helped us coordinate?

Future of Life Institute Podcast
Are today's democratic systems equipped well enough to create the best possible future for everyone? If they're not, what systems might work better? And are governments around the world taking the destabilizing threats of new technologies seriously enough, or will it take a dramatic event, such as an AI-driven war, to get their act together?

Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

In this first episode of Imagine A World we explore the fictional worldbuild titled 'Peace Through Prophecy'. Host Guillaume Riesen speaks to the makers of 'Peace Through Prophecy', a second-place entry in FLI's Worldbuilding Contest. The worldbuild was created by Jackson Wagner, Diana Gurvich and Holly Oatley. In the episode, Jackson and Holly discuss just a few of the many ideas bubbling around in their imagined future.

At its core, this world is arguably about community. It asks how technology might bring us closer together, and allow us to reinvent our social systems. Many roads are explored, a whole garden of governance systems bolstered by Artificial Intelligence and other technologies. Overall, there's a shift towards more intimate and empowered communities. Even the AI systems eventually come to see their emotional and creative potentials realized. While progress is uneven, and littered with many human setbacks, a pretty good case is made for how everyone's best interests can lead us to a more positive future.

Please note: This episode explores the ideas created as part of FLI's Worldbuilding Contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

Explore this imagined world: https://worldbuild.ai/peace-through-prophecy

The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected]. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

Media and concepts referenced in the episode:
https://en.wikipedia.org/wiki/Prediction_market
https://forum.effectivealtruism.org/
'Veil of ignorance' thought experiment: https://en.wikipedia.org/wiki/Original_position
https://en.wikipedia.org/wiki/Isaac_Asimov
https://en.wikipedia.org/wiki/Liquid_democracy
https://en.wikipedia.org/wiki/The_Dispossessed
https://en.wikipedia.org/wiki/Terra_Ignota
https://equilibriabook.com/
https://en.wikipedia.org/wiki/John_Rawls
https://en.wikipedia.org/wiki/Radical_transparency
https://en.wikipedia.org/wiki/Audrey_Tang
https://en.wikipedia.org/wiki/Quadratic_voting#Quadratic_funding
Transcript

Introduction to 'Imagine a World' and Podcast Context

00:00:00
Speaker
on this episode of Imagine a World.
00:00:12
Speaker
technology capability arms races with AI that make it hard to slow down and be like, whoa, whoa, whoa, we got to like figure out alignment before we crank up power on this. So like I'm kind of coming out from a position of doubt. Like I have ideas that I'm excited about, like prediction markets and affinity cities, but these things haven't been tried. And there's a lot of kinks that need to be worked out.
00:00:32
Speaker
Part of my excitement about like the decentralized, more like anarchic world where there's lots of different options and things coexisting is because that's a world that's full of experimentation with different social systems and institutions.

Exploring 'Peace Through Prophecy' and Project Creators

00:00:49
Speaker
Welcome to Imagine a World, a mini-series from the Future of Life Institute. This podcast is based on a contest we ran to gather ideas from around the world about what a more positive future might look like in 2045. We hope the diverse ideas you're about to hear will spark discussions and maybe even collaborations. But you should know that the ideas in this podcast are not to be taken as FLI-endorsed positions. And now, over to our host, Guillaume Riesen.
00:01:29
Speaker
Welcome to the Imagine a World podcast by the Future of Life Institute. I'm your host, Guillaume Riesen. In this episode, we'll be exploring a world called 'Peace Through Prophecy', which was a second-place winner of FLI's worldbuilding contest.
00:01:42
Speaker
At its core, you could argue that this world is about community. It asks how technology might bring us closer together and allow us to reinvent our social systems. Many roads are explored, a whole garden of governance systems, bolstered by artificial intelligence and other technologies. Overall, there's a shift towards more intimate and empowered communities. Even the AI systems eventually begin to see their emotional and creative potentials realized.
00:02:06
Speaker
While progress is uneven and littered with many very human setbacks, a pretty good case is made for how everyone's best interests can lead us to a more positive future. Our guests today are Holly Oatley and Jackson Wagner, two members of the three-person team who created this world. Their third teammate, Diana Gurvich, created the digital mural accompanying their submission.
00:02:27
Speaker
Holly Oatley wrote the two short stories. She's a creative writer with an interest in positivity, history, fantasy and gay culture. And Jackson Wagner is an aerospace engineer with an interest in effective altruism, rationalism and forecasting. Hi, Holly and Jackson. Thanks so much for joining us. Thank you. It's really great to be here. Yeah, thanks. It's great to be on the podcast.
00:02:48
Speaker
Um, how did you two come to work together on this? I mean, Jackson and I have been friends for a long time, since college, always discussing nerdy stuff together. And yeah, Jackson just came to me with this idea and, you know, asking if I wanted to be part of the contest. And I took a look at it and, you know, I've always been interested in worldbuilding. I have all these ideas sort of kicking around in my head about
00:03:14
Speaker
futurism and what, you know, designing my own bespoke future society might be like. And I decided, okay, well, let's adapt some of that to this project. Is that kind of how you experienced it, Jackson? Yeah, I mean, it was very fun to kind of like,
00:03:31
Speaker
Get together with a bunch of friends and stuff. But how I initially got involved in the Future of Life Institute contest was, I

Challenges and Artwork in Worldbuilding

00:03:38
Speaker
was like looking at the forum one day and I saw that the Future of Life Institute made this post announcing they were doing this cool AI worldbuilding contest about, like, optimistic futures.
00:03:47
Speaker
And someone in the comments was like, this is bad. What's up with these like really constraining, you know, there were these constraints in order to kind of ensure that we were depicting like a, you know, aspirational world where like a billion people don't like die in some kind of nuclear catastrophe along the way to like the AI future.
00:04:04
Speaker
And some people in the comments were like, oh, it's like so unrealistic to like have all these constraints or like the future would look so incomprehensibly alien as soon as we get AI that it's like impossible to tell a story that sounds normal or something. And I thought that sounded kind of silly. Like there's definitely a lot of ways that things could go wrong, but there's also a lot of kind of comprehensible realistic scenarios that I think could be told. So I like got into an argument in the comments about how like actually the contest was reasonable and like it was good and you know, it wasn't like, you know, misleading or whatever.
00:04:33
Speaker
Then after going back and forth writing a couple long comments, I was like, alright, now I've defended this position for long enough that I could have just entered the thing, you know? Like, you only have to fill out these short answer questions and stuff. So I started like that, but then it turned out to be more work than I thought to think about, like, all these
00:04:52
Speaker
different things, although it was very fun. So I figured, oh, like, you know, I couldn't do the art myself and I could try and write the stories, but then they would just come off as like the exact same tone as the writing for the short answer questions.

Inspirations and Realism in Fictional World

00:05:04
Speaker
So I figured I should find some people and then, um, just figured. Yeah. I mean, I considered like looking for random folks on the internet, but then I thought like, wait a second, like I know some people who write stories and make art. Uh, do you want to say something about your third colleague, um, Diana and how they got involved in this?
00:05:23
Speaker
Yeah, so Diana was a friend of Tanina, my wife, and we just were familiar with some of her artwork and thought that it would be really awesome to have her contribute to the project so that we could have some like, you know, nice custom art. I forget like how exactly the idea for like a kind of Disney style, big mural of everything and like timeline form came together. But
00:05:52
Speaker
I don't know, it was really fun to, you know, try and do a visual illustration of like the history of the world rather than, you know, just individual moments. Yeah. It was very cool to see you guys going back and forth and kind of brainstorming together and being like, what if we do it like this? And what if we put this over here? And incorporating some actual, I think you guys said that you incorporated some actual AI-designed imagery into that as well.
00:06:18
Speaker
I think it was all, it was like Diana doing the characters that are in the image, and then Tanina did sort of a collage of the sort of background pictures, and then they helped weave that all together. I appreciate that on worldbuild.ai now that's been, like, filled out with, we've got like some extra artwork courtesy of some DALL-E generations.
00:06:42
Speaker
Yeah, yeah, that was me. Super fun. Yeah. Wait, so you're saying that your wife actually generated some of the images for the thing and was not credited?
00:06:51
Speaker
Uh, well, like it was all kind of like a big, I didn't want to submit as four people because like, then that would, I don't know. Then like me and Tanina, you know, who are like basically, what, you know, we're married, and we would have got like $10,000 instead of $5,000. Right. Very generous of you. Well, shout out to Tanina for her artistic

Historical Insights and Future Societies

00:07:11
Speaker
contributions as well. She is the power behind the throne in a lot of ways. Nice.
00:07:17
Speaker
I'm curious, so like Jackson, you're an aerospace engineer by training and do coding and Holly, you have this history background. How did those perspectives influence the way that you thought about this future? I feel like.
00:07:31
Speaker
They have a common origin in the sense that I'm one of those people who grew up reading science fiction. I've always been interested in big ideas and I've always been sort of like hopeful about and interested in the long-term future of civilization and stuff. I think a lot of what motivates some people to get into space
00:07:52
Speaker
projects like engineering rockets and satellites and stuff is because just in the broad culture, people have this vision of space as kind of a metaphor or a concrete example of what humanity will do in the future. So there's this kind of attitude. People working at SpaceX trying to make the Mars rockets to go and settle this other planet, I have to imagine they're being driven by this similar kind of wanting to
00:08:19
Speaker
think about humanity's long-term future and make the future go well. I think in terms of the practical details, satellite engineering is similar to other sorts of finicky engineering where you're just building stuff and dealing with different constraints. Maybe there is some attitude of
00:08:38
Speaker
I don't know, trying to be realistic in a certain way that is taught by engineering, but you know, that's probably taught by a lot of different disciplines, including, like, getting a realistic picture of history. Yeah, but I think there's a common origin in terms of just being excited about the future and wanting it to go well.
00:08:54
Speaker
Yeah. And my experience has been studying history. I say I'm a student of history as well as a teacher of history. I really want to try and understand these past societies, how they worked, how the logistics of everything
00:09:09
Speaker
in them came together, the ideas, the resources that they had at their disposal, you know, all the sort of interlocking systems. And then, you know, in my writing I kind of often turn that outward and try to think about, okay, you know, how to design a society, design a reality. And I think,
00:09:27
Speaker
I took a lot of that sort of thinking about the future as a historian might, possibly even, you know, in the back of my head, what would a future historian say, you know, looking back on our era, and what kind of world could be, you know, looking back at us. So thinking about all these things and, you know, turning it to the future instead of the past. And it fits well with a lot of my natural inclinations and what I like to learn about and to think about.

Community Engagement and AI in Society

00:09:55
Speaker
It's a really cool reversal there of thinking about imagining the future as backwards history. I've even in some of my story ideas, I haven't gotten to them yet, but some of my story ideas for other futurist projects would be a future historian analyzing the early internet age or whatever. I think that could be very fun.
00:10:19
Speaker
Yeah, I've thought about that too. Like imagine like a really, really far future person and like the only data they have from our whole era is like one teenager's video blogs and like, what do they make of us? Yeah. Yeah. What gets lost, what gets saved and like, what are their tropes of like how they describe us in the same way that, you know, historians have tropes of describing, you know, the Roman empire and things like that, which may or may not be accurate. Yeah. Very interesting. It would be so fun. Super fun.
00:10:52
Speaker
This world is jam-packed with innovative ways of structuring and participating in societies. Many of its inhabitants are deeply engaged with their local communities, and they seem to really benefit from this. I wanted to take a few minutes to understand how concepts like prediction markets or affinity cities might provide people with this genuine sense of belonging and influence. So, Jackson, before we get into some of the specific concepts that your submission explores, could you just give us kind of a 10,000-foot view of the arc that your world goes through?

Global Coordination and AI Safety

00:11:21
Speaker
I think the kind of big picture structure of our scenario from 2022 to 2045, I tried to reflect this in the artwork that we put together in Diana's beautiful illustrations. So if you're looking at the artwork, you can see that our story has a sort of three-act structure where at first you have all this kind of decentralized innovation and different communities spinning off their own things, different governments trying new things.
00:11:47
Speaker
all kinds of different AI technology just being developed all over the place by different organizations. And it's this world of rapid change and economic growth and new ideas and experimentation. And it's this bewildering, almost hard to control, but seems like things are going well pace of change. And then the second act is the flash crash war, which is this
00:12:11
Speaker
kind of near miss conflict between the US and China where you have this tense standoff and then you have these AI systems that neither side fully understands that are just in charge of like sensor fusion and dispatching forces and stuff. And because the AI systems are like not totally understood, you end up setting off this like diplomatic crisis where like
00:12:31
Speaker
both sides interpret the aggressive signals of the other and then they start moving the forces around too rapidly based on the AI signals and both sides think they're being invaded by the other. And so then that acts as a catalyst that is symbolic of this totally decentralized, full speed ahead economic growth and technology developments with no international coordination over it.
00:12:53
Speaker
is not a model that is going to work long term because this technology is just too powerful and it can easily kind of go out of control. So that acts as a catalyst that gets people thinking less about competing with each other and more about like, hey, we need to coordinate to deal with this common problem and create a solution that's amenable to humanity overall.

Innovative Governance Systems and Social Experimentation

00:13:14
Speaker
So then the middle image in Diana's painting is an image of the Delhi Accords, which is our name for a big, kind of like nuclear non-proliferation-esque, but much wider in scope, kind of agreement that is signed by all the countries to coordinate on AI safety research to make sure that we can, as we make these systems more and more intelligent, that we make them align to human values and kind of like
00:13:36
Speaker
controllable and also on suppressing AI technology outside of this to make sure that we buy enough time for this crucial research to happen. We're going to centralize this dangerous technology just like we did for nuclear weapons and uranium ore and stuff.
00:13:51
Speaker
And also sort of setting some ground rules for, you know, like making sure that the international system of nations doesn't get too upended or that like economies don't become like spectacularly unequal and kind of putting a little bit more human intention onto what before had been a kind of decentralized path of just like growth and boom and bust. And then after that in our painting, we're just kind of depicting this optimistic world that's going forward with more sort of human control and like
00:14:21
Speaker
participation to create the kind of world that we want to be in.
00:14:41
Speaker
And if they're correct about that, then they get money and if they're wrong about it, they lose their money. Yeah. And so this is a way of betting that is reinforced by people's interest in not losing money. And it basically sources common knowledge from a large group of people about what the future will hold.
00:14:58
Speaker
Yes, exactly. So you might have a prediction market about who might win an election, right? And so if right now the market thinks that there's only a 40% chance that one candidate will win, but I think, oh, actually, that candidate has better odds. I think there's a 60% chance that they'll win. Then I might buy some shares in that candidate, which would also push up
00:15:22
Speaker
the price a little bit in the market in the hopes that when they actually do win, then the shares would cash out at like, you know, $1. So, you know, I'd buy them for the 40 cents and then, you know, I have like a 60% chance of getting all those shares for a dollar when the election happens.
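To make that arithmetic concrete, here is a minimal sketch of the trade being described; the function name and the numbers are illustrative assumptions, not anything from the worldbuild itself.

```python
# A minimal sketch of the share-pricing arithmetic described above.
# The function name and numbers are illustrative assumptions.

def expected_profit_per_share(market_price: float, my_probability: float, payout: float = 1.0) -> float:
    """Expected profit from buying one 'yes' share at market_price, given my own
    probability estimate that the event happens. Shares pay `payout` if it does, else 0."""
    return my_probability * payout - market_price

# The example from the conversation: the market prices the candidate at 40 cents,
# but I believe they have a 60% chance of winning.
print(expected_profit_per_share(market_price=0.40, my_probability=0.60))  # 0.20 expected profit per share
```

If many traders keep buying until that expected profit disappears, the market price drifts toward the crowd's aggregate probability estimate, which is the "sourcing common knowledge" idea above.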
00:15:39
Speaker
Yeah. Well, one thing that's kind of tripping me up when I think about prediction markets in this way, like if it's about an election, then the ground truth is inherently within the people's beliefs. Like if everybody in the voting group was on the prediction market, then you would expect the prediction market to a hundred percent reflect the reality of their voting habits. And so it would have all the information
00:16:03
Speaker
But when you talk about something like an economic policy, for example, then what you're capturing in a prediction market seems more like the average belief about the impact or efficacy of this policy. And, like, is there really that much reason to believe that the wisdom of the crowd will be right about that?
00:16:19
Speaker
Yeah, I mean, well, this is where like, uh, it gets into a lot of like detailed theorems and things about market efficiency and stuff. So on the one hand, it's not just voting, right? Because like, if I don't have much of a belief or something, then I might not participate in like some election market or something versus if I'm like super nerding out and I've like built all these computer models and stuff, or I just have like a really, really firm belief, then I might be motivated to like put in a lot of cash on, on one side, um, and kind of.
00:16:49
Speaker
express my, like, certainty or strength of belief. Yeah, things like that. Yes. The prediction markets are just one element. There's a whole ton of really interesting little elements that you've written into the story and all these different governments. And so, a couple of things going on: in the U.S. you have these prediction markets that are helping to figure out what different policies might do based on some metrics of what we want our society to look like.
00:17:12
Speaker
We also have liquid democracy you mentioned where you can kind of like give your vote to somebody else and they in turn can give their vote to someone else. So you have this kind of flow of impact based on who you trust or whose expertise you believe in. And then we have affinity cities where people are kind of free to move around and create their own little communities around special interests. Can you say a little more about that?
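The delegation mechanics behind that liquid democracy idea can be made concrete with a minimal sketch; the voter names and choices below are illustrative assumptions, not anything from the worldbuild.

```python
# A minimal sketch of resolving liquid-democracy delegations: each voter either votes
# directly or delegates to someone they trust, and delegations chain until they reach
# a direct vote. Names and choices below are illustrative assumptions.

def tally_liquid(direct_votes: dict[str, str], delegations: dict[str, str]) -> dict[str, int]:
    """Count one vote per person, following delegation chains to a direct vote.
    Votes stuck in a cycle with no direct vote are dropped."""
    totals: dict[str, int] = {}
    for person in set(direct_votes) | set(delegations):
        current, seen = person, set()
        while current in delegations and current not in direct_votes and current not in seen:
            seen.add(current)
            current = delegations[current]
        if current in direct_votes:
            choice = direct_votes[current]
            totals[choice] = totals.get(choice, 0) + 1
    return totals

# Alice and Bob trust Carol's judgment on this issue, so their votes flow to her choice.
print(tally_liquid(direct_votes={"carol": "yes", "dave": "no"},
                   delegations={"alice": "carol", "bob": "carol"}))  # {'yes': 3, 'no': 1}
```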
00:17:35
Speaker
Yeah, so I don't know if there's a good name for what I'm thinking of as the affinity cities concept. The idea of charter cities is very close to this. Like, Prospera, Honduras is this kind of, like, aspiring city in Honduras that has gotten special permission from the Honduran government to sort of experiment with their own civil law code. So they're still, like, under the constitution of Honduras and everything, but they can have their own kind of, like, business regulations and their own, like, land

Worldbuilding Implications and Societal Design

00:18:03
Speaker
use. So it's kind of like
00:18:05
Speaker
a policy experiment zone that goes beyond the special economic zone concept. Like, usually special economic zones are just about kind of boring stuff, like tax credits for, like, manufacturing industries and things, but Prospera is aiming to have
00:18:21
Speaker
kind of its own systems of sort of like governing system and different like citizenship rules and be able to experiment with different kinds of architecture and all different sorts of things. But that's explicitly focused on kind of legal experimentation and like having a different regulatory regime than might be present in other countries versus one of the things that I think has a lot of potential is just the possibility for people to
00:18:49
Speaker
create basically intentional communities that have shared cultures or shared goals without needing to get their own semi-constitution or anything. So I see that being very democratic in a way because you can sort of opt into the society instead of trying to vote and fight over what your society is going to be about. You can kind of go from place to place and find the place that matches you the best.
00:19:17
Speaker
Yeah, I was thinking about that. I like the idea of, like, I also am somebody in like sort of the early middle of their life who has some flexibility as to where I am and looking for places to go with people who share my values and all that. But it's funny to think about, like, second-generation affinity city residents. Like, imagine growing up in, like, surf town, California, where everything is about surfing and, like, you just hate the water.
00:19:39
Speaker
like there could be some interesting stories there of these really strong cultures that you grow up in and don't necessarily agree with. It's kind of interesting. Like we could imagine that, you know, maybe people have the opportunity to jump around and find some sort of value city that suits them better. But then on the other hand, you know, you've got family and various ties to whether it's surf town or, you know, big business town or maybe it's fungus growing town.
00:20:09
Speaker
A very exciting place for biologists. Yeah, exactly. So yeah, I think you could do so many stories about like the culture there and what it means to kind of craft a culture, craft a place around a particular set of ideals or culture. I think there could potentially be clashes there, but you know, hopefully we can be optimistic. You know, people are always looking for a sense of community and
00:20:35
Speaker
Community was one of the really big topics that we thought a lot about. And today people are generating communities online, and in some ways that's working well, and in some ways people are often saying that they are feeling dissatisfied or don't have a sense of community. So
00:20:52
Speaker
the hope is to see places like this, and the idea of other things, like these neighborhoods that people are living in, other aspects here, giving people more of a sense of community that's lacking in this world. You know, to turn my feeling of, like, technology makes me feel so isolated. Well, what if it didn't? What if it made me feel connected and able to share my ideas and be with like-minded people? What if that was the main experience that I was having of it? So that's

AI Challenges and Global Cooperation

00:21:20
Speaker
my kind of
00:21:20
Speaker
you know, hopeful, optimistic thing, but I think you could totally explore, you know, the pros and cons of such a city.
00:21:30
Speaker
This is definitely something, sort of, reading our story, it can come off as maybe like, why is everyone, like, attending meetings all the time and just voting and, you know, like doing community gardens and stuff? I think part of the story there is that when you have a world that is like, uh, much more fluid and when all the services are like easier to access, like if, you know, participating in government stuff was all like, we'd done the kind of like Estonia thing and digitized a lot of these processes,
00:21:55
Speaker
then maybe it would become easier to do this kind of citizen participation. And also, if you were living in a world where, like right now, our form of representation through government, you vote for representatives, and then maybe they pass laws that you like, maybe they don't, and really, it all depends on the median person who's elected. But if we were living in some sort of super fluid world of you're voting on the kind of value function, and then the laws kind of get changed right away, but via the prediction market system, then people might be much more motivated because they'd have more
00:22:24
Speaker
voice and genuine influence over how their world was going to look, they might participate more.
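That "vote on the value function, let markets change the laws" loop resembles Robin Hanson's futarchy proposal. A minimal sketch of such a decision rule, with purely illustrative policy names and market forecasts, might look like this:

```python
# A minimal sketch of a futarchy-style decision rule ("vote on values, bet on beliefs"):
# adopt whichever proposed policy the conditional prediction markets expect to score
# highest on the voted-on welfare metric. Policies and forecast numbers are illustrative.

def choose_policy(conditional_forecasts: dict[str, float]) -> str:
    """`conditional_forecasts` maps each proposal to the market's expected value of
    the agreed welfare metric if that proposal were enacted."""
    return max(conditional_forecasts, key=conditional_forecasts.get)

# Markets expect the welfare index to land highest under policy B, so it gets enacted.
print(choose_policy({"status quo": 102.0, "policy A": 101.5, "policy B": 104.2}))  # "policy B"
```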
00:22:40
Speaker
not sitting at a desk and focusing, but like, you know, moving through the world and doing it. And I could even imagine these kind of governance interactions being more like talking to your roommate about who takes out the trash. You can have these kinds of interactions that shape your environment in helpful ways that are worth doing, even if they might seem imposing with today's technologies and systems.
00:22:58
Speaker
Yeah. Or like I don't spend a lot of time going to community meetings, but I do spend a lot of time like just randomly talking about, you know, politics and stuff like that. So that, you know, there might be almost more of a blend into daily life of this future kind of like AI enabled advanced social technology.
00:23:14
Speaker
Yeah. And I think the one phrase that might sum a lot of this up is like the idea of building things together, whether that's on the small scale or the large scale, the large scale being these giant amazing construction projects and things like that. But the small scale is, you know, people are building the kinds of houses that they want to live in. They're building the kinds of neighborhoods. They're building the kinds of weekly get togethers and social connections that they want to have with each other and that they feel a lot of, you know, agency again in the process.
00:23:44
Speaker
They don't feel like the government is some distant thing outside of them. They feel like they are part of the government. They are making those decisions for their community, their neighborhood.
00:24:04
Speaker
By the end of this world, things are going pretty well. AIs have been recognized as having some rights and are generally empowering rather than harming humans, but this outcome was by no means assured. Humanity had to weather sudden military events like the AI-driven flash-crash war and find approaches to global coordination in the face of massive change and uncertainty. We had to work together to navigate these challenges and ensure a safe path to developing stable AI systems. I wanted to hear more about how these threats were approached in this world.
00:24:35
Speaker
Well, I wanted to zoom out a little bit and look at the higher level story in your world where we have this kind of pivotal moment that you're calling the flash crash war that really changed the way that AI is treated and the way that different parts of the international society kind of work together on it. Maybe you could describe a little bit of that.
00:24:57
Speaker
So the flash crash war was kind of a near miss type scenario where I'm imagining going from a world of kind of like decentralized AI technologies being developed by militaries, by private companies, by just all kinds of different groups.
00:25:13
Speaker
And we're kind of like racing toward the brink without that much top-down control. You know, it's changing the balance of different technologies, including like military technology. So it's like kind of destabilizing the world, you know, but then we end up with this scenario where you kind of just have
00:25:29
Speaker
AI sensors that are like misperceiving the other side, like so it's almost like the two sensor systems that are constructed by the militaries on each side of this like tense standoff situation, kind of like get into this unintentional like feedback loop of like sort of signaling to each other, which really is not that different from what happens oftentimes in like
00:25:48
Speaker
human-led wars, but you know, it could happen at a faster pace and with less understanding. And that kind of acts as like a trigger for the world to step back and be like, whoa, there's got to be more coordination over this, there's got to be kind of like international cooperation against this kind of common threat of this unstable technology.
00:26:03
Speaker
Yeah, I sort of did the second story as government people looking back on this experience. And one of the things I wanted to stress was, you know, nobody was like going in, you know, Warhawk, like, oh, we got to get the United States. Oh, we got to get China. It was all something that was a failure in these systems causing this kind of feedback loop. And I was thinking a lot about the Cold War. And I was thinking about, you know, people like Stanislav Petrov, who realized that
00:26:29
Speaker
what they were seeing was an error and prevented a world nuclear war. I think it's a very similar sort of thing here that both of these protagonists of this story were people who were involved in, you know, saying, okay, let's bring some human double checking to this. Let's let cooler heads prevail. And I think in a way, it's kind of like
00:26:48
Speaker
the overall outcome of that is that cooler heads not only prevailed, but that the cooler heads are in charge, and people who are thinking carefully about these kinds of risks and how to guard against this kind of scenario happening again, those are the people who are making this world as it is and who are creating all this international cooperation and unity.

Philosophical Concepts and AI Ethics

00:27:11
Speaker
Yeah, I thought a really interesting thing that you guys pointed out was how the Delhi Accords and the way everyone kind of agreed to work together on this and limit progress in a way that you could imagine would maybe be hampering to any individual country that thought they would be on top in the future. It was possible because of the uncertainty and the fast pace of all these changes because nobody could look into the future and tell what their position would be.
00:27:39
Speaker
Like, beyond this transition, right, like anybody could end up on top or not on top. And so it kind of allowed this even-footing situation where everyone was like, we gotta work together. Whoever is on top can't be too powerful, so let's all agree to some limitations here.
00:27:58
Speaker
There's a philosophical thought experiment. I forget who it was by, but just designing the perfect society in a way where you would not be able to tell, you would be reincarnated into that society and not able to tell what social class you would be, what race you would be, what gender you would be. So there would be nobody trying to get control of the society and enriching themselves. And that's sort of imagining a utopia. Yeah, I feel like it's a similar
00:28:24
Speaker
kind of concept on a global scale, if no country can predict exactly where they're going to be or exactly how this is going to affect them, they can kind of recognize the need for this mutual cooperation and really trying to get a bigger sense of perspective on what is possible and where these changes could potentially lead everyone and seeing themselves as more of a global unit, at least in that sense, while doing their own different kinds of experiments with it. Yeah.
00:28:52
Speaker
Yeah, this is the veil of ignorance idea. And the real world never gets the perfect situation where everyone's sitting in heaven before they've been incarnated and they're designing the Constitution. But there are situations that are more and less like that. So if you had come to me while I was still in college and been like, hey, Jackson, I figured out this great technology that designs space satellites really, really easily, so we won't need to employ nearly as many aerospace engineers, and we'll be able to launch so many more probes to the planets.
00:29:20
Speaker
college me would have been like, awesome, this sounds like a huge net win for humanity. I don't know, we can pay some benefits or unemployment insurance to all the aerospace engineers, but on net, this is huge. But current me, I'd be like, well, I don't know. Let's think about this. As an aerospace engineer, I'm going to petition my senator. So it's like, when it's obvious who the winners and losers are, then even when something is a net positive, people who are losing out, they're going to want to fight it.
00:29:48
Speaker
when everyone can see the consequences of a decision and see how it's all going to turn out, then it's a little bit harder to make just sort of general positive sum deals versus when there's like a lot of uncertainty. I love this veil of ignorance concept because it's such a nice way to turn what seems like a really scary situation where there's so much uncertainty and fast change and like some people might end up in much worse situations than before these changes happen.
00:30:11
Speaker
into kind of a hopeful breeding ground for good collaboration and good faith efforts to make a good future for everyone. It's a really cool twist. Kind of making a common enemy out of our own ignorance, if you will. Yeah, totally.
00:30:26
Speaker
Yeah, and I think a lot of the things that we imagined might go into a kind of international coordination on AI agreement were things that were trying to kind of mitigate that uncertainty, right? So maybe AI technology would make the economy way, way more unequal and just create this kind of lords and peasants situation where the people who are controlling the technology, they can do everything, just accrue all the resources. So maybe you kind of pass a sort of international law that says, hey, we're going to
00:30:55
Speaker
We're going to mandate that the level of inequality in our economies doesn't exceed a certain amount so that if it gets bad enough, then we'll just start taxing the top and paying it out to the bottom just to make sure that things don't go too crazy. Or another one would be on the international stage. We're going to have joint collaboration, like the inspections that people do of nuclear programs of different countries, but even closer collaboration. There's going to be a unified project. We're going to decide as a species how we're going to use AI rather than
00:31:24
Speaker
racing and, like, you know, if America gets it first and they get to do an American singularity, and like if China gets it, a Chinese singularity, and so forth. You know, nobody would, you know, want that.
00:31:35
Speaker
Yeah. And another, another tool that you use in your world to kind of control things and slow things down is just like controlling the pipeline of, um, like the chips and other parts of these advanced computing systems that are necessary to actually develop them. And also kind of having the most advanced AI systems in these cages of research labs, which is featured in Holly's second story.
00:32:00
Speaker
I thought that was a really interesting story thing too. I found myself thinking about these three potentially superhuman AIs that are just kind of hinted at in that story and what their experience would be like. I mean, if you're making a lab to hold these potentially dangerous creatures, intelligences that you're creating, you probably don't want to let them know that they're in a lab.
00:32:23
Speaker
against their will. So then you start to imagine that you're this being that's awakening in a world that you slowly maybe realize is like an invisible cage. I don't know, it's interesting. Yeah, I'd love to hear some more thoughts like how that could go and what those experiences would be like of those AIs. I think originally I was sketching in my earliest concept sketches for stories that I might write in this world. I was thinking about the AIs internal experience a lot more.
00:32:51
Speaker
And that's sort of more the subject of stories that I'm writing for other projects is what is the internal experience of robots. It's kind of harkening back to Asimov and all of his great work with robots. But I think one concept I was floating around is maybe looking at an interview with one of
00:33:11
Speaker
maybe not the super intelligent AIs, but like the very intelligent AIs. And I think the idea I had is that these AIs, part of the alignment problem might be getting them very interested in ethics.
00:33:26
Speaker
having a sense of their own responsibility and what the right thing to do is, you know, in as much as their consciousness can be described as being like our own. And yeah, I think it's a very uncertain future on a certain level at the end of the story. But sort of like, just as we started doing the Delhi Accords, and we went forward with hope that, you know, we could get a handle on this thing, and we did, you know, the hope is that the superintelligences that are coming up down the pipeline will have

Societal Resistance to AI and Democratic Power

00:33:53
Speaker
a positive impact and that a lot of smart people will work together to figure that out. Hopefully, the superintelligences aren't offended at being walled off for a while. I almost imagine them as children in the process of growing up, and when they can finally hit their adulthood, then they're ready to take on the world and to be part of the world and to be responsible citizens. That'd be the ideal scenario, anyway.
00:34:21
Speaker
And hopefully when they look back, they understand why we didn't let them play with the nuclear buttons. We start talking about, you know, being nice to the other kids really early on. Yeah. It's also interesting to me, like even outside of this sort of adversarial or like caution based approach of just keeping them safe, if everything pans out really well and these AIs are like somewhat smarter than us and they're super aligned, they want what's best for us. Like they love us. They're like our children in some way.
00:34:46
Speaker
What if they can't fulfill our hopes for them? What if they're just slightly smarter than us and we're all looking to them and they're like, God, they have no idea that I don't know what I'm doing. And we want them to create something even they don't understand. And how are they going to do that? You know, there's also an interesting story there of the best case scenario.
00:35:02
Speaker
Right, right, right. It's sort of a leadership position, you know, people are looking to people in positions of power and, you know, it's like, I want you to, you know, protect the country to solve my, you know, economic problems, to take care of my crops. And meanwhile, this king is just sitting there and it's like, well, I can't explain to you all the reasons why that won't work or I could try, but it might be a little bit hard. So,
00:35:25
Speaker
Yeah, I don't know. I think we might undergo some difficult conversations. But again, I almost feel like the parent-child metaphor, only in this case, I guess the AI is taking the role of the parent, saying, you know, like, okay, I have more perspective on this than you. And, you know, here's how I can try to help you understand this at the very least.
00:35:48
Speaker
Yeah, one thing that's kind of an assumption in this whole concern about AIs like taking off too quickly is that people will want them to or like will be okay with handing over power. And one thing I'm curious about is like, could you imagine a world where that doesn't happen? What if we have like liquid democracy and all these kind of empowering things that let people make policy changes and then people look at AIs and they're like, yeah, I don't trust them. Even if like they're proven to be accurate, you know, like the way that a lot of science is overall today, some people might still turn their backs and be like, I don't want to do that.
00:36:18
Speaker
And what if we use those new forms of democratic power to just kind of shut it all down and find ourselves in some kind of dead end where we no longer progress technologically?

Governance as Technology and Societal Development

00:36:26
Speaker
One thing I've been thinking a lot about is conveying to people a sense of agency in the process, and that AI is a technology that can be helpful to them. It needs to show its results, show that it is worthy of being put in that position of being trusted, basically. But I think there's two different ways that it can be viewed. It can be viewed as an imposition. I think maybe one of the dangers is things are so government controlled in our stories that there could be a danger of seeing it as an imposition from the government.
00:36:56
Speaker
But ideally, what we would want to have is a sense of empowerment and that these are tools, AI as prosthetic, you know, as something that makes you able to do things that you couldn't before. And so I think there is a real task to be done, you know, conveying this idea of AI as prosthetic, conveying this idea of AI as being something empowering and something that is having a positive impact.
00:37:23
Speaker
You can have all sorts of ideas but even if they're good ideas, they might not be conveyed to people. I think we recently had the failure or current failure anyway of Mark Zuckerberg's metaverse. It seems like it's not really taking off the way that he wants it to and I think part of that is because people didn't see or haven't seen yet that there is something in there that they might want to find compelling. So I think there is something to be done and I'm not sure
00:37:52
Speaker
if this is the role of the government or other institutions or maybe everybody together to kind of, you know, this has to be an artificial intelligence that is, you know, working for us and helping us meet our personal goals, you know, that can be trusted. But there's a role to be played of sort of spreading that message and communicating to people that they can do great things and feel empowered by this technology.
00:38:24
Speaker
Improved governance tools aren't always top of mind in science fiction. It's pretty easy to imagine a futuristic gun or high-tech power plant. But what about a super-powered democracy? In this world, social systems are clearly presented as technologies, and their development plays a key role in the storytelling. I wanted to hear about Holly and Jackson's inspirations, and their thoughts about why this kind of story may be somewhat rare in popular depictions of the future.
00:38:49
Speaker
Often when governments are featured in fiction, they're typically like one end or the other. They're like mostly dystopian or mostly utopian. And your world really positions government and policy as tools that can be developed, just like AI or any other technology could be. And I'm curious if there are any other examples that inspired you to think of it this way or how you see this kind of treatment happening in our culture.
00:39:15
Speaker
Yeah, I mean, there's so many. I think we do tend to go to extremes, and we tend to, yeah, we tend to depict big societies that are absolutely dystopian or absolutely utopian. I think probably one of my biggest inspirations would be The Dispossessed by Ursula K. Le Guin. The subtitle of it is 'An Ambiguous Utopia'. And I like that a lot because what she did is she basically portrayed, you know,
00:39:44
Speaker
the anarchist, somewhat, you might say, communistic society that evolves and emerges after this revolution on this moon. She shows that society. And what I love that she did is she said, okay, I'm going to try and create the utopia that would be most satisfying to me.
00:40:02
Speaker
And then I'm going to kind of pick it apart and show, you know, how there would be people who are discontented within the system, and, you know, what are the sort of limitations of any particular idea of this utopia. So even in utopia there is going to be logistics, there's going to be challenges along the way to getting there.
00:40:23
Speaker
For the flip side of this, you could think about something like the series Terra Ignota, also known by the name of the first book, Too Like the Lightning, which is this very wild and bizarre future and maybe sort of inspired some of the sense of, you know, outlandishness that I've been trying to cultivate.
00:40:40
Speaker
And in that society, it's kind of like a development of all the affinity cities that Jackson's been talking about. You have these giant cultures that anyone can join around the world, and they have become completely unmoored from geography, and they have different principles. For instance, you can have one that's like absolute authoritarian, except that you can leave at any time, and then you have another one
00:41:02
Speaker
which is fun, right? And then you have another one that's like, about like, cultivating human excellence. And you have another one that's like caring. And then you have one that's basically, you know, structured like a business. And you have all these beautiful systems. And then we slowly see over the course of these novels how the system falls apart. And it's not anything that any one person wants; every single person involved wants it to continue. But it's not the individual people involved so much as,
00:41:32
Speaker
oh, the systems, and that there are these sort of historical forces at work that are larger than any one person. So I guess that's kind of, you know, a lot of different angles on this. But in terms of thinking about utopia or dystopia, I guess I want to stress that utopia is made by people, that there is, you know, there is an effort to create it and there's
00:41:55
Speaker
It just is like we're trying to solve problems of governments today and the governments that we do have would be wildly surprising and outlandish to people hundreds of years ago. We're trying to create the governments of the future and that takes people collaborating and multiple people working together. It's not just individual efforts.
00:42:16
Speaker
Yeah, yeah. Both of the examples you gave have this kind of pluralism where you have at least two very different models of governance that are kind of coexisting and people might be able to go back and forth between them. And it strikes me that's kind of one end of a spectrum of what the future could look like.
00:42:31
Speaker
You could imagine we'd all come together into some kind of like global society that mostly has the same rules. Or as in your story, we'd have this continuing diversity of like different experimental models going alongside each other. Do you have any thoughts about like which of those is most likely in the long run or which of those would be more desirable for you?
00:42:52
Speaker
Yeah, I mean, there's definitely, like, some aspects of our story that are almost, like, totalitarian, right? Like we have this kind of, like, global coordination between all the governments that are, like, suppressing AI technology in order to develop it safely, you know, like controlling the semiconductor supply chains and all these sorts of things, and then other elements that are, like,
00:43:11
Speaker
hyper-libertarian or anarchic in terms of just having laws get passed automatically based on what the markets say will optimize this function that's being voted on by all the citizens or being able to leave and join all these different communities. I'm not sure in terms of what's likely for the future.
00:43:33
Speaker
One of the constraints that was driving a lot of the story, like I mentioned earlier, was trying to imagine the transitional story of like, this seems like a really difficult challenge. How do we get from here to a bright future that's also kind of strong enough and good enough at decision making, executing on ideas that we could handle this challenge? So I think a lot of
00:43:56
Speaker
fictional, like, examples of different government systems are almost kind of portraying, like, an end state, or like, you know, they're portraying, like, an extreme position. Versus, so one of the stories that I was inspired by, Eliezer Yudkowsky has this book Inadequate Equilibria, just like a basically nonfiction book in which he kind of just, like,
00:44:14
Speaker
rages about the ways that different social institutions are broken in the real world. And then he also has this setting from a long internet fiction that I haven't read, but it's called dath ilan. And this is this fantasy of a world in which
00:44:32
Speaker
people were naturally just way better at coordination than they are in our world. And so they're able to solve a lot of problems that we can't in our world. So for instance, in our world, maybe we have too many veto points around the construction of new infrastructure, right? Because somebody is in the way of that, and how do we properly compensate them for the fact that we're going to change their neighborhood or something. But in the world of dath ilan, everybody would sign a bunch of
00:45:00
Speaker
crazy contracts like in the John Rawls thought experiment and then it would all go according to plan because everyone is just born with a John Rawls tier ability to coordinate. So I was inspired by that world because so many of the world's problems today seem to be coordination problems. We're stuck in these arms races, like literal arms races between different nations and
00:45:22
Speaker
technology capability arms races with AI that make it hard to slow down and be like, whoa, whoa, whoa, we gotta, like, figure out alignment before we crank up power on this. So in order to imagine, like, okay, how do we get from here to, like, the world of dath ilan,
00:45:37
Speaker
I'm coming out from a position of doubt. I have ideas that I'm excited about, like prediction markets and affinity cities, but these things haven't been tried and there's a lot of kinks that need to be worked out. Part of my excitement about the decentralized, more anarchic world where there's lots of different options and things coexisting is because that's a world that's full of experimentation with different social systems and institutions.
00:46:02
Speaker
Yeah, the diversity of what's going on in your world is kind of dazzling. I mean, it's hard to even touch on all the different concepts that are playing out at the same time. And I feel like maybe that's part of the reason it's hard to imagine stories like this, like in popular media. I mean, even The Dispossessed is like these two societies that are being weighed against each other, and that's like a whole book about it. And I haven't read Terra Ignota, but it sounds like there would be a lot of exposition that needs to be done to figure out what's going on in all these different varied worlds. Yes.
00:46:31
Speaker
That is the challenge to starting it the first time: you're having so much thrown at you. But it could be an interesting exploration, I think, of maybe we have this very diverse world that's coordinated. OK, how do we break a very diverse world that's coordinated? What does it look like when that kind of system fails, when the coordination system fails? I think that's one of the many questions that that series is exploring. That's very interesting. But that story can't be in the background. You can't have that
00:46:59
Speaker
be the background and then have like Star Wars on top of it. It's just too much. I really like this concept of using democracy and different forms of governance as like tools and experimenting with them. And one example in the real world that I'm pretty excited about is in Taiwan. Are you familiar with Audrey Tang, the digital minister of Taiwan? Yeah, definitely. Yeah. So she's doing a bunch of really cool things just in improving the transparency and accessibility of the Taiwanese government.
00:47:27
Speaker
empowering people to make their own apps to interact with the government and also working on social media platforms that will help people figure out where they have overlap and alignment in their positions on things. So this is an example of somebody who's really trying this stuff out. And I believe she's actually a conservative anarchist. She says she doesn't want states in the future, but this is the best she can do of contributing to the empowerment of all the citizens of Taiwan.
00:47:55
Speaker
Yeah. Are there any other examples like that or do you have any thoughts about her work? I'm just curious. I'm just learning about her just now, but I'm finding her fascinating. I'm fascinated particularly by the fact that she's an anarchist and we're coming in way in favor of state action here. But at the same time, we both have this sense that these technologies can be tools to empower people and to give them a sense of agency.
00:48:22
Speaker
And I think I really love the radical transparency idea and the way that she lives her life, according to transparency, because I feel like that could go a long way, you know, to helping people have a sense of agency with AIs and to having a sense of agency with governments, feeling like more participants in the process and fitting more with the ideas that we've been talking about, liquid democracy. You know, I think making people feel like they are
00:48:49
Speaker
moving and part of the system and not in an illusory way, but like, you know, actually giving them a role in moving and shaping of the world. I think that's just fantastic. Yeah. Hmm. Yeah. I mean, I think there's a whole world of like, as you can tell from the content of my story, I think that this sort of like experimentation with, um, with different forms of, of government and institutions is totally underrated.

Optimism in Worldbuilding and Institutional Decision-Making

00:49:13
Speaker
I live in Fort Collins, Colorado, where I was very happy that our city recently passed ranked choice voting,
00:49:18
Speaker
which is just a slightly different voting system; I think New York City and several other states and places have adopted this, or approval voting, which is similar. You're able to provide more information to the election system by ranking your favorite candidates. This has all these downstream consequences of maybe promoting more moderate, consensus candidates, because it's worthwhile to be people's second choice. Yeah, you don't have to just pick your favorite and that's it.
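To make the counting rule concrete, here is a minimal sketch of an instant-runoff tally, the most common form of ranked-choice voting. The candidate names and ballots are hypothetical, and real election rules add tie-breaking and ballot-validity details that this sketch leaves out.

```python
def instant_runoff(ballots):
    """Minimal instant-runoff tally. Each ballot is a list of candidate
    names ordered from most to least preferred. Repeatedly eliminate the
    candidate with the fewest top-choice votes until someone holds a
    majority of the ballots still counting."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tallies = {c: 0 for c in candidates}
        for ballot in ballots:
            # Count each ballot toward its highest-ranked remaining candidate.
            top = next((c for c in ballot if c in candidates), None)
            if top is not None:
                tallies[top] += 1
        total = sum(tallies.values())
        leader = max(tallies, key=tallies.get)
        if tallies[leader] * 2 > total or len(candidates) == 1:
            return leader
        # No majority yet: drop the weakest candidate and recount.
        candidates.discard(min(tallies, key=tallies.get))

# Hypothetical three-way race: "B" wins only because the eliminated
# candidate's supporters ranked "B" second, so being many voters'
# second choice genuinely matters.
ballots = [["A", "C", "B"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 2
print(instant_runoff(ballots))  # -> B
```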
00:49:47
Speaker
Yeah, yeah. I mean, that's one that has a lot of real-world implementation, and it's popular and spreading these days, which is really great. But I think there are so many other things, and sometimes people can get into this kind of end-of-history mode where they're like, okay, it's all about defending democracy.
00:50:05
Speaker
We have the final form: it's first-past-the-post voting, and then there are different houses of Congress, you have a Supreme Court or something, or maybe a parliament, I don't know. We figured it out. These guys in the 1700s, they were just real smart. And now we just need to make sure that we never, ever, ever backslide into authoritarianism.
00:50:23
Speaker
And I'm like, there are two sides to this. What about exploring democracy, or going further in the direction of democracy? There are just a lot of different ways to do things. Right now, our whole conception of local participation, or local government, is often based on a few people showing up to in-person meetings and almost kind of overruling the vote of the majority if they want to stop construction somewhere.
00:50:48
Speaker
We just chose to create that system, giving a lot of voice to the people who show up to meetings. But we can imagine creating totally different systems. So one thing that I'm interested in is this idea of quadratic funding. Glen Weyl is involved in this, and also Vitalik Buterin, one of the leading creators of the Ethereum cryptocurrency. Quadratic funding is a kind of funding mechanism that's designed to solve the following problem.
00:51:15
Speaker
Normally, people don't have much of an incentive to give to charities, even local charities that they themselves benefit from, like their library, because there's a free-rider problem. The way that we solve that now is that we have local governments, everyone is forced to pay taxes, and then a council decides how the tax money should be spent in the budget. But you could imagine doing this almost algorithmically: everyone pays taxes into a matching fund, and then individual people
00:51:40
Speaker
make charitable-style donations to local organizations in their community, like firefighting or police or landscaping or whatever. Then the donations are multiplied by the matching fund in a way that's determined by this quadratic algorithm, based on how many people donated. It's a kind of technocratic, detailed thing, but it's a really exciting proposal that kind of turns
00:52:02
Speaker
this seemingly insoluble free-rider, tragedy-of-the-commons problem into a way to actually encourage people to give community funding to the things that are helping people the most in their daily lives. So it's sort of making charitable contributions a little bit more like voting. It's recognizing that even small donations count:
00:52:21
Speaker
they're a signal to the system that, hey, this organization has a lot of broad support, it's providing value to a lot of people. Versus if you just have one person giving a ton of money, that is not matched as much by the system, because, well, it's probably just that one person getting the benefit from it and not the whole community, so there's less need for a community subsidy of it. In a kind of big-picture philosophical sense, it's almost similar to prediction markets, because both of these are kind of
00:52:47
Speaker
living on a spectrum between everyone-gets-one-vote voting power and a kind of economic world where, well, some people care more, or they have more information, or they benefit more from it, so they're going to spend more. There are a lot of situations where you want to use the information of different people giving different amounts or caring different amounts about different issues, but you don't want to give all of the power to that, so you kind of want to interpolate between equal voting power and these different weighting schemes. Yeah, that makes sense.
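To make the matching rule concrete, here is a minimal sketch of quadratic funding, assuming the standard formulation from Buterin, Hitzig and Weyl in which a project's target funding is the square of the sum of the square roots of its individual donations. The project names, amounts, and the proportional scaling used to fit the matching pool are illustrative assumptions, not a description of any particular deployment.

```python
import math

def quadratic_funding(contributions, matching_pool):
    """Sketch of a quadratic funding round. `contributions` maps each
    project to a list of individual donation amounts. A project's ideal
    funding is (sum of square roots of its donations) squared, which
    rewards many small donors far more than one large donor. The
    matching pool tops each project up toward that ideal, scaled down
    proportionally if the pool is too small to cover every subsidy."""
    raw_match = {}
    for project, donations in contributions.items():
        ideal = sum(math.sqrt(d) for d in donations) ** 2
        raw_match[project] = max(ideal - sum(donations), 0.0)

    # Scale subsidies so they never exceed the available matching pool.
    total_match = sum(raw_match.values())
    scale = min(1.0, matching_pool / total_match) if total_match else 0.0

    return {
        project: sum(donations) + raw_match[project] * scale
        for project, donations in contributions.items()
    }

# Hypothetical example: 100 dollars from 100 donors attracts a large
# subsidy, while 100 dollars from a single donor attracts none, because
# breadth of support is read as a signal of broad community benefit.
result = quadratic_funding(
    {"library": [1.0] * 100, "private_garden": [100.0]},
    matching_pool=5000.0,
)
print(result)  # library ends up near 5100, the garden keeps only its own 100
```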
00:53:23
Speaker
The process of worldbuilding has great potential to make a positive future feel more attainable. This can be incredibly powerful, whether you're a creative person looking to produce rich works of fiction, or have a more technical focus and are looking to reach policymakers or the public. I asked Jackson and Holly what kind of impact they hoped their work might have on the world.
00:53:42
Speaker
So you've created this entire rich imagining of what the future could be. What would you most like to see come of this world being shared with the broader public? I think I would like to see people adapt this into other forms of media, or tell their own stories, or just take away some ideas from it.
00:54:05
Speaker
I hope that my story, as optimistic as it is, communicates how I feel about the seriousness of the challenge of safely developing artificial intelligence. I tried to cram in a lot of the things that I find most inspiring and that make me most excited about the future.
00:54:24
Speaker
I would hope to give people a lot of different ideas for ways that we might improve institutional decision-making, so that we can get better at controlling new technologies and making sure that they're deployed for the broadest benefit of humanity. And also just some concrete ideas about what that control of AI might look like. Hopefully, we get alignment right early and we never have to do a global, non-proliferation-style program of tracking semiconductor production and stuff.
00:54:53
Speaker
But I don't think it's hopeless, the way people in the effective altruism and rationalist world can sometimes make it seem, like we've got to get it right early because, you know, by the time Congress gets involved we're just screwed. I want to say, no, there's a plan B here of trying to build a world that's strong enough that it can manage these challenges.

Conclusion: Reflecting on Optimism and Future Possibilities

00:55:12
Speaker
And for me, I would love it if one thing people take away from this is the idea of creating these communities, and that there can be these opportunities for joy, opportunities for working together and living together and creating stuff together. There are opportunities for this very vibrant, exciting world. And right now, I think there's a lot of pessimism about technology, maybe just because that's the way people have experienced it in some ways. But I feel like
00:55:41
Speaker
we do have to think very carefully about what our goals are, and I think that's something that comes out of the stories as well: we are thinking about what target we're trying to hit, what kind of world we're trying to have, trying to experience. And
00:55:58
Speaker
what I've sort of discovered in the process of writing is this vision of a world where people really have this sense of passion and enthusiasm and feel like they have community and meaningful, fulfilling, rich lives. And I'm starting to think now that there is a way to get there, that we can have this if we think carefully about trying to go after it, if we don't just go forward blindly, but if we go forward
00:56:26
Speaker
deliberately and say, yeah, that's the target. That's our happily-ever-after that we want to get to: this really fun and exciting place to be.
00:56:34
Speaker
That seems to me like a call for more sociology and psychology and those kinds of disciplines to be involved in imagining the future. Is that something you want to see more of, or are there other types of expertise you'd want to bring in? Yes, absolutely. I would be delighted to see psychologists and sociologists get involved. Bring on us humanities folks. We want to help. We want to shape the world, shape the future.
00:56:58
Speaker
Yeah, I would love to see more social sci-fi. I also think that a lot of science fiction by default tends to have this very pessimistic take. Often you end up in this kind of Black Mirror style of storytelling, where it's like, okay, let me come up with a cool new technology, then let me imagine some counterintuitive bad results of that technology, and then let me tell a story about that.
00:57:17
Speaker
And I think that oftentimes pessimistic science fiction stories feel pretty justified; it feels reasonable sometimes to be pessimistic about the broader world. But it doesn't seem that helpful to just be warning about stuff and not trying to plot out a positive course. It seems a little more action-guiding, or inspiring, to give people
00:57:39
Speaker
ideas that might work and try and tell an optimistic story about directions that we might want to go in rather than just how we might pachinko around in the landscape of history.
00:57:56
Speaker
I actually take positivity from history, in a weird way, in that a lot of people at the time, and you see this with the thinkers of the Enlightenment and the Renaissance, had a hard time imagining a positive future. They were really caught up in the issues of their day. In the Renaissance it was the infighting of the city-states and the plague; in the Enlightenment it was the endless authoritarianism of the kings, which seemed
00:58:19
Speaker
permanently insurmountable. And now, in this day and age, we've surmounted it. That was not a perspective that Voltaire could have had at the time. But I think there are reasons to get outside our concerns of the moment and look towards the really big picture. And I think it's definitely possible that new and exciting things can unfold, even if right now we find ourselves in a valley of doubt and stress.
00:58:45
Speaker
Yeah, it's like our world today wasn't built by the people who were complaining that the Spanish monarchy was absolute crap. It was built by the people who were thinking, oh, what if you had a Congress? By people who were trying to find a way forward even in what seemed like a hopeless situation.
00:59:03
Speaker
I like that. That is really, really uplifting to hear, Holly, about the pessimists of the past being somewhat proven wrong by our current level of success. It really is encouraging, in a weird, roundabout way: you didn't see the big picture. And I think it's good to look to the long term and to cross our fingers for the long term. And I do feel like
00:59:26
Speaker
the historical forces that I see moving the world, a lot of them are winds of positive change. So I think I have reason to be optimistic, if not certain, then definitely optimistic. Nice. Yeah, we've covered a lot of ground. This is a really rich conceptual world, and I'm excited for our listeners to go and check it out on the website and learn more. You have a lot of nice hyperlinks in your submission that I also enjoyed diving into. So yeah, thanks so much for chatting about your world with us.
00:59:55
Speaker
Oh, it's been an absolute pleasure. It's been great nerding out and thinking about all the possibilities that might come down the pipeline. Yeah. Well, thanks so much for having us on the podcast. It's been a super fun conversation. Definitely go check out those hyperlinks, and yeah, I love what you guys are doing here at the Future of Life Institute.
01:00:23
Speaker
Our guests today were Holly Oatley and Jackson Wagner. You can see more of Holly's short fiction, including a novella, on her Tumblr, which is called Aspiring Keymaker. If you'd like to hear more of Jackson's ideas, check out his new blog called Nuka Zaria, that's N-U-K-A-Z-A-R-I-A, where he offers insights into new cause areas that might be worth charitable investments.
01:00:44
Speaker
Their third teammate, Diana Gurvich, created the digital mural accompanying their submission. You can see more of Diana's work on her Instagram, Mr. Underscore Dirtlord, where she shares her sunny, pensive gouache paintings and some playful ceramic works.
01:01:04
Speaker
If this podcast has got you thinking about the future, you can find out more about this world and explore the ideas contained in the other worlds at www.worldbuild.ai. And we want to hear your thoughts: are these worlds you'd want to live in?
01:01:18
Speaker
If you've enjoyed this episode and would like to help more people discover and discuss these ideas, you can give us a rating or leave a comment wherever you're listening to this podcast. We read all the comments and appreciate every rating. This podcast is produced and edited by WorldView Studio and the Future of Life Institute. FLI is a nonprofit that works to reduce large-scale risks from transformative technologies and promote the development and use of these technologies to benefit all life on Earth.
01:01:41
Speaker
We run educational outreach and grants programs and advocate for better policymaking in the United Nations, US government, and European Union institutions. If you're a storyteller working on films or other creative projects about the future, we can also help you understand the science and storytelling potential of transformative technologies.
01:01:58
Speaker
If you'd like to get in touch with us or any of the teams featured on the podcast to collaborate, you can email worldbuild at futureoflife.org. A reminder, this podcast explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we all want. The ideas we discuss here are not to be taken as FLI positions. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
01:02:26
Speaker
Thanks for listening to Imagine a World. Stay tuned to explore more positive futures.