Imagine: What if narrow AI fractured our shared reality?

Imagine A World

Let’s imagine a future where AGI is developed but kept at a distance from practically impacting the world, while narrow AI remakes the world completely. Inequality sticks around and AI fractures society into separate media bubbles with irreconcilable perspectives. But it's not all bad. AI markedly improves the general quality of life, enhancing medicine and therapy, and those bubbles help to sustain their inhabitants. Can you get excited about a world with these tradeoffs?

Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

In the seventh episode of Imagine A World, we explore a fictional worldbuild titled 'Hall of Mirrors', which was a third-place winner of FLI's worldbuilding contest.

Michael Vassar joins Guillaume Reisen to discuss his imagined future, which he created with the help of Matija Franklin and Bryce Hidysmith. Vassar was formerly the president of the Singularity Institute, and co-founded MetaMed; more recently he has worked on communication across political divisions. Franklin is a PhD student at UCL working on AI Ethics and Alignment. Finally, Hidysmith began in fashion design, passed through fortune-telling before winding up in finance and policy research, at places like Numerai, the Median Group, Bismarck Analysis, and Eco.com.

Hall of Mirrors is a deeply unstable world where nothing is as it seems. The structures of power that we know today have eroded away, survived only by shells of expectation and appearance. People are isolated by perceptual bubbles and struggle to agree on what is real. This team put a lot of effort into creating a plausible, empirically grounded world, but their work is also notable for its irreverence and dark humor. In some ways, this world is kind of a caricature of the present. We see deeper isolation and polarization caused by media, and a proliferation of powerful but ultimately limited AI tools that further erode our sense of objective reality. A deep instability threatens. And yet, on a human level, things seem relatively calm. It turns out that the stories we tell ourselves about the world have a lot of inertia, and so do the ways we live our lives.

Please note: This episode explores the ideas created as part of FLI’s worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI endorsed positions.

Explore this worldbuild: https://worldbuild.ai/hall-of-mirrors

The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

Media referenced in the episode:

https://en.wikipedia.org/wiki/Neo-Confucianism
https://en.wikipedia.org/wiki/Who_Framed_Roger_Rabbit
https://en.wikipedia.org/wiki/Seigniorage
https://en.wikipedia.org/wiki/Adam_Smith
https://en.wikipedia.org/wiki/Hamlet
https://en.wikipedia.org/wiki/The_Golden_Notebook
https://en.wikipedia.org/wiki/Star_Trek%3A_The_Next_Generation
https://en.wikipedia.org/wiki/C-3PO
https://en.wikipedia.org/wiki/James_Baldwin

Transcript

Introduction and Slow AGI Development

00:00:00
Speaker
on this episode of Imagine a World.
00:00:02
Speaker
My best guess is that AGI will progress much more slowly than I have it progressing in my story. And my best guess is that we do survive because AGI progresses much more slowly. From my perspective, it's extremely contrived for AGI to develop even as fast as it does in this story and be handled well enough, cautiously enough, thoughtfully enough, that we have more than a fraction of a percent chance of survival.

Imagine a World Podcast Overview

00:00:36
Speaker
Welcome to Imagine a World, a mini-series from the Future of Life Institute. This podcast is based on a contest we ran to gather ideas from around the world about what a more positive future might look like in 2045. We hope the diverse ideas you're about to hear will spark discussions and maybe even collaborations. But you should know that the ideas in this podcast are not to be taken as FLI endorsed positions. And now, over to our host, Guillaume Reisen.
00:01:15
Speaker
Welcome to the Imagine a World podcast by the Future of Life Institute. I'm your host, Guillaume Reisen. In this episode, we'll be exploring a world called Hall of Mirrors, which was a third-place winner of FLI's worldbuilding contest. Hall of Mirrors is a deeply unstable world where nothing is as it seems. The structures of power we know today have eroded away, survived only by shells of expectation and appearance. People are isolated by perceptual bubbles and struggle to agree on what's real.
00:01:43
Speaker
Despite all this, things are generally going okay, for now. This is partly due to this world's particularly slow and modest development of AI technologies.

Meet the Creators of 'Hall of Mirrors'

00:01:52
Speaker
AI tools here are still dominated by extensions of today's fundamentally narrow systems, with the one true AGI being developed under heavy quarantine. There are a number of reasons for this slow progress, including high computational costs and poor funding due to politicization.
00:02:08
Speaker
This team put a lot of effort into creating a plausible, empirically grounded world, but their work is also notable for its irreverence and dark humor. I can safely say that it's the only winning world where you could see virtual celebrity Tupac List perform at a luxury war-themed amusement park run by the Taliban. Needless to say, there's a lot going on here. I was excited to get a look into the minds behind this particularly brimming and erratic world.
00:02:32
Speaker
Our guest today is Michael Vassar, one member of the three-person team who created Hall of Mirrors. Michael is a futurist, activist, and entrepreneur with an eclectic background in biochemistry, economics, and business. He served as president of the Machine Intelligence Research Institute and is co-founder of MetaMed Research.
00:02:50
Speaker
His other team members were Matija Franklin, a doctoral student studying AI ethics and alignment at University College London, and Bryce Hidysmith, who has worn many hats from fortune telling to modeling and now has a focus on finance and policy research. Hey Michael, great to have you with us. Great. Good to speak to you.
00:03:08
Speaker
So I'm curious how the three of you on your team came to work on this project together. So I've known Bryce for a very long time. And when the project was starting up, there was a call for collaborations. And I tried talking to a bunch of people. And Matija and I had the most productive conversations.
00:03:29
Speaker
But the overall project was mostly my vision, with Matija doing some level of editing, and Bryce did the fiction and art. Cool. So did Bryce make the music that was accompanying your submission? Yes. Cool. Yeah, I really enjoyed your music and the short stories as well. He did a great job with those. I mean, he's the closest thing to a superintelligence that we have around for now. Adorable. Well, what was it like for you guys to do this project together? Did you learn anything yourself in the course of it?
00:04:00
Speaker
I mean, I had a lot of fun. It helped me to concretize some of my thinking. I feel like the basic sense of where I think we're going, or would like to go, has been reasonably stable in my head since GPT-3 came out, and hasn't drastically changed since GPT-2 and COVID.

Inspiration and Perspectives of Michael Vassar

00:04:22
Speaker
Yeah. What were some of your biggest sources of inspiration when you were working on this together?
00:04:28
Speaker
I don't think my thinking on this is significantly influenced by stories or books or music or what have you. I think it's basically just coming from looking at what the technology can do, and from spending the last 25, 30 years obsessively thinking about history and the economy and social sciences, and making some effort to understand the technology, though I'm certainly not a top expert who actually understands the technology well.
00:04:58
Speaker
I will humbly claim to be a top expert in understanding the history of technology as it relates to economics. Yeah. Well, you do have this deep professional background. Can you say a little bit about how your experience in other fields and kind of working through all this has influenced how you see the future?
00:05:17
Speaker
I mean, in terms of professional background, molecular bio is what I studied in university, and it doesn't really inform this very much. I have a lot of thoughts about cool things that could be done with molecular bio. And now that GPT-4 is performing at a high school national championship level without major upload enhancements,
00:05:43
Speaker
I'm confident that I can do a lot more of that stuff. Also, AlphaFold is very cool and mRNA tech is very cool. I think there are enormous opportunities now for bio. Getting an MBA gave me an opportunity to exist in the business office world for a while, and that certainly is necessary. Without having interacted with corporate hierarchies, one doesn't know what corporate hierarchies are like at all. There's very effective disinformation and propaganda about that.
00:06:13
Speaker
I think mostly I've just read a lot in directions that seemed like they could be helpful for maybe a 25, 30 year period. Yeah. What sorts of insight did Bryce and Matija bring to the project? So the actual stories were very cool and the music was very cool. And Bryce wrote those mostly by himself. And there was some back and forth about what sorts of things were
00:06:42
Speaker
maybe too over the top or too fun and silly to include in the story. And, you know, it's just good to talk to people about things and develop the ideas together. And certainly Bryce has been just enormously central to developing my understanding of the world in general over the last decade.
00:07:00
Speaker
And what about Matija? Matija, I mean, mostly just discussing what I can get away with when telling a story. What is too weird? What is socially acceptable enough that people can understand it as being at a relatively limited inferential distance from normal, thoughtful people?

Exploring 'Hall of Mirrors' and AI Impact

00:07:19
Speaker
Yeah.
00:07:28
Speaker
In some ways, this world is kind of a caricature of the present. We see deeper isolation and polarization caused by media, and a proliferation of powerful but ultimately limited AI tools that further erode our sense of objective reality. A deep instability threatens. And yet on a human level, things seem relatively calm. It turns out that the stories we tell ourselves about the world have a lot of inertia, and so do the ways we live our lives.
00:07:54
Speaker
I had a hard time picturing those individual lives among all the wild happenings of this world, and I wanted to hear more about that human perspective from Michael. What's it like to live in this world you've made? Well, it's going to be very different in different media bubbles. The biggest media bubble by far is going to be Chinese, and the successor to contemporary Chinese Communist Party politics will mean
00:08:21
Speaker
something more neo-Confucian than China has been recently, but done with capacities that no one's ever had the opportunity to bring to the table. So you can just spend so much more time on filial piety and purity, and cultivating them, when all the real work has been automated and when you have machines that are in some ways superhuman watching your every move and
00:08:46
Speaker
helping you along to express gratitude to your parents in the most richly prescribed manner. Other people have different experiences. There are probably hundreds of millions of people trapped in pornographic universes and effectively mind-controlled by AI.
00:09:03
Speaker
that would maybe be the second largest demographic if I really think about it. And there are lots and lots of people like the ones we discussed at the end, living in old age homes and having their experiences mediated through a somewhat more tasteful,
00:09:19
Speaker
but still relatively liberal and relatively cultivated sense of benevolence. But the prospect of AGI coming online at all changes that. In some sense, these stories were intended to point at the extreme instability of the world that I produced. So we have one story
00:09:39
Speaker
about producing a piece of transhuman music and one story about consuming it, despite the cautions of the companies around AGI under the, you know, basically reasonable assumption that music was not existentially dangerous under normal circumstances.
00:10:00
Speaker
Yeah, so you're referring to, in your world, there's this system that DeepMind has called Siren, which is, I think, kind of the only AGI in your world. It's under very tight wraps. Everyone's really carefully screened and there's follow-up monitoring, even if they just hear the music that it produces. This system has also written some books on a few topics that have been carefully curated. I'm curious what broader impacts Siren's existence has on your world, given kind of how cloistered it is.
00:10:28
Speaker
I mean, none by design; allowing it to have more than the tiniest amount of impact on the world would be allowing the world to end almost immediately. So instead, yeah, your world really dives into narrow AI. So these are systems that are very good at just a few specific tasks, like playing chess or driving a car. No, much broader than that. Like the AIs we have today, like GPT, which are at least pretty good at most things that we tend to think of as intellectual tasks,
00:10:55
Speaker
and very, very, very good at most things that we tend to think of as perceptual, or as extremely rehearsed short-term actions without a lot of context sensitivity. I see. These are souped-up narrow AI systems. They're still not AGIs, but they're the most effective extension of today's technologies, like ChatGPT and things like that, as you're saying. They're general enough that for the vast majority of the world's population,
00:11:24
Speaker
they probably are vaguely thought of as generally intelligent, like the vast majority of the world's people probably don't understand very well.
00:11:34
Speaker
the differences between them and AGIs. And that's probably part of why there's essentially no funding or work on AGI outside of DeepMind. And broadly speaking, they're sufficient to produce some level of relatively benign, not totally impenetrable, but close enough, global mind control system that also contributes to not understanding the differences and also not pursuing AGI.
00:12:01
Speaker
In some ways, I think that the fiction that my world most reminds me of is probably Who Framed Roger Rabbit, where they have these toons everywhere. And the toons can talk, they have personalities, they have something kind of like agency, but they don't seem to, for the most part, have agency with any scale.
00:12:22
Speaker
It's like an extremely rare, extremely dangerous thing for a toon like Judge Doom to have agency with scale and scope. And when they do, like Judge Doom, it's agency with an extremely inhuman focus, scale and scope.

Human Experience and AI in Society

00:12:37
Speaker
So very potentially dangerous. And the toons are
00:12:41
Speaker
in some sense extremely cheap and disposable, easy to produce, but in some sense immortal. And the humans are completely clueless about the glaring ways in which the toons' capabilities are less than human, such as that Roger Rabbit can only do things when it's funny, but fairly clueful about the ways in which their abilities are more than human, like that they can survive having a piano dropped on their head.
00:13:07
Speaker
I actually haven't seen that movie, but I'm really excited now to watch it with this metaphor in mind. It's a really cool connection. Yeah. So one thing you say that these systems can do in your world is basically replace all white collar workers in theory. But you say this doesn't happen. And you say basically, you know, there are various reasons political and personal why humans are still employed. I'm curious what kinds of work humans do and what it's like for these human workers in this situation. So I think
00:13:37
Speaker
Basically, it depends on their organization, but in the pretty large majority of organizations, it's pure office politics and getting therapy from not peak human ability, but good enough AI therapists to recover enough from the office politics.
00:13:58
Speaker
that they only kill themselves with drug overdoses and the like at maybe a third or a fourth the rate that they do in our world. And maybe even less if AI enhanced medicine makes such drugs significantly less deadly and treatment significantly more effective.
00:14:17
Speaker
Your world still has a ton of economic inequality, but the actual quality of life that you describe is kind of universally pretty good. Travel has become really cheap and there's basically free energy, which makes food distribution really trivial, so people can kind of live wherever they want, and they have augmented reality, so it'll always look beautiful. I'm curious, given all of these kind of unifying factors, how people decide where to build their lives and what kinds of goals they decide to pursue with them.
00:14:42
Speaker
So the world that I'm thinking of, for the large majority of people, they start exploring the world when they're children, and hopefully their parents take a lot of interest in them. But if not, there's an infinite amount of attention freely available from the web and from open source and commercial products.
00:15:00
Speaker
and the decisions they make throughout their lives are almost entirely determined by what sorts of commercial or open source products find them first in a sense and build the sort of feedback loops that pull them into one or another bubble reality. You have this interesting thread in your world where families kind of become a currency or a kind of wealth that people pursue more than monetary assets. Can you say a little bit about what that looks like?
00:15:29
Speaker
I mean, that's just being a normal person. We've lost touch with it in late-stage capitalism. But even under normal capitalism, this was not confusing to anybody. The idea of trying to accumulate wealth rather than trying to accumulate happy, healthy, wise, flourishing, and mutually cooperative descendants is a really surprising thing to find an organism doing.
00:15:57
Speaker
So thinking about some of the more unusual aspects of your world, your world definitely had some of the wildest kind of one-off ideas that we saw in the competition. You have the Taliban creating luxury war-themed amusement parks, you have elephants that are domesticated via CRISPR, and you even have Kanye West creating a virtual reproduction of biblical Jerusalem. I'm curious what prompted these kinds of details to be included, and whether they're part of a larger theme you were trying to convey.
00:16:24
Speaker
So the biggest thing that I left out of the actual submission, at Matija's influence, was a coup by the comedy party, where basically, in the 2032 election between AOC and Donald Trump,
00:16:42
Speaker
the mainstream Democrats, who still basically control the media and the courts, decide to allow a completely flagrant election fraud to install Jon Stewart as a third-party president. That one, I think Matija thought, was too political, too controversial. But I do think it's the sort of thing that could realistically happen. Overall, where are these coming from?

Power Shifts and Economic Instability

00:17:08
Speaker
Some of them are just like
00:17:10
Speaker
extreme low-hanging-fruit things that a few college kids could throw together as a project in a world with the AI capabilities that I realistically expect to exist well before the 2045 deadline. Yeah. So this is kind of just speaking to maybe the chaos and the power flying around, the instability of things, and how the world is just going to get so much stranger.
00:17:33
Speaker
I don't think of it as a chaotic world. The stories are super, super non-chaotic about people living very calm lives. I see it as a world that's very, very unstable simply because it has even one AGI in it. And sooner or later, a more permanent solution is necessary than just keeping its interests hyper-focused and keeping people from noticing it very much.
00:17:59
Speaker
To some degree, I'm just trying to show a picture, because that's all you can do in a story like this, but a picture where all of the pieces are scientifically, technologically, economically, and politically well-founded, make sense, and fit together fairly well. I guess more than anything else, I'm trying to show people, like the contest is trying to show people, that it is even possible to make a sincere, serious, and competent effort to depict a realistic but optimistic future.
00:18:34
Speaker
Major changes are hacking away at the foundations of this world's systems. The loss of shared reality and weakening of governmental structures, at least in the West, seem to strip humanity of a good deal of agency. It's implied that we're being kept from destruction only by our tenuous control of this world's one true AGI. At the same time, new approaches to things like education and social conflict signal hope for building a more coherent and empowered humanity. I wanted to hear more about how Michael saw this world approaching the changes and challenges that it faced.
00:19:06
Speaker
You write that in America, companies like Microsoft, Amazon, Tesla and Walmart are basically the only entities capable of large-scale coordinated action anymore, and elected government officials really just enact change by influencing their supporters rather than by pursuing any kind of legislation. Most decisions are made locally. Can you say a little bit more about how America's governmental systems lose so much influence in your world?
00:19:27
Speaker
So, I just see that as a continuation of the trend that we're already on. When you look at COVID, the government took an unbelievably huge amount of oppressive and authoritarian action that there probably won't be social or political support for if there's another major event that calls for it. It lost an enormous amount of public trust.
00:19:47
Speaker
And if you look at what the government did that was effective with COVID, it basically boils down to printing enormous amounts of money and providing certain types of encouragement to conform to a certain standard. So it's not that the government no longer matters, it's just that
00:20:07
Speaker
the government, like popularity contests should be, is primarily a source of information about how to be popular. Just like in our world, people mostly want to be popular. They don't want it as much as in our world, because they can always be popular with AIs. But still, AIs are not fully satisfying as mental and social companions. Well, as this power kind of switches over and flows towards tech companies gaining influence, it becomes increasingly hard to track wealth.
00:20:36
Speaker
But in some ways it also seems like things are just kind of going on sheer inertia. You have this great line in your submission that says, a supermajority of the population has negative net worth and continues to be allocated credit as a matter of economic policy. And you mentioned this kind of instability of the world. How long do you see it remaining stable? Will these systems fall apart shortly after 2045 in your imagining?
00:20:59
Speaker
So the way I'm imagining this, this is a fairly close to best-case scenario. My realistic best-guess scenario would be that it's more than 70% likely, maybe more than 80% likely, that the system that I'm describing falls apart well before it gets to the point that I'm describing. These are supposed to be optimistic visions for the future.
00:21:22
Speaker
But once it gets to the point that I'm describing, if it gets to that point, I actually imagine it being stable for a pretty long time. I mean, except for the AGI bin. Yeah. Siren gets out.
00:21:37
Speaker
One big tension in your world, as a result of this increasing difficulty in verifying information, is that people have a hard time agreeing on objective reality. There are really good experimental healthcare interventions, but it takes mostly luck, and maybe some skill, to pick the winners out of that crowd. You have cryptocurrency that's made it really impossible to tell how much money anyone has. You mentioned that instead of Forbes keeping track of wealth, now kidnapping rings keep some of the best records of people's total assets.
00:22:07
Speaker
And you even say that startups are buying these records off of those kidnapping rings to find wealthy funders. So can you say a little bit more about what leads to this like deep fracturing of shared objectivity?
00:22:19
Speaker
That's been going on really in a big way since the 1940s. Once again, I'm just imagining it continuing and accelerating with more powerful technologies. The collapse of the dollar, which happens in the 2030s more or less in my story, contributes a

New Education Paradigms and AI's Role

00:22:37
Speaker
fair amount. It makes the crypto thing much more substantial.
00:22:41
Speaker
and the increase in basically social welfare through seigniorage, and the expansion of seigniorage through the population, helps to stabilize things a lot at the expense of coherence and efficiency, which isn't really necessary anymore.
00:22:57
Speaker
Can you say what seigniorage is?
00:23:13
Speaker
One of the basic challenges of running a capitalist society that's been well understood since long before Adam Smith is the extreme difficulty of causing control of the money printing apparatus to not be the convergent agenda of practically everyone.
00:23:30
Speaker
and most capitalist societies do collapse as control of the money printing apparatus becomes a convergent agenda. So I'm basically imagining the essential worker system that we discovered we had during COVID and the relatively resilient management of a small number of companies basically keeping the material reality held together despite the fact that
00:23:55
Speaker
The vast majority of supposed economic activity is actually pure political wealth redistribution to the people who bother to fight for wealth being distributed to them in a world where most people have basically lost track of wealth anyway.
00:24:12
Speaker
I'm curious why AI systems don't help more with these issues of shared goals and shared knowledge. You mentioned that AI systems can provide common knowledge. They help groups of people identify if their behaviors are aligned with their goals or how to change their behaviors. You would think that that might cut through the haze and help people agree on things more or have more transparency.
00:24:34
Speaker
I mean, AI systems help enormously with establishing whatever set of goals is reasonably psychologically plausible and that the system's designers want to establish, but mostly that consists of consuming products, just like it does in our world. And in the rare cases of societies that have more of a shared set of values and more of a shared power structure, like China, it means that they have incredibly high
00:25:01
Speaker
integration and unity targeting shared goals that more or less consist of normal reasonable things like extending life and ecological sustainability and stability in general.
00:25:15
Speaker
One other thread I really enjoyed in your world is how you talk about education changing. So people start to see traditional educational pedigrees as a form of inherited privilege and educational histories actually become private information, which can't be used in decisions like hiring, which is a really interesting concept. And this tips the scales in favor of online self-driven education. Schools basically go empty while kids live with their families and learn on their own. I'm curious what this looks like for those kids. Like what are they learning? What aren't they learning and who's deciding?
00:25:45
Speaker
So I'm basically imagining that nominally, the parents decide when the kids are younger and the kids decide when they're older.

Realism in World-Building and Future Safety

00:25:55
Speaker
But in practice, reasonably agentic parents who are also tech-savvy and have reasonably coherent preferences about what their kids get
00:26:08
Speaker
will be able to direct their kids towards media bubbles and narratives that will be extremely stable and which won't change much unless something really weird happens. So I expect that almost everyone's learning speed is going to be at least four or five times faster between more targeted instruction, objectively better instruction, maybe learning enhancement through drugs and mRNA tech.
00:26:39
Speaker
And much better trauma care is a major feature of my world. So, just the elimination of mental blocks through MDMA-based therapies and their successors.
00:26:49
Speaker
I don't know if I really played up adequately the spread of a new way of doing civilization from the carceral system into the general population, as MDMA therapies get adopted for dispute resolution within prisons and reach a level of reliability and efficacy that's sufficient that basically everyone wants some.
00:27:21
Speaker
Despite some of the more madcap details of their world, this team expresses a strong commitment to realism and plausibility. Their portrayal of AI development was also perhaps the slowest and most restricted among our winners. While there is an AGI around, most of the technological developments in this world are just extensions of today's narrow AI systems, whose awesome capabilities are ultimately limited.
00:27:44
Speaker
I was curious to hear more about this team's creative influences and whether this slower pace of AI development was something they saw as merely likely or a necessary component of any safe path to an aspirational future.
00:27:56
Speaker
So I'd like to spend a little while discussing the narratives in your world and how they compare to other narratives that are going around in popular culture. One really big through line for me is this sort of 'emperor has no clothes' attitude you have towards economics and politics, where your world kind of just goes through the motions to keep things moving along, but the systems themselves are no longer really doing much. I'm curious if there are other examples of this perspective that inspired you in other kinds of media.
00:28:24
Speaker
I mean, mostly I'm inspired by real life, not by media narratives. I can't think of a piece of fiction that is as radical as real life in the degree to which it violates conventional assumptions. You know, it's basically impossible to do without being a top-tier literary genius like Shakespeare. I mean, Hamlet's wonderful. Doris Lessing's book, The Golden Notebook,
00:28:51
Speaker
is maybe the best depiction I've ever seen. But one would need to be a really, really good literary scholar to appreciate it, I think. Same with Hamlet. Interesting. Well, I'm curious if there are any examples, and this can come from philosophers as well as fiction, of economic or political systems that could actually maybe function in a world like yours, or do you think that the whole concept of having a system that's run in a sensible way is kind of moot?

Potential Reforms and Global Sensibilities

00:29:20
Speaker
No, I mean, the Chinese system is sort of run in a sensible way in the world I'm describing. It's not run with perfect rigor and resolution. It wouldn't pass, like, Talmudic standards. But by the standards that we're used to from government, I'm imagining a China with a life expectancy of well over 100 years and
00:29:44
Speaker
the ability to industrially produce in a clean way and with very little labor, practically everything the entire world needs. The goals of maximizing filial piety and ren are just going to be what's inherited from their ancestors and traditions. It may not seem like doing a thing to us, but most of what we do is arguably not really doing a thing.
00:30:12
Speaker
Do you think there are any actions or reforms we could do to Western systems that would make them more resilient to these changes as well? I mean, my simple answer is I already put them all into this story. That's why the world is still alive and has not collapsed already. You know, I'm making a number of "surprising good luck happens" assumptions,
00:30:37
Speaker
not extraordinary ones. I really try to avoid endorsing things that are not just quirky and that have probabilities of less than about 10%.
00:30:48
Speaker
You know, I think it's important to note that our world would be way scarier to people from my vision of 2045 than their world would be to us. Their world would just be like incredibly addictive and we would very quickly find ourselves trapped in some relatively exploitative bubble. But even exploitative bubbles have reasons to try to keep people mentally healthy enough to keep on
00:31:12
Speaker
receiving government benefits within a thin veneer of contributing to the economy. I guess one way to think about it is the American dream is basically a collage of the America prior to the Civil War, America between the Civil War and the New Deal, and America after the New Deal, which could be summarized as the colonist experience, the immigrant experience, and the GI experience.
00:31:38
Speaker
And none of these experiences at all resemble what zoomers are coming into and experiencing. And so they are growing up in a world of such transparent lies that they're, almost without exception, total epistemic nihilists, mistakenly disbelieving that anything was ever true, rather than only disbelieving that anything that they've ever seen is true, which is actually the case.
00:32:01
Speaker
So one really unique thing about your world is this focus on the narrow AI systems and how high a ceiling you put on their abilities. You kind of have basically a suite of different narrow AI systems that together have the capabilities of an AGI in some ways, but they're spread across these separate modules. No, they don't have the capabilities of an AGI. They don't have anything even remotely close to the abilities of an AGI. Yeah, so can you distinguish that?
00:32:24
Speaker
The story is just kind of hinting at the capabilities of an AGI with the sort of security around it and the sort of implied impact and like potential risk. So I'm operating with the definition of AGI that's something like a system that's better than a human at any task that you can reasonably define.
00:32:42
Speaker
Is that different from what you're describing with these narrow AI systems? No, that's better than any human at any task that you can reasonably define. I'm saying that the systems that I'm describing are not even remotely close to that. They're superhuman at very narrow tasks. They're superhuman at a lot of very narrow tasks.
00:33:04
Speaker
But even fitting them all together doesn't come close to the full range of human capabilities. I see. So there's kind of a synergy here, you're saying? Right. And then they're like not superhuman, but like nearly as good as the experts that top elites tend to point to at the vast majority of tasks that
00:33:29
Speaker
get measured and graded and systematized and standardized throughout society. So the best doctors in my world are still humans who make very heavy use of AI tools, but the best purely AI doctors might only be as good as the doctors that, like, the president has in our world, but not nearly as good as the doctors that, like, a top doctor has in our world, since the top doctors know who the actual best doctors are and go to them.
00:33:57
Speaker
So not only do these systems not exceed the best human experts individually at these narrower tasks, but you're also saying that there's something missing even if you have this collection of narrow systems that can each do something that a human can do. Just putting those together is not the same as having something that could do all of these flexibly. Is that what you're saying?
00:34:16
Speaker
Definitely, but also there are things that none of them can do even a little bit. Like in the story that I'm talking about, Siren is the only AI in the world that could, if it wanted to, do important original mathematics. It's the only AI in the world that could, if it wanted to, make the tiniest contribution to theoretical or applied physics.
00:34:37
Speaker
So in your world, you have this incredibly powerful AGI system that does exist, but it's under really, really strong protections, under tight wraps. Do you think that this is necessary to have an optimistic future with AGI in it? Yeah, definitely. Unless we can basically do thousands of years worth of philosophical progress in 20 years.
00:35:01
Speaker
And we can't. Like maybe we can do thousands of years worth of philosophical progress this century because we will have both AI and other technologies for enhancing our mental capabilities if we choose to use them. But we can't do it in 20 years. It's just laughable.
00:35:19
Speaker
So the limitations that are preventing AGI from developing faster in your world, some of them are intentional, like policy decisions. Some of them are just kind of practical ones, like bad funding, politicization, and the rarity of human expertise. Do you think these are actual likely causes of slowing development in the real world? Yeah, my best guess is that AGI will progress much more slowly than I have it progressing in my story. And my best guess is that
00:35:49
Speaker
We do survive because AGI progresses much more slowly. From my perspective, it's extremely contrived for AGI to develop even as fast as it does in this story and be handled well enough, cautiously enough, thoughtfully enough, that we have more than a fraction of a percent chance of survival. Yeah. Have you seen other portrayals of the future where narrow AI plays as much of a role as in yours?
00:36:17
Speaker
I feel like there's a lot of portrayals of the future where narrow AI is taken for granted and plays a large role. In Star Trek: The Next Generation, they have one AGI, Data, like in my world, and then they have unbelievably powerful narrow AI in the ship and in the holodeck and just all over the place.
00:36:40
Speaker
But everyone takes it for granted, and it's used as a tool by a military organization with a relatively unified internal agenda of exploration and extremely prudish and narrow conceptions of what types of experiences and behaviors its members are supposed to engage in. I will say that the type of narrow AI
00:37:10
Speaker
we have actually developed is pretty broad compared to what I expected five years ago. It's very much what we were visibly moving towards four years ago. But to some degree, when I was growing up, C3PO seemed like a silly fantasy. It seemed silly that you could have a machine that was that close to
00:37:37
Speaker
human performance, but like stuck for a long time at below human performance and in some ways pretty profoundly below and in some ways pretty profoundly superhuman, but like just be stuck there for a long time. But it kind of looks from the technologies that OpenAI is generating that a minimum viable C3PO might actually happen and be around for a long time without really drastic improvement from there.
00:38:08
Speaker
How do you feel about the general portrayals of the future that are in fiction? Do you think they're over or under optimistic when they try to be optimistic? I just think optimism in fiction that is trying to be at all realistic is unfortunately much rarer than it should be. And like that's basically maybe largely because the most perceptive and
00:38:36
Speaker
insightful people who are also successful at becoming prominent and surrounding themselves with other prominent people.

Balanced Media Portrayals and Social Influence

00:38:46
Speaker
are constantly confronted with lots of profoundly miserable, extremely zero-sum other prominent people, and have very little contact with the large majority of people, who are just not as miserable as the people that, like, Nobel laureates are going to end up around. So you're kind of calling for more optimism in non-fiction as well? Is that where you're going?
00:39:12
Speaker
I feel like optimism and pessimism are intrinsically unhealthy concepts. You should just try to have true beliefs, but true beliefs should be balanced. There's a lot of social pressure to performatively be pessimistic because the elites tend to be pessimistic. And elites tend to be pessimistic because they're living in a hyper-competitive zero-sum world that most of us are not living in.
00:39:40
Speaker
And there's a lot of, it's easier in some ways to be pessimistic, especially cheaply pessimistic. But there's also like just cognitive biases that lead to silly sorts of pessimism. So like imagine there was a news item about how it turns out that apples cause cancer. Practically, everyone would see this as bad news. Oh no, I've been poisoning myself for years. But obviously it's good news. We know what the cancer rate is. Now we know that we can avoid it by not eating apples. Right. The devil you know.
00:40:10
Speaker
Like information is almost always desirable, but information about bad things gets interpreted as something bad happening rather than being interpreted as something good happening. As James Baldwin said: not everything that can be confronted can be overcome, but everything to be overcome must be confronted.
00:40:34
Speaker
To some degree, this picture that I'm depicting is an edge case on that because this is a world that has managed to partially overcome and fully survive a lot of the problems that our society is dying from without really confronting them.
00:40:53
Speaker
In your world, a lot of the role models become virtual, so basically all celebrities popular with the Under-30 crowd are virtual people. Some are recreations of historical figures, others are kind of amalgams like Tupac lists, and you have XXXTentacion, Elvis XXX. I'm curious how these play a role as kind of role models in your world.
00:41:15
Speaker
I think they mostly don't. I think that people who have at least reasonably good taste do prefer interaction with individuals, insofar as they mostly imitate behavior by real people. The social influences from machines are, in general, of a more goal-directed, relatively overt manipulation sort.
00:41:40
Speaker
What are your thoughts on some of the current cultural attitudes towards AI-generated art and virtual cultural figures right now? So it seems like any reasonably good artist wants to make art, not wants to get paid for making art, and maybe wants to be seen.
00:41:56
Speaker
And people can worry that a lot of people will never be exposed to good art because they're going to be exposed to an enormous amount of stuff that doesn't have a message behind it and isn't created as an expression of pain and suppressed emotion. But if you're a good artist, you can learn new tools and you keep learning new tools throughout your life and you learn how to make use of these new tools to compete and to get your message out there.
00:42:25
Speaker
barriers to entry for art in some sense never go down because they have to do with the attention and consciousness of your audience. The economic barriers to entry in our world are going up because of extreme economic scarcity that's being created through policy. So we're living in a time of really extreme economic scarcity compared to the Great Depression at this point. We've lost way more actual economic freedom in the sense of
00:42:54
Speaker
not needing to work very hard or very well or be very exceptional in order to afford
00:43:00
Speaker
to reproduce our own labor, have children and grandchildren, and have them live at least as nicely as we do, and be able to purchase our baskets of goods. And every Zoomer knows it, because they in fact can't purchase the baskets of goods that their parents had, and their parents deny them recognition as adults, even like millennials with kids.

World-Building for Positive Futures

00:43:25
Speaker
are denied recognition as adults in major ways because they don't have the baskets of consumption goods that, in fact, generationally speaking, they can't have.
00:43:43
Speaker
The process of world building has great potential to make a positive future feel more attainable. This can be incredibly powerful, whether you're a creative person looking to produce rich works of fiction or have a more technical focus and are looking to reach policymakers or the public. I asked Michael what kind of impact he hoped his work would have on the world. What do you hope your world leaves people thinking about long after they've read through it?
00:44:05
Speaker
So the biggest thing is that I just would like people to try to make sense of what could happen. I would like them to know that it is possible.
00:44:15
Speaker
to make a joint prediction and expression of preference that is in line with the relatively full range of scientific and technological and humanistic thinking, and that holds together and makes sense, and that comes much closer to coming true, and also somewhat guides society towards it coming true, than what we would normally think of as science fiction.
00:44:41
Speaker
What kinds of expertise would you be most interested in having people bring to discussions about your world or the future in general? I think that the things that I would want to bring in first would be somatic skills like body work and yoga and things like that, experience with MDMA and other psychedelic therapies,
00:45:08
Speaker
and maybe even electrical engineering for creating better alternatives to transcranial magnetic stimulation for disinhibiting some parts of the cortex and activating other parts of the cortex. So to enable people to recapture, usually after years of work, the sorts of cognitive abilities that normal smart inquisitive kids had when I was a kid.
00:45:32
Speaker
and which the entire literary imprint of Western civilization is the imprint of, which is why we don't have a literary imprint for our contemporary civilization.

Inspiration for Young Generations

00:45:45
Speaker
Do you have any particular hopes about the impact that this work would have on the younger generation of folks around today?
00:45:52
Speaker
It seems to me that the vast majority of zoomers are doomers. They believe that the world is going to hell in a handbasket and everything is falling apart. But they are likewise cynics about the past, in that they somehow believe that things have been getting worse forever but were never better than they are today.
00:46:11
Speaker
And that can't be a concrete set of beliefs. It actually has to be something that they have instead of having beliefs, which is like a posture, a vibe. What's actually going on is that they have always been lied to about everything of importance by every credible authority, so they don't believe that people can know things.
00:46:35
Speaker
and they only believe that people can posture and vibe. That's really sad because manifestly the world around us displays incredible amounts of the results of knowledge. And if people don't continue to produce the results of that knowledge, we're going to live in a much less nice world. What could help to correct for this or influence their attitudes in a positive way for you? Well, to a really non-trivial degree,
00:47:03
Speaker
Large language models that we have today, if they were reinforcement learning trained for questioning and challenging and calling out bullshit, and especially for perceiving the emotional dynamics of social situations, which are very easy to perceive for even average humans and wouldn't be that hard to train systems to perceive.
00:47:27
Speaker
Like people need social support in calling out bullshit rather than all of the social pressure being to submit to bullshit and go along with it. And like we have the technology today to build artificial social support of precisely the type we need. What aspects of your world would you be most excited to see popular media take on when portraying the future?
00:47:54
Speaker
Well, to start having a picture of China as something like China, rather than using China as our designated bad guy onto which we project images of ourselves. Basically our popular media almost exclusively treats China as a scapegoat for the types of behavior that we are very aware that we engage in to approximately the same degree that they do,
00:48:22
Speaker
and is almost always trying to display, you know, bias for the sake of showing loyalty rather than trying to display scholarship and understanding. I think that having, in general, a somewhat more balanced view of all sorts of cultural things, having more of an attitude that most things have some good and some bad in them.
00:48:47
Speaker
Well, thanks so much for joining us today, Michael. We've covered so much ground in this conversation, and it's been really great to explore all these ideas with you. It's been great having this conversation.
00:49:05
Speaker
If this podcast has got you thinking about the future, you can find out more about this world and explore the ideas contained in the other worlds at www.worldbuild.ai. We want to hear your thoughts: are these worlds you'd want to live in?
00:49:19
Speaker
If you've enjoyed this episode and would like to help more people discover and discuss these ideas, you can give us a rating or leave a comment wherever you're listening to this podcast. We read all the comments and appreciate every rating. This podcast is produced and edited by WorldView Studio and the Future of Life Institute. FLI is a nonprofit that works to reduce large-scale risks from transformative technologies and promote the development and use of these technologies to benefit all life on Earth.
00:49:42
Speaker
We run educational outreach and grants programs and advocate for better policymaking in the United Nations, US government, and European Union institutions. If you're a storyteller working on films or other creative projects about the future, we can also help you understand the science and storytelling potential of transformative technologies.
00:49:59
Speaker
If you'd like to get in touch with us or any of the teams featured on the podcast to collaborate, you can email worldbuild at futureoflife.org. A reminder, this podcast explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we all want. The ideas we discuss here are not to be taken as FLI positions. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
00:50:27
Speaker
Thanks for listening to Imagine a World. Stay tuned to explore more positive futures.