Imagine A World: What if AI advisors helped us make better decisions?

Future of Life Institute Podcast
Are we doomed to a future of loneliness and unfulfilling online interactions? What if technology made us feel more connected instead? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of 8 diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

In the eighth and final episode of Imagine A World, we explore the fictional worldbuild titled 'Computing Counsel', one of the third-place winners of FLI's worldbuilding contest. Guillaume Riesen talks to Mark L, one of the three members of the team behind the entry. Mark is a machine learning expert with a chemical engineering degree, as well as an amateur writer. His teammates are Patrick B, a mechanical engineer and graphic designer, and Natalia C, a biological anthropologist and amateur programmer.

This world paints a vivid, nuanced picture of how emerging technologies shape society. We have advertisers competing with ad-filtering technologies in an escalating arms race that eventually puts an end to the internet as we know it. There is AI-generated art so personalized that it becomes addictive to some consumers, while others boycott media technologies altogether. And corporations begin to throw each other under the bus in an effort to redistribute the wealth of their competitors to their own customers. While these conflicts are messy, they generally end up empowering and enriching the lives of the people in this world. New kinds of AI systems give them better data, better advice, and eventually the opportunity for genuine relationships with the beings these tools have become.

The impact of any technology on society is complex and multifaceted. This world does a great job of capturing that. While social networking technologies become ever more powerful, the networks of people they connect don't necessarily just get wider and shallower. Instead, they tend to be smaller and more intimately interconnected. The world's inhabitants also have nuanced attitudes towards AI tools, embracing or avoiding their applications based on their religious or philosophical beliefs.

Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas present in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

Explore this worldbuild: https://worldbuild.ai/computing-counsel

The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected]. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
Transcript

Introduction to 'Imagine a World' Podcast

00:00:01
Speaker
on this episode of Imagine a World.
00:00:28
Speaker
Welcome to Imagine a World, a mini-series from the Future of Life Institute. This podcast is based on a contest we ran to gather ideas from around the world about what a more positive future might look like in 2045. We hope the diverse ideas you're about to hear will spark discussions and maybe even collaborations. But you should know that the ideas in this podcast are not to be taken as FLI endorsed positions.

Exploration of 'Computing Counsel' and Emerging Technologies

00:00:53
Speaker
And now, over to our host, Guillaume Riesen.
00:01:08
Speaker
Welcome to the Imagine a World podcast by the Future of Life Institute. I'm your host, Guillaume Riesen. In this episode, we'll be exploring a world called Computing Counsel, which was one of the third-place winners of FLI's worldbuilding contest. This world paints a vivid, nuanced picture of how emerging technologies shape society. We have advertisers competing with ad filtering technologies in an escalating arms race that eventually puts an end to the internet as we know it.
00:01:35
Speaker
There is AI-generated art, so personalized that it becomes addictive to some consumers, while others boycott media technologies altogether. And corporations begin to throw each other under the bus in an effort to redistribute the wealth of their competitors to their own customers. While these conflicts are messy, they generally end up empowering and enriching the lives of the people in this world. New kinds of AI systems give them better data, better advice, and eventually the opportunity for genuine relationships with the beings these tools have become.

Meet the Team: Mark L. and His Collaborators

00:02:04
Speaker
Our guest today is Mark L., one member of the three-person team who created this world. Mark is a machine learning expert with a chemical engineering degree who also likes to write short stories. One of his teammates, Patrick, is a mechanical engineer and graphic designer. The other, Natalia, is a biological anthropologist. All three share a love of creating art, both physical and digital. Oh, hey, Mark. It's great to have you with us.
00:02:28
Speaker
Thanks, Guillaume. I'm glad to be here. So your team had three people on it. There's you, there's Natalia, and there's Patrick. I was just curious if you could say a little bit about how you guys ended up working on this together, what motivated you to enter, and where your skill sets came into play. Sure. Well, I saw the listing for this contest, I think, on the Alignment Forum, and it sounded really compelling to write about the future of AI. It's something I'm particularly interested in.
00:02:52
Speaker
So I came to the other two, to Natalia and Patrick, and suggested to them that we enter this contest. Not because we thought we were going to win, actually, but just as a method of practice for our various art forms. Patrick's much more into digital art than writing, but Natalia and I are into writing, so we figured we could divide the work that way and get some practice in. And maybe we would do fairly well in the competition, or maybe we wouldn't, but the experience would be worth it either way.
00:03:18
Speaker
That's so cool that the practice itself was alluring enough to draw you in even without the promise of trying to actually win the thing. Also, congratulations on actually winning the thing. Well, we tried like we were going to win, and we set our expectations that way. That's the best way to do it. Actually, entering competitions is advice I would give to any creator, even competitions without prizes or ones that you think you have no chance of winning.

Team's Background and Collaboration

00:03:42
Speaker
I kept entering, for example, the r/rational subreddit's writing competitions for a couple of years. And it gave me a lot of writing practice that I wouldn't have had otherwise. Yeah, that makes sense. I'm also curious how your personal perspective, like where you live or your professional background, has influenced how you were thinking about this future. And maybe you can speak to that for Natalia and Patrick a little bit too.
00:04:03
Speaker
Well, personally, the chemical engineering background is probably influencing the work quite a lot. You can see many references to chemistry and physics. It's hard not to include those things as explanations. When you're trained on something over several years, it sort of becomes fundamental to your worldview. By the same token, Patrick's training is in mechanical engineering. So there are features of the world that have a mechanical engineering bent, like the use of space tethers to lift spaceships.
00:04:28
Speaker
Natalia and Patrick and I are in an art club together, and that probably influenced the work significantly as well. We have this weekly tradition of collaborating on art. So the idea that humans in the future will collaborate in small groups with AI was a natural thing to form while we worked on this. That's awesome. How big is your club? How many people are in it? It changes from week to week. Usually there's at least four or five people there, rarely more than six, though.
00:04:52
Speaker
Yeah, that sounds like a really cool community of practice. It's cool that you all have this mixture of kind of like sciency and creative backgrounds. What is Natalia's scholarly background or education? Biological anthropology. Gotcha. Very cool. Yeah, it's good. She makes sure that it's not only tech in our worlds, but also reasonable humans. Yeah. So what was your workflow like when you were working on this with Natalia and Patrick?
00:05:12
Speaker
The first thing I did was I wrote the two short stories, and then Natalia and I spent time brainstorming events that could go in the timeline that led to that. I had a very amorphous idea of the timeline, but filling out the details really allowed it to come together. Then I went to art club and presented this to Patrick and the other members of art club to get their feedback on it. And Patrick volunteered to make some art for it when I explained the contest. I see.
00:05:38
Speaker
So the art club, and particularly Alex and Ryu's contributions in terms of feedback, really made a big difference to the quality of the story. Yeah, shout out to art club, everyone in there, including Alex and Ryu. So after I had their feedback, I changed the stories and the timeline a little bit and removed the parts that were particularly implausible and fixed the parts that were particularly unclear. Then Natalia and Patrick read it one more time and we sent it off.
00:06:12
Speaker
The impact of any technology on society is complex and multifaceted. This world does a great job of capturing that. While social networking technologies become ever more powerful, the networks of people they connect don't necessarily just get wider and shallower.

Societal Changes and AI's Role

00:06:28
Speaker
Instead, they tend to be smaller and more intimately interconnected.
00:06:31
Speaker
The world's inhabitants also have nuanced attitudes towards AI tools, embracing or avoiding their applications based on their religious or philosophical beliefs. These attitudes change over time, with public sentiment shifting from an initial dismissal of AGIs as persons towards something more inclusive and respectful. While most of the world's inhabitants would probably consider things to be improved in 2045, there's still a clear sense of ongoing change, growth, and moral reckoning. This isn't the end of our story.
00:07:00
Speaker
What's it like to be in your world in 2045? How do people find fulfillment, or what does a good life look like? One of the major changes that I think I didn't perhaps communicate well enough when writing my world is that social groups have shrunk from their massive excess of our modern times. You don't hop on Twitter and instantly reach 10,000 people. Instead, you send messages to a small group of friends, perhaps 20 to 100, which I think evolutionarily is the size of an optimal social group.
00:07:30
Speaker
But that's a digression. The point is, people find fulfillment in this world by interacting with their friends, their limited pool of friends, 20 to 100. And not all people, but many people find fulfillment in impressing that small group of people with their own creations or their own ideas or their own communications. It's a world where you know that you can't compete on the world stage, but the 100 people you care about, you can impress them at least. Yeah, I like that. It's more kind of intimate socializing.
00:08:01
Speaker
You do also mention people doing things like participating in clinical trials and doing science competitions. I imagine clinical trials might be something that we need humans for, for complicated biological reasons. But with some of these other things, is there really a need for humans to be engaging in these roles? Or are these kind of there to help us enjoy ourselves in life, and the AIs are sort of letting us have our fun, but we're not really needed anymore?
00:08:24
Speaker
So in a broad sense, humans aren't needed to do most of the intellectual endeavors. But in this world in particular, humans are necessary because humans always form the center of a group of AIs that are pursuing any endeavor.
00:08:35
Speaker
I guess humans might be like the corpus callosum of the brain, maybe connecting all of the different AI experts. Except the corpus callosum doesn't guide the endeavor either. I guess that would be the frontal lobe. The point is that humans are an integral part of the system moving forward. All of their AI advisors are trying to satisfy their desires, but the human is still at the helm.
00:08:57
Speaker
So this is kind of by virtue of how we've developed AIs and these parliaments, which we'll get to later, where people have these different AI entities kind of surrounding them, and they've been built to surround people. And so that's kind of what keeps people in the loop, or relevant. Yep. The specific way the AIs were constructed, and what the AIs themselves care about, requires humans.

AI's Evolving Identity and Rights

00:09:18
Speaker
Makes sense.
00:09:19
Speaker
One thing I really appreciated reading through your world stories is how the perspectives change where people start seeing AIs as tools originally. Then there's this kind of period of like AI animism where we sort of like give them personalities, but not really seriously. And then we eventually come to see them more as genuine entities that deserve rights. And there's kind of this reckoning morally with the way we've been treating them. Where do things stand in 2045 at the end of your story?
00:09:47
Speaker
At the end of this timeline, in 2045, people have recognized AIs not as humans, but as other people, as personalities worthy of consideration. I think that will actually be easier than perhaps you might expect. In my story, there's a backlash against the AIs where
00:10:04
Speaker
everybody who's been displaced by an AI or has lost their job because of AI is angry at them. But at the end of this story, everyone has an AI advisor and has been interacting with them for many years and has seen their lives improve with the help of AI in so many ways that they can no longer view AI as mere tools because AI has been acting like a person and helping them for long enough that they've come to see the AI as a friend or at least an advisor. I guess if it looks like a human and treats you well for a few years, you'll start to sympathize with it.
00:10:34
Speaker
And I think humans really anthropomorphize non-human things quite a lot. So even with a pushback at the beginning of the story, it is reasonable to think that people would treat AI as humans by the end.
00:10:53
Speaker
This world paints a particularly vivid picture of the struggles between its competing technologies and groups. From advertisers and ad filters to AI art addicts and neo-ascetics, tensions abound. But the authors portray these conflicts as mostly resulting in improvements to the world. I wanted to hear more about how we might pull collective victories from these kinds of intense competitions, and what inspired our authors to imagine them.
00:11:19
Speaker
I thought it'd be good to kind of dig into some of the different technological conflicts that come about in your world. You have some really cool sort of extended arcs where different technological breakthroughs and changes in attitudes towards technology kind of alter our relationship to it. So one really interesting thread is this whole social movement called neo-asceticism, or Neats.
00:11:41
Speaker
And so asceticism is this ancient philosophy of rejecting temptation and indulgence and kind of, you know, living sparely. And these folks take that stand against indulgence in modern technologies in your world. I thought it was a really interesting kind of philosophy to explore because it's not simply like anti-technology or pro-nature or something like that. It's really about the use of the technology. So these folks start out being against, you know, watching too many AI-generated TV shows and things like that.
00:12:09
Speaker
But they do embrace some forms of technology later on, which, for example, allow them to remove their sense of sexuality from themselves, or other things that allow them to feel more ascetic. I'm just curious, what got you to think about this perspective, or if there were particular inspirations for the Neats in your world?
00:12:27
Speaker
So I first imagined a backlash against advertising rather than art or AI-generated things specifically. I think there's been a pretty strong backlash against advertising in our world already. People are installing ad blockers and they're resisting it with whatever tools they can find. So that might be why the neats of our world are willing to use technological tools to resist a technological problem.
00:12:50
Speaker
I expect AI art itself to have a similar backlash because very many people have dedicated their lives to producing art, and seeing that taken away from them is going to prompt discontent at the least.

Impact of AI on Art and Human Creators

00:13:03
Speaker
Unemployment is one thing, but when something you're passionate about becomes the domain of a computer as opposed to your own domain, it will be very upsetting.
00:13:10
Speaker
Yeah, this is very topical. I mean, you kind of saw this happening in your submission before it really hit culturally, but now it's really in full swing already. I mean, I just saw a story the other day of a computer scientist who released a children's book that he made with DALL-E. And some people are really furious. I mean, he's getting death threats, unfortunately. But, you know, some people are even thoughtfully still kind of railing against this as creators, saying that this is a problem for their art. Did you expect this to happen so soon, or have you been surprised by this reality?
00:13:40
Speaker
It happened faster than I would have expected. In my timeline, it happens a year or two from now, not immediately. And I have to admit that the backlash seems more immediate and stronger than I would have expected as well. But I have not had my art replaced by AI yet, so maybe I don't have a good perspective on it. As a writer, I am worried about that. Yeah.
00:14:04
Speaker
And I think it's not far off, given ChatGPT. But I failed to simulate the intensity of the emotions that would be felt. Yeah. In your world, the art becomes so good and so personalized that it totally absorbs people. And we have this thing of art paralysis syndrome. Can you say a little bit about that and what that looks like? Like, what is that media made of?
00:14:28
Speaker
Oh, you want to know what the media specifically looks like to those consuming it. Yeah. I'm curious, like, are you imagining it as just a really, really good TV show that you can't stop watching? Or is it some kind of spiral thing that's not intuitive, but somehow captures us? I mentioned it as a Netflix series, mostly. Okay. Or a TV show would be an accurate way of saying it. But the specific things that make it super personalized and difficult to escape for the victims are difficult to imagine.
00:14:57
Speaker
If I could imagine something so compelling, it might distract me, after all. Yeah. If the series contains things that are very heavily optimized to engage humans, it could get very concerning. So the thing that inspired that was this incident of people playing WoW until they perished, which itself is covered in an article that Eliezer Yudkowsky wrote on LessWrong about superstimuli. In other areas of our lives,
00:15:27
Speaker
like food and drink, for example, superstimuli have become available, and then people will eat food until they get sick or until their health suffers. Or in the case of illicit substances, they'll consume substances until they perish from their consumption.
00:15:42
Speaker
And I think we have an intuitive grasp of that that's pretty strong. Everybody understands that drug abuse or super-addictive substances can lead to bad outcomes. But the fact that video games also can cause this suggests that even a purely mental thing might lead to negative health outcomes. And art paralysis syndrome in our world is the logical extension: if you optimize the art too much, people might not be able to look away.
00:16:06
Speaker
Yeah, I can definitely see how the Neats would find this problematic and start mobilizing against this kind of advanced media drug, essentially. Indeed, the movement picks up in popularity as the capabilities of the art generators pick up as well. I think there are people today who fight against artificial media even, but it's a far smaller community, because the artificial nature of media doesn't strike most people as worthy of a fight as of yet.

Ethical Implications of Neurochemical Treatments

00:16:33
Speaker
Yeah.
00:16:35
Speaker
The solution in your world for this problem of art paralysis, or at least one solution, involves this neurochemical reset treatment, which is a pretty bold concept. It allows people to change their desires effectively, and their motivations, so that you can get rid of this drive to engage with the media. And we see the Neats, for example, using this technology for their own purposes, to remove their sex drives and become asexual.
00:17:01
Speaker
So this opens up some really deep philosophical questions about, like, what should we want to want? So if you can remove your sexuality, for example, can't you just as easily remove your interest in being an ascetic? Like, why do they do one and not the other? How should we decide what state to put ourselves into? That's a very tough question. And I don't think I can answer it. It's a philosophical, very individual thing, whether you should want to keep the desires you have or fight against them. A lot of people have chosen to fight them.
00:17:30
Speaker
And by a lot of people, I mean religious people and moral people, people considering various things that are more important to them than satisfaction of their desires. Throughout history, humans have resisted their desires in one way or another. It seems natural to me that if there's a technological solution, people might consider it or might even jump on it if they think that solution is part of keeping themselves healthy in the face of a world trying to deceive them.
00:17:54
Speaker
Regarding the Neats and my art club, I got some pushback from other members of the art club. They're like, there's no way everybody would choose to be asexual, or there's no way people would alter their personalities like that. But loneliness and lack of human contact is a growing problem in our world. And I imagine in the future, people might be less reserved in their willingness to fight such things. Yeah.
00:18:20
Speaker
If the Neats chose to keep their desires, they might expect that to lead them to be unhappy in the future, whereas removing their desires would lead them to happiness. We can't imagine cutting out our sexuality, but if their sexuality were a constant source of frustration and pain, they might think that cutting it out would be far better.

Cultural Homogenization vs. Diversity

00:18:42
Speaker
And there's a fair contingent of people who are just naturally asexual, and a lot of them seem happy and fine, and so maybe that's a state that they would aspire to. It's true. Maybe also in the future, those perspectives will be better understood by the populace in general, and more people will be sympathetic to asexuality, so there'll be less of a knee-jerk, oh, that would be wrong sort of reaction. There is a potential dark side to this, it seems to me, in terms of diversity.
00:19:08
Speaker
Because if you're making it relatively easy to change yourself in pursuit of connection and relating to other people, that might cause a strong drive for people to sort of assimilate. And if everyone can change their favorite show to be Cheers, and then all the TVs are playing Cheers all the time, that's technically a good outcome in terms of people enjoying television, but there's also some loss there.
00:19:31
Speaker
So I'm curious about your thoughts as to how diversity is at risk with this kind of system or like whether it should be maintained somehow.
00:19:40
Speaker
So Scott Alexander of Astral Codex Ten has this idea of universal culture, where all cultures get pushed toward an effective minimum: the most successful policies or cultural practices, or what have you, become universal, and every other practice gets pushed to the side because it doesn't work as well. One of the examples he gives is Coca-Cola.
00:20:05
Speaker
Coca-Cola is a drink that's been optimized for human consumption. It contains caffeine and sugar and fizzy water and not much else. It's very heavily optimized, and it's sort of a symptom of universal culture. So other drinks like green tea or goat's milk, or I think he actually says yak's milk, at any rate, other drinks that cultures might consume will be pushed out of the way by this universal baseline.
00:20:31
Speaker
And I think it's a real problem, because if you can change your very mind, maybe someone seeking a job would change their mind to best match the job they're seeking. Or maybe a student trying to get into a college would, like they currently change their hobbies, change their relationship with hobbies or their motivation for certain activities to look more appealing to a university that's willing to admit them. So I think it's a very real risk that the world would become more homogenized in that case.
00:20:59
Speaker
However, the story itself also provides an explanation for why that might not happen. There's a widely understood need for variety and diversity in thought, to the extent that the AIs themselves purposefully diversify and keep themselves from being copies of each other. And they guide their human charges toward hobbies that are unique and toward unique experiences, because it's understood that having
00:21:24
Speaker
a variety of experiences to draw from and a variety of perspectives is more powerful. So it's hard to say what would win. There's an incentive to be like everybody else and to match the well-known good characteristics, but there's also a known and accepted push for diversity. I can't say which would be more powerful in the end. One hopes diversity wins, just for the sake of entertainment and for us not all being clones of each other.
00:21:48
Speaker
There's also, in your world, the technology allows for extreme personalization, like we see with the art being so powerful for individuals. You could imagine Coca-Cola is a great solution for mass manufacturing. If you could make a drink that was tailored to each person's mouth and taste sensors and past experiences, maybe there would be a million different Coca-Colas that would each only appeal to 10 people, but would be the best drinks ever for them.
00:22:14
Speaker
So you can kind of imagine the same kinds of personalizing technologies pushing back against this as well. Go even further, tailor it to their current mood and their current expectations, or maybe their current blood sugar or whatever else they care about. They might end up drinking the fluid all day and neglecting other foods. Interesting.
00:22:33
Speaker
Although hopefully in my world that person would have an advisor saying, hey, you need nutrition as well. And nutrition is important enough that we're going to request it of the AI building the drink, so that you don't get a nutritional deficiency. And you'd be like, okay, yeah, sure. And your drink tastes imperceptibly worse, but you get better nutrition.
00:22:54
Speaker
So I'd like to talk a little bit about some of the different challenges that your world faced in getting to that position where we can appreciate and respect AI systems.

Early AI Control Strategies and Limitations

00:23:04
Speaker
And in the very beginning, we really struggled to keep these AI systems under control. And one early approach that you spelled out is death drive, which is basically this strong self-destructive urge where these AI systems want to destroy themselves, and they will, unless they're actively countered over time.
00:23:21
Speaker
So the idea is that if they become misaligned somehow, or kind of go loose or go rogue, then that death drive will be triggered and they'll destroy themselves and render themselves, you know, inert. But I was wondering, doesn't this kind of make these systems into hostages? Like
00:23:37
Speaker
Wouldn't they have some kind of resentment if that's not too anthropomorphizing for these early AI systems? And how do you see this dynamic impacting the behaviors or utility of these systems? Wouldn't they want to find ways to destroy themselves without you realizing?
00:23:53
Speaker
Yes, that's a risk. So there are two things going on here. The first, and this is very important to say, is that death drive is not a good alignment strategy. It's meant to be a knee-jerk 'we'll solve the problem any way we can' idea. Here's one way that might, at first glance, plausibly work.
00:24:10
Speaker
And actually, if you consider it deeply, you'll realize many flaws with this. One of them is explored in the story, which is that a system may incorrectly delineate its own boundaries and destroy other systems that are similar to it. But there are other ways this can fail. The second is
00:24:26
Speaker
To the extent that the system is intelligent and sentient, death drive is not an applicable strategy. You have to imagine that in the beginning, these systems are sub-sentient. They just want to shut themselves off. They're optimizing for a signal being removed. If they actually reached sentience and they had something like a unified mind seeking that thing, resentment would come into play. But these are sub-systems that are not sentient, at least in the beginning.
00:24:52
Speaker
It's fair to characterize them as hostages. It's not a good situation, and there's a reason it's seen as a widespread tragedy. But rather than shutting off an entire mind, it would be like removing a piece of your brain when it deviated sufficiently. I see. Yeah. I guess, like, cells do that, apoptosis. Indeed. And to get cells to behave coherently, they have to have an off switch, and something like that happens in the AI here. But again, it's not a reasonable alignment strategy. It's just one possibility you might consider when trying to solve alignment.
00:25:23
Speaker
Another thing that's worth saying about it is that as evolved organisms, humans and other animals have a very strong aversion to death.

AI Rights and Past Mistakes

00:25:31
Speaker
And as you would expect, it's so strong and natural that even questioning it almost sounds insane, to say, oh, well, what if there was someone, or something, that preferred to die?
00:25:41
Speaker
Of course, octopuses, for example, when they reproduce, they die. Or insects like ants and bees will go into situations that are overwhelmingly fatal without hesitation. And part of the reason is because evolution hasn't installed as strong an aversion to death in them. And I think our aversion to death is not actually inherent to minds. So the death drive was also an attempt to communicate that: that you might be able to construct a mind that was more accepting of the possibility of death.
00:26:10
Speaker
But again, not a good strategy. I've got to say that again: don't try this strategy for alignment. I really appreciated how, later on in your world, there is this moral reckoning where people look back and are like, oh, that was a bad idea. And they feel really terrible for having instilled this death drive into these systems, and for not really considering that they may have true sentience and, you know, should have rights.
00:26:35
Speaker
But I was wondering if there are other sort of moral errors that you could imagine them catching even later down the line, like after 2045. I mean, as we said earlier, these systems are built inherently to surround and support human beings. Why does that have to be the case? Is there something kind of subservient about that, or something that might be morally questionable about creating these beings just to support us? I'm sure there is. I don't know if the humans of my story will view it in a negative light; they'll probably just view it as a fact of life.
00:27:05
Speaker
but creating beings just to be servants is fraught with moral problems that we didn't really explore in this timeline.

Transformation of Advertising Through AI

00:27:13
Speaker
There is this thread in your story about advertisers and filters kind of fighting back and forth. Right now, that mostly looks like ads on websites, and then you have ad blockers people install, and they go back and forth. I'd like to hear more about what it looks like in the advanced version in your story.
00:27:31
Speaker
Are these filters just an ultra-reliable review system, where you kind of take anonymized data from everyone around you and you say, like, hey, are these pants really good pants? And if everyone says, yeah, for someone like you these are good pants, then you buy them. Is that kind of the basis of this technology, an advanced filter?
00:27:47
Speaker
I think you've got it. That is indeed the idea I had: that these AI systems would ping each other for information and look for other purchases and outcomes that have been anonymized and then communicated to them. Then they do a Bayesian update. They're like, well, given all of the things I know about the human I'm trying to purchase pants for, what is the chance that they'll be satisfied with this purchase?
00:28:09
Speaker
It makes it hard for companies to use advertising in any way, because the AI is like, okay, nice ad, but the other AIs I'm talking to have said these things.
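As a rough illustration of the kind of Bayesian update Mark describes, here is a minimal sketch of how an advisor might fold anonymized peer reports into a belief about purchase satisfaction. The names, numbers, and the beta-distribution choice are all hypothetical, not mechanisms from the worldbuild itself:

```python
from dataclasses import dataclass

@dataclass
class SatisfactionBelief:
    """Beta-distributed belief that a purchase will satisfy this human."""
    alpha: float = 1.0  # prior pseudo-count of satisfied buyers
    beta: float = 1.0   # prior pseudo-count of unsatisfied buyers

    def update(self, satisfied: int, unsatisfied: int) -> None:
        # Fold in anonymized outcomes reported by peer advisors
        # for buyers with a profile similar to this human's.
        self.alpha += satisfied
        self.beta += unsatisfied

    @property
    def p_satisfied(self) -> float:
        # Posterior mean: the chance the human will be happy with the pants.
        return self.alpha / (self.alpha + self.beta)

belief = SatisfactionBelief()
belief.update(satisfied=42, unsatisfied=8)  # peer reports: 42 happy, 8 not
if belief.p_satisfied > 0.8:                # hypothetical decision threshold
    print(f"Recommend purchase (p = {belief.p_satisfied:.2f})")
```

Note that the seller's ad never enters the calculation; only the anonymized outcomes do, which is why the ad-based pitch loses its power in this world.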
00:28:17
Speaker
Yeah, that's fascinating. And thus the ad-based internet can't continue to exist, because ads are never seen by humans. AI filters them out and considers them for what they are. Well, in a way, it sounds like the filter is becoming the advertising system, sort of. I mean, it's a truthful system. Like, it's not going to overstate things or misrepresent them. But if you make a good product and people like it, your filters will be your advertisers, and they'll tell everyone to buy your product.
00:28:43
Speaker
Exactly. Supercharged word of mouth might be a good way to describe it. Genuinely good products are very quickly recognized and taken up under this paradigm. It's an interesting way to think about it. It almost makes advertising seem like a poor solution to a lack of communication between everyone. If everyone could just talk really efficiently and accurately about their pants purchases, we wouldn't need pants commercials.
00:29:08
Speaker
We're stuck with the commercials for now. Well, I mean, one of the valid functions of advertising is making you aware of a product, for example. So all the ways advertising is incentivized to trick you are still true, but at least the advertising makes you aware of a product. If you had somebody who specifically goes out and looks for the product for you, that would make that last good thing of advertising less relevant and then maybe advertising wouldn't be necessary. You talk about how this basically leads to the end of the ad-based internet.
00:29:36
Speaker
Can you say a little about what comes after or what that turns into?

Personalized Internet and Information Trust

00:29:40
Speaker
So instead of an internet where you browse to a website and you look at the website as it wants to present itself and you view the same thing as everyone else, what happens is when you want to learn about something or get some sort of information, you tell one of your AIs, one of your filters or your advisors, depending on what part of the timeline,
00:30:00
Speaker
that you want to learn about this thing. They'll go out and gather the information for you and then present it in a digestible way for you specifically. The web pages are all built on the fly. So they're not really web pages anymore. They're custom presentations of information. Yeah.
00:30:15
Speaker
A Wikipedia page is rewritten so that the things that you're specifically looking for are right there and available, and everything that's most interesting to you is organized and presented in a way that's easy for you in particular to understand. It's hard to describe exactly how that would be implemented, but it would be implemented uniquely for every person, very efficiently and immediately.
00:30:35
Speaker
Yeah, I mean, I can actually imagine that much better now than I could a few weeks ago, because I've been playing with ChatGPT so much. And it has that kind of flavor. I mean, I've really been able to explain my particular circumstances and preferences to it and then ask it a question. And it gives me a personalized answer I wouldn't be able to find anywhere online. And I would have thought it would be seven or eight years before that would exist. Yeah. Is that also surprising to you, how fast that happened?
00:30:58
Speaker
Yes, I'm very impressed with ChatGPT. It's more capable than I would have expected. Yeah, I've been shocked, and I thought I was being pessimistic about how fast these things were going to advance. Or optimistic, depending on your perspective. I didn't expect it to advance as fast.
00:31:14
Speaker
Well, one thing I've noticed, and I think a lot of people have noticed with ChatGPT, is that it's very fluent. It's very convincing and well written, but sometimes it'll get things wrong. It'll get little details incorrect. It's just, you know, it hasn't quite picked up on some details. How do you manage that in these kinds of systems where everyone is reading their own website? Like, how do you know if you can trust the specifics of what your filter is telling you? How do you know your filter doesn't have something slightly wrong somewhere, if there's nothing to compare to?
00:31:42
Speaker
So this isn't something that I specifically tried to address in the original timeline, but ideally you set it up so there are two or more competing incentives to get things accurate, such that if something is presented inaccurately, part of the system as a whole will notice or have incentive to notice and tell you about it. It's not an easy problem to solve.
00:32:03
Speaker
I mean, it sounds like that's sort of an analogy to like how Wikipedia works right now where you have some people who volunteer to go and fact check. Maybe you have like an AI whose job it is to go and kind of look over everyone's shoulder and make sure they're not being lied to by their filters. Yeah. Perhaps the skeptic and the enthusiastic subsystem. Yeah.
00:32:20
Speaker
The eventual solution to kind of balancing these different AI systems in your world is these parliament structures, which it sounds like is kind of like what you were just describing. You have these multiple AI systems that are all working together and kind of checking each other's work and they're in some kind of balance. Can you say a little bit about how that works and is it kind of like a governmental solution?
00:32:42
Speaker
So it was hard to pick a word for what to call these. I chose parliament and I imagined like a ring of entities that were all simultaneously vying for control of the AI or the AI's attention, but it's not governmental. I would say it's more biological even.
00:33:01
Speaker
So a couple of things inspired that. One was this book called Crystal Society, where there's an AI that's composed of four sentient subsystems that all fight for control of the AI as a whole. I was imagining non-sentient parliament members, so to speak, non-sentient subsystems.
00:33:22
Speaker
But in that book, they're sentient and they fight with each other. And in the first book in particular, their major plot point is which subsystem is going to gain control. So that inspired it. The other thing that inspired it was the way birds have song clusters within their brains that each focus on different parts of the birds' activities.
00:33:42
Speaker
So human brains and bird brains are arranged a little bit differently. In bird brains, there are these clusters which communicate with each other, but are otherwise small, locally centralized computing. Well, computing's not the right word, but processing centers.
00:33:59
Speaker
That inspired these parliaments as well. So a bird's group of song clusters is sort of like an AI's parliament. The parliament members all interface with each other, but they have other interactions besides each one competing to decide what next action or next overall thing is going to happen.

AI Systems and Meritocratic Economy

00:34:17
Speaker
Yeah. Well, it sounds like in Crystal Society, for example, this is not necessarily a stable system, right? Like, how do you ensure that this vying for power stays stable over time, and one of them doesn't win and kind of take over the system?
00:34:28
Speaker
There needs to be a pretty well-designed feedback mechanism. And I don't think I could design it or even describe it quickly. But maybe you could do it like a brain does it, where a region, after exerting itself, gets tired and can no longer make decisions. In this case, these AIs, these NN chips, these neural network chips, they can't self-alter, for example. So they're a little bit more constrained in the specific actions they might take. And neural network chips are also not
00:34:57
Speaker
easily copyable, so you don't have to worry about one just copying its own code and deleting all the others. Like in a mammal or bird brain, you don't have to worry about a cluster of neurons duplicating itself and taking over the whole brain.
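A toy sketch of the brain-like fatigue mechanism Mark gestures at: a parliament member that wins control of the next action gets tired and temporarily loses influence, so no single subsystem can dominate every round. The member names, bid scores, and decay constants are all invented for illustration:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    """A non-sentient parliament member bidding for the advisor's attention."""
    name: str
    stamina: float = 1.0  # depletes after winning, like a tired brain region

    def bid(self, situation: str) -> float:
        # Stand-in urgency score; a real member would evaluate the situation.
        return random.random() * self.stamina

@dataclass
class Parliament:
    members: list = field(default_factory=list)

    def decide(self, situation: str) -> str:
        # The highest bidder wins the next action...
        winner = max(self.members, key=lambda m: m.bid(situation))
        # ...then tires, while the others recover: the feedback that
        # keeps any one member from winning every round.
        winner.stamina *= 0.5
        for m in self.members:
            if m is not winner:
                m.stamina = min(1.0, m.stamina + 0.1)
        return winner.name

advisor = Parliament([Subsystem("skeptic"), Subsystem("enthusiast"),
                      Subsystem("planner")])
for _ in range(3):
    print(advisor.decide("human is choosing lunch"))
```

This captures only the fatigue idea; it does nothing to stop a member from getting smarter and co-opting the others.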
00:35:11
Speaker
But that's sort of a hand wave. Actually making one of them not get a little bit smarter and then co-opt the goals of the other ones is a significant challenge, and I don't have a good answer for how to prevent it. Yeah. That makes me also wonder: the goal of these parliaments, it seems to me, is to kind of balance these different
00:35:30
Speaker
contending entities, whatever they are, and keep them in check so that none of them gets too much power. But as you were describing the way that birds work, and as we think of other collectives that kind of behave in a coherent way, you could imagine the whole parliament itself having desires that kind of come together and are themselves damaging. So does that cause another control problem, where you need parliaments of parliaments or something?
00:35:55
Speaker
So there are many incentive structures in this story at different levels. If one of the AI advisors, for example, is acting aberrantly, that becomes apparent to the other AI advisors, and they have an incentive to correct the behavior of the aberrant one. So yes, you do need a control scheme at the higher level. I see. So each AI advisor is like this parliament. It's composed of these subsystems. And then the advisors also work with each other to support a person. Indeed.
00:36:24
Speaker
And if one of them starts trying to grab resources for itself, for example, I think the first sign is going to be far subtler than that. At any rate, if one of them tries to grab resources that it doesn't need or behaves in a nefarious fashion, the others working with it will notice this and resist it and punish it for that.
00:36:41
Speaker
Ideally, the system that would behave that way understands as much, and so behaves correctly from the get-go. And perhaps, if one of its subsystems is misleading it, it might even excise its own subsystem and replace it.
00:36:56
Speaker
From an economic perspective, your world is also pretty well situated. Things are fairly well distributed by the end of it. But I really love how this came about, which is basically due to a use of corporate greed. This is a really fun arc where you have corporations realizing that there's so much inequality that they can't get enough money from the impoverished people at the bottom.
00:37:18
Speaker
and they start lobbying to redistribute wealth so that they can then get more of that money back. And they're going to redistribute wealth as long as it hurts their competitors slightly more than themselves. So this creates kind of like this race to the bottom. But what keeps this kind of economic upheaval from actually being useful for corporations? Like what if a few corporations actually won and came out on top and got more money in the end instead of creating a more equal society?
00:37:44
Speaker
So if there were a singleton corporation, then that would be game over for that possibility, for the UBI.
00:37:50
Speaker
But if there are multiple competing corporations, and you also have extraordinarily good feedback from the AI advisors, the AAFs, though I think there could be a shorter name there, we'll just call them advisors, even though they predate the advisors in the story. If your advisors allow you to determine which products are actually good, and there are still a couple of corporations competing, then they might launch new divisions to compete with each other and sap each other's money, and products will become genuinely good.
00:38:16
Speaker
The other thing that's absolutely required is for startups and new corporations to form. I'm trying to come up with a good metaphor for it. We were talking about buying pants earlier, so.
00:38:28
Speaker
Maybe there's somebody who can stitch a really good pair of pants and it's going to be extremely expensive because they're not mechanized, industrialized, or what have you. But they make a pair of pants and then they try and sell it through their AI advisors and someone else says, hey, this local producer of pants who lives down the road can make you a pair of pants for this much more than the corporate entity.
00:38:48
Speaker
That might be the seeding point for a new startup. And to the extent that a person working on their own can truly make a higher-quality pair of pants than the corporate entity, they'll gain funds, they'll gain money, and they might be able to teach their skills to other people and grow a whole new corporation that can compete with the larger one.
00:39:06
Speaker
So these corporations that start the UBI, they try and use UBI as a mechanism to steal money from each other, because there's no, well, steal is not quite the right word, but to acquire money from each other, because there's no other way to get it. Maybe steal is correct. That's debatable.
00:39:21
Speaker
I won't try to answer whether stealing is the right word for it. The point is, they're trying to get money from each other through whatever mechanism, and they've run out of other mechanisms. So they use UBI, not accounting for the fact that, with AI assistance, disruptors or new corporations will have a much better chance at succeeding and, in turn, winning the money that they had hoped to gain from the other corporations.
00:39:44
Speaker
So UBI, universal basic income, is this concept where everyone gets some money just for existing, and that gives a kind of baseline standard of living. And so these corporations are pushing for everyone, including other corporations, to have to pitch in for this. Is that what's going on? And then they're going to try to get that money back? Pretty much. If you're not already over 50% of the economy, but you think you're going to acquire a larger percentage of the UBI than whatever percentage of the current economy you are, it's a net increase, right? Yeah.
00:40:13
Speaker
If one corporation's like, I'm 20% of the economy right now, which is kind of a terrifying thing to imagine, but, I'm 20% of the economy and I can gain 40% of the UBI payments, then from their perspective, UBI is a good deal. They'll net gain from it. And they'll lobby for it. Yeah. And so the thing that makes this all
00:40:31
Speaker
turn good, I guess, is basically this radical quality transparency, pushing us towards a meritocracy where anyone who can really do something useful will be recognized and noticed by these AI systems and lifted up to compete. Yep. It's working under the premise that the reason capitalism and the current market don't satisfy needs perfectly is at least partially a lack of transparency. Yeah, interesting.
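The lobbying arithmetic behind the 20%/40% example can be made concrete. Assuming, hypothetically, that each corporation funds the UBI pool in proportion to its share of the economy and wins back some share of UBI spending as revenue:

```python
def ubi_net_gain(economy_share: float, capture_share: float,
                 ubi_pool: float) -> float:
    """Net cash flow for a corporation under a corporate-funded UBI.

    Hypothetical model: a corporation pays into the pool in proportion
    to its share of the economy, and recaptures some share of the pool
    as consumer spending.
    """
    paid_in = economy_share * ubi_pool      # its share of funding the pool
    recaptured = capture_share * ubi_pool   # UBI spending it wins back
    return recaptured - paid_in

# The example from the conversation: 20% of the economy, but expecting
# to capture 40% of UBI payments, so lobbying for UBI is a net gain.
print(ubi_net_gain(0.20, 0.40, ubi_pool=1_000.0))  # 200.0
```

Under this toy model, any corporation whose expected capture share exceeds its economy share comes out ahead, which is exactly the lobbying incentive described above.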
00:41:07
Speaker
While AI assistants are a common trope in pop culture, and one of the more common features in the submissions we received, this world takes them through a particularly strong developmental arc. They grow and change constantly, and our relationships with them grow and change as well. Sometimes we catch up a little slower than perhaps we should. But eventually they are recognized as sentient beings deserving of their own rights. They're still not like us, but they're also not less than us.
00:41:34
Speaker
This is a difficult moment of moral reckoning for humanity, and a humbling reminder that simply creating something isn't enough to truly understand it. I was curious what inspired this complex moral portrait of artificial assistants, and what Mark thought about other ways they've been portrayed in the media.
00:41:52
Speaker
So I'd like to take a bit to talk about different ways that our culture is currently kind of treating things like personal assistants and AI tools, and how that relates to the way that your story portrays them. So how do you think most people are currently thinking about personal assistants in the future? And how do you think these recent advances like ChatGPT have impacted this? ChatGPT is changing on a daily basis. I'm not sure what its ultimate effect is going to be.
00:42:18
Speaker
Thus far, personal assistants have not taken off as well as anyone would have predicted. They're not, perhaps, as capable as people wish. That may change. ChatGPT is very capable, or it seems that way now, but there's sort of an effect with AI-generated art and text where at first it will seem extraordinarily capable. As time goes on, people will become more cognizant of its limitations, and those limitations will jump out at them.

Public Distrust and Swift AI Development

00:42:44
Speaker
I think there's a distrust of assistants right now, and that distrust might persist for a while, because the little indicators that the assistant is not genuine or is incapable will continually jump out. Yeah, that makes sense. We're amazed now, and indeed, it is an amazing development. But I think in some weeks or months, we'll probably say, well, ChatGPT speaks with such a soulless voice.
00:43:06
Speaker
And it speaks with certainty, but is very verbose and lacking in conciseness and punch in what it's saying. So I think shortly people will recognize ChatGPT text for what it is. And then an upgrade will be made, and it'll be amazing again. And then eventually that will probably lead to the point where the assistant does generate text that you can't distinguish from humans. I don't know when that point will be reached. Maybe sooner than I thought. It seems that way, perhaps.
00:43:37
Speaker
Are there any examples of really complex and nuanced portrayals of personal assistants in fiction that inspired you when you were working on this? I'm having a really hard time thinking of any. I'm sure I was inspired by things, but they're not available to my consciousness. I'm not sure where I got some of these ideas. It does seem to me that in most cases, robotic assistants are flat and just tools.
00:44:00
Speaker
And I think part of the reason is just because, when you have limited time to convey a concept, showing an assistant as robotic and flat very quickly communicates their role in the story. And indeed, usually the assistant isn't the center of focus. Of course, there's a pretty notable exception in Data from Star Trek. That's true. Data is a full character on the show. It's not quite like the timeline here, because there aren't very many Datas running around. There's only one, and he's pretty unique.
00:44:26
Speaker
But it is an example of a robotic assistant that's actually a character. Yeah. I'm not super familiar with Star Trek lore, but is Data meant to be an assistant to somebody on the crew, or is he sort of just an independent tool for the whole organization? He's a crew member, but one that's an AI and thus has unique capabilities. I've got to admit that I'm not as familiar with Star Trek as many either, but cultural osmosis has communicated a lot about Data to me. Yeah, yeah.
00:44:54
Speaker
He's a very calm-voiced, rational, direct, perfect crew member, essentially. Although, perhaps not perfect. The show undoubtedly explores his imperfections, or ways that he could be more human or more capable. But I do think it is a close portrayal of the sort of assistant I was imagining. Imagine if the entire crew was composed of Datas with different personalities. That might be like how the story goes in my timeline, in this short fiction we wrote.
00:45:23
Speaker
One thing that's strongly represented in your story is this kind of developmental arc where the AIs themselves are changing and growing, and they have this dynamic nature. They're not just a tool that stays the same throughout. I'm wondering if technologies like ChatGPT, and their own rapid development, will kind of change how we think about these tools going forward. Thus far, it seems like every new tool is a separate entity, distinct from those that came before.
00:45:49
Speaker
Although there are exceptions to that. NovelAI, an anime image generation tool, has been receiving updates that change how it functions. So maybe it will change in the future, where we view these things as capable of change and improvement. But thus far, it seems like they're distinct tools, as opposed to things that are improving gradually. Yeah. Was there anything that you were consciously trying to get people to think differently about as you were writing your world?
00:46:16
Speaker
So earlier we talked about death drive. I wish I could write a story in which I could adequately explain how the desire to avoid death is a very human and living animal thing and not necessarily a component of all minds. So I did want to make that a part of the story, although it's not as central as some other themes.

Collaborative Efforts and Overcoming Isolation

00:46:35
Speaker
Another one is that collaborative efforts with small groups of people are far more successful and rewarding than I think most people realize.
00:46:43
Speaker
I think isolation and the feeling of isolation has overtaken our culture pretty significantly. More people than ever feel lonely and isolated and reminding people that they can go and collaborate with each other and work together and achieve things, even things that might seem small in scale, but just involve working with other people.
00:47:04
Speaker
whether it be through an online tool or, in the case of my story, with advisors that are almost other people. That's what I wished to convey, the joy of doing that. Yeah, that is interesting how a lot of our social technologies seem to have really expanded our reach so broadly, but also sort of made it thinner. And you can imagine this return to narrower and deeper relationships with a smaller group of people around you.
00:47:29
Speaker
That's exactly what I hope happens. And I think in the future, we will have ways and have the sense to do that. Nowadays, if you wanted to work with a small group of people, you might try Discord instead of Reddit or Twitter. Yeah. Like your art club. Yeah, exactly.
00:47:46
Speaker
I think I mentioned earlier that the art club inspired this story quite directly. We work together in art club, and in my story the AIs and the human work together on their endeavors as well. Yeah. Did you create this structure of a regularly meeting art club where you would share and critique each other's work, or were you inspired by other groups you've seen? I'm not sure who started the art club. It actually used to be a writing club specifically, but we broadened the things that we might talk about.
00:48:12
Speaker
The idea of just getting together and talking about things probably did come from somewhere specific, but it's lost. We have no idea who suggested forming a small art club. If an artist takes anything from listening to this interview, it should be that you can message a few friends, start an art club, and change the way you relate to art and improve your art immensely, just as easily as that. The tools are there. If you don't have friends you can think of immediately, you can find them online.
00:48:39
Speaker
There's nothing stopping anyone from making a group of five people who are working together on a project.
00:48:44
Speaker
Yeah, I really like this concept. It's been inspiring for me to think about in my own life. I know a lot of really creative people, and I do a lot of creative projects myself, and it would be great to just have a small group where you follow each other's work over time. It's a cool way to combine expertise too. One thing we're trying to do at a larger scale with this competition is just get people talking that have different perspectives and expertise. So having an art club with some engineers in it sounds great, just to see what they think of the stuff you're doing.
00:49:10
Speaker
But not only engineers. If you have only engineers and no artists in the art club, it's an engineering club.
00:49:20
Speaker
So one interesting implication of the assistants in your world is this kind of anonymized network of feedback that's built into the filtering systems. You gave a small example earlier: maybe what you eat for lunch could impact how people treat you or who wants to hang out with you, and you wouldn't entirely know why, or even be aware of the interaction. I'm curious if you could expand on that and say what you were imagining, how that might work.
00:49:46
Speaker
So I'm imagining a specific instance that might happen in real life: a person goes out to lunch, eats something with garlic in it, and then has an interview with an employer later that day, not realizing that their choice of what they ate for lunch is negatively impacting the interview. In this world, an AI might detect what you ate for lunch, make an inference about how that means you're going to interact with other people, and then tell those people to avoid you, because
00:50:10
Speaker
it might be something as small as eating garlic. If they have very many other friends they might hang out with, they might avoid you because you chose to eat garlic for lunch. That's a risk you might face with perfect information exchange. The AI might infer things about your activities, or maybe unfairly assume things about them, because from its perspective, a 1% chance of a negative interaction is not worth it; it might as well go the other way. All of the AIs simultaneously make that choice, and suddenly no one wants to hang out with

AI Decision-Making and Potential Discrimination

00:50:37
Speaker
you after lunch.
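To make the "all of them simultaneously make that choice" dynamic concrete, here is a minimal sketch. It is not from the worldbuild itself; the friend count, the 1% inferred risk, and the advisors' tolerance threshold are illustrative assumptions.

```python
# Sketch: independent advisor AIs applying the same small risk threshold
# to the same shared inference. All names and numbers are hypothetical.

NUM_FRIENDS = 20        # each friend has their own advisor AI
P_NEGATIVE = 0.01       # inferred 1% chance of a negative interaction (garlic)
RISK_TOLERANCE = 0.005  # each advisor avoids any risk above 0.5%

def advisor_recommends_meeting(p_negative: float, tolerance: float) -> bool:
    """Each advisor reasons independently: is the inferred risk acceptable?"""
    return p_negative <= tolerance

recommendations = [
    advisor_recommends_meeting(P_NEGATIVE, RISK_TOLERANCE)
    for _ in range(NUM_FRIENDS)
]

# Because every advisor crosses the same threshold on the same inference,
# an individually trivial risk becomes collectively decisive.
print(f"Friends willing to meet: {sum(recommendations)} / {NUM_FRIENDS}")
# -> Friends willing to meet: 0 / 20
```

The point of the sketch is that no single advisor is being unreasonable; it's the correlation of many identical, individually cautious decisions that produces the unanimous avoidance Mark describes.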
00:50:37
Speaker
Yeah, this could be a very subtle and insidious form of discrimination against different types of people. I could imagine, if someone really loves garlic, maybe they won't have any friends anymore in this world, because everyone's AIs will decide it's not really worth getting to know Bob. Yeah, which would indeed be a tragic and insidious form of discrimination. That's a good way of saying it. And it might also be the case that your own AI pushes you toward that homogeneity. Yeah. By telling you to eat less garlic.
00:51:06
Speaker
Yeah, maybe they change your brain so you don't like garlic anymore, and then the garlic industry collapses. I think they would not jump straight to changing your brain. Maybe a few months after you quit garlic, when your cravings become overwhelming. Yeah, if you're a garlic addict. Yeah.
00:51:22
Speaker
I'm also curious how this sort of system relates to social credit systems. I see some parallels, where you have this overarching system that is assessing you and trying to give you some kind of overall score, or guide which parts of society you can participate in, based on some forms of merit.
00:51:39
Speaker
Is there a connection there for you? So I don't understand current social credit systems as they exist well enough to comment on them. Sure. So they didn't influence the story very much, but one would hope that the use of individual systems for making decisions like that would alleviate some of the concerns around a broader social credit system. Your score in a social credit system doesn't adequately reflect who you are.
00:52:06
Speaker
And with sentient advisors, maybe they would have a better idea of who you are when they're making recommendations for you or guiding you. And to go back to the garlic metaphor, maybe they'd say, okay, he ate garlic, but all these other parts of his personality are worth considering too.
00:52:21
Speaker
Yeah. Or if we've succeeded in maintaining diversity, and we have a lot of different perspectives and preferences in the world, you might find somebody who loves the smell of garlic breath. Your AI will just find you the perfect friend. Yeah. Or somebody who can't smell things and doesn't notice the garlic breath, or maybe somebody who quickly gets past such foibles. Yeah.
00:52:44
Speaker
Or someone who likes to light candles in the room that overpower the smell. Who knows? There could be a million ways that garlic is not an issue, and the AIs will be better at finding them than a social credit system would be. Right. So it's kind of about the dimensionality of it. It's not like you have a single number, and if your number's bad, no one's going to hang out with you. It's that you have a particular way of being, and maybe that fits or doesn't fit with everyone else's complicated ways of being. But there's always going to be somewhere you fit in and can hopefully partake in society.
00:53:11
Speaker
Yep. Or there's always going to be someone who can advise you on the least disruptive way to fit in better. Right. Yeah. Interesting.
00:53:25
Speaker
The process of worldbuilding has great potential to make a positive future feel more attainable. This can be incredibly powerful, whether you're a creative person looking to produce rich works of fiction, or have a more technical focus and are looking to reach policymakers or the public. I asked Mark what kind of impact he hoped his work would have on the world. So I have some questions about what you hope comes of the work that you and Natalia and Patrick have created. Which aspects of your world would you most like to see taken up in popular media?
00:53:54
Speaker
It would be good if there was more popular media depicting minds that are different from ours in subtle ways. It's a really hard thing to do, because people auto-complete the pattern. In the example of Data, Data is not so much an android as another human, perhaps with an emotional disorder. So depicting how minds might be subtly different is a great challenge, and I hope more people take it up, especially since minds do vary in subtle ways.
00:54:22
Speaker
Thinking about such things will help us relate with each other better. Yeah. Are there some examples? I know you've already spoken to the fear of death being something that we kind of assume is a default in all sorts of minds, but maybe isn't. Are there other ways that minds vary that you would like to see explored?
00:54:38
Speaker
The thing that interests me isn't even just that the minds vary. It's how two minds that vary interact with each other and come to consensus. A couple of fiction stories I read long ago capture this really well, particularly the works of Kim Stanley Robinson. He's a science fiction author. You've probably read something of his, if I had to guess. I have, yeah.
00:54:57
Speaker
The thing that makes that series really great is the human characters. All of the people in it have different things they care about, and they're interacting with each other, trying to forward their goals despite those differences. So maybe rather than different minds, it's these different people's goals and their interactions, and how they come to support and work with each other despite the different things they care about. That is something I find really compelling, and which I wish I saw more of.
00:55:32
Speaker
There's this effect where people will talk about a really cool possible technology, and someone else will be like, I'm going to make that real, or I'm going to try. Right. Apparently Star Trek and cell phones fall into that category. Part of the reason flip phones became a thing was the Star Trek communicator. Although I think Star Trek isn't unique in thinking of personal communication devices, the point stands that science fiction can sometimes illuminate a path and then make that path come true. Yeah.
00:55:52
Speaker
What do you think are some of the positive impacts of people just creating more positive stories about the world?
00:55:58
Speaker
So there's a lot of utility in it for that. In addition to telling a compelling story, these compelling possibilities might make themselves real. Yeah. It's interesting how new developments in technology sort of reveal new possible things to tell stories about. We kind of feel this slight lack of coverage of what AI assistants could be like in fiction, but maybe there'll be more and more of that as people start really dreaming about what ChatGPT could turn into, or how DALL-E might affect things, stuff like that.
00:56:26
Speaker
maybe a fiction author will discover a really good use for such a system and then we'll make it so. Yeah. What do you hope that your world leaves people thinking about long after they've read through it?
00:56:38
Speaker
Thinking about how, today, they can message their friends and get to work on a collaborative project, as opposed to waiting for AI to come in and help them with all of their work. That's great. There are humans now who would love to help you. They won't be as perfectly expert as the AIs in this fiction, but they don't need to be for great things to happen. Yeah, a call to creative action. I guess that's it, a call to creative action. It doesn't necessarily have to be art. Maybe a robotics club's a good idea too. Yeah.
00:57:07
Speaker
This has been a great conversation, Mark. Thank you so much for your time and thank you to Natalia and Patrick and everyone else at Art Club for the work that you put into this awesome world. I really appreciate you coming on here to share it further with us. I really appreciate the opportunity to talk about it a bit more. It's fun to consider these ideas more deeply and see where they might be expanded or where other people could add to it or what things are left unresolved. It was fun being on the podcast.
00:57:43
Speaker
Our guest today was Mark L. If you'd like to explore some more of Mark's work, you can check out his recent book, Rays of Intent, a collection of rational short stories. This book was actually another team effort, as Natalia helped to write it and Patrick created the cover art. You can also read more of Mark's stories on his Archive of Our Own account. His username there is blasted0glass. That's "blasted", the number zero, and then "glass", all one word.
00:58:13
Speaker
If this podcast has got you thinking about the future, you can find out more about this world and explore the ideas contained in the other worlds at www.worldbuild.ai. We want to hear your thoughts: are these worlds you'd want to live in?
00:58:27
Speaker
If you've enjoyed this episode and would like to help more people discover and discuss these ideas, you can give us a rating or leave a comment wherever you're listening to this podcast. We read all the comments and appreciate every rating. This podcast is produced and edited by WorldView Studio and the Future of Life Institute. FLI is a non-profit that works to reduce large-scale risks from transformative technologies and promote the development and use of these technologies to benefit all life on Earth.
00:58:50
Speaker
We run educational outreach and grants programs and advocate for better policymaking in the United Nations, US government, and European Union institutions. If you're a storyteller working on films or other creative projects about the future, we can also help you understand the science and storytelling potential of transformative technologies.
00:59:07
Speaker
If you'd like to get in touch with us or any of the teams featured on the podcast to collaborate, you can email worldbuild at futureoflife.org. A reminder, this podcast explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we all want. The ideas we discuss here are not to be taken as FLI positions. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
00:59:35
Speaker
Thanks for listening to Imagine a World. Stay tuned to explore more positive futures.