
Imagine A World: What if global challenges led to more centralization?

Future of Life Institute Podcast
What if we had one advanced AI system for the entire world? Would this lead to a world 'beyond' nation states - and do we want this? Imagine a World is a podcast exploring a range of plausible and positive futures with advanced AI, produced by the Future of Life Institute. We interview the creators of eight diverse and thought-provoking imagined futures that we received as part of the worldbuilding contest FLI ran last year.

In the third episode of Imagine A World, we explore the fictional worldbuild titled 'Core Central'. How does a team of seven academics agree on one cohesive imagined world? That's a question the team behind 'Core Central', a second-place prizewinner in the FLI Worldbuilding Contest, had to figure out as they went along. In the end, the entry's realistic sense of multipolarity and messiness reflects its organic formulation. The team settled on one core, centralised AGI system as the governance model for their entire world. This eventually moves their world 'beyond' nation states. Could this really work?

In this third episode of 'Imagine a World', Guillaume Riesen speaks to two of the academics on this team, John Burden and Henry Shevlin, representing the team that created 'Core Central'. The full team includes seven members, three of whom (Henry, John and Beba Cibralic) are researchers at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, and five of whom (Jessica Bland, Lara Mani, Clarissa Rios Rojas and Catherine Richards, alongside John) work with the Centre for the Study of Existential Risk, also at Cambridge University.

Please note: This episode explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we want. The ideas presented in these imagined worlds and in our podcast are not to be taken as FLI-endorsed positions.

Explore this imagined world: https://worldbuild.ai/core-central

The podcast is produced by the Future of Life Institute (FLI), a non-profit dedicated to guiding transformative technologies for humanity's benefit and reducing existential risks. To achieve this we engage in policy advocacy, grantmaking and educational outreach across three major areas: artificial intelligence, nuclear weapons, and biotechnology. If you are a storyteller, FLI can support you with scientific insights and help you understand the incredible narrative potential of these world-changing technologies. If you would like to learn more, or are interested in collaborating with the teams featured in our episodes, please email [email protected]. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.

Media and concepts referenced in the episode:
https://en.wikipedia.org/wiki/Culture_series
https://en.wikipedia.org/wiki/The_Expanse_(TV_series)
https://www.vox.com/authors/kelsey-piper
https://en.wikipedia.org/wiki/Gratitude_journal
https://en.wikipedia.org/wiki/The_Diamond_Age
https://www.scientificamerican.com/article/the-mind-of-an-octopus/
https://en.wikipedia.org/wiki/Global_workspace_theory
https://en.wikipedia.org/wiki/Alien_hand_syndrome
https://en.wikipedia.org/wiki/Hyperion_(Simmons_novel)
Transcript

The Problem with Silicon Valley Optimism

00:00:00
Speaker
on this episode of Imagine a World. I think optimism is very much associated with a certain kind of, to do some crude stereotyping here, a certain kind of Silicon Valley libertarianism. And I think that has given optimism a bad name among certain people, because it's a very monolithic vision. But I would like to see more pluralistic visions of how AI can make the world a better place, or how the future could be better than the present.

Introduction to the Imagine a World Mini-series

00:00:29
Speaker
Welcome to Imagine a World, a mini-series from the Future of Life Institute. This podcast is based on a contest we ran to gather ideas from around the world about what a more positive future might look like in 2045. We hope the diverse ideas you're about to hear will spark discussions and maybe even collaborations. But you should know that the ideas in this podcast are not to be taken as FLI endorsed positions.

Exploring Core Central - A Unified AI Government

00:00:53
Speaker
And now, over to our host, Guillaume Riesen.
00:01:09
Speaker
Welcome to the Imagine a World podcast by the Future of Life Institute. I'm your host, Guillaume Riesen. In this episode, we'll be exploring a world called Core Central, which was a second-place winner of FLI's worldbuilding contest.
00:01:22
Speaker
In this world, humanity becomes unified under a single world government, supported by a fantastical artificial intelligence network known as the Core System. This system is spread fractally along societal lines, with components representing different countries and communities, and even smaller ones interacting with individuals. Its global reach forms a kind of social exoskeleton that helps to support all manner of human flourishing.
00:01:46
Speaker
The core structure is centralized and monolithic, but also contains a huge range of diverse and semi-independent components. This forms a paradoxical network of intelligence that really challenges our notions of agency and individuality. Other changes in this world, such as increased lifespans, empathy-building technologies, and personalized education, are also richly explored. There's a deep sense of enduring complexity throughout. Thorny issues of justice and well-being remain thorny despite technological and economic progress.
00:02:17
Speaker
This is a world unified despite, or maybe thanks to, its engagement with all manner of difficult conversations between individuals and communities. This world was created by a team of seven, hailing from the University of Cambridge in England. They were our largest winning team, with a wide range of expertise spanning ethics, cognitive science, policy, and infrastructure. Our first guests today are John Burden and Henry Shevlin, who are both on the more philosophical side of things.
00:02:43
Speaker
Later in the episode, we also get to hear from Lara Mani, who studies the effective communication of existential risks. Her work is so relevant to this competition, so make sure you stick around for that. The four other team members are Beba Cibralic, Jessica Bland, Clarissa Rios Rojas, and Catherine Richards. So hi, John and Henry.

Building Core Central: The Cambridge Team

00:03:03
Speaker
Thanks for joining us. It's a pleasure to be here. Yeah, likewise.
00:03:08
Speaker
So you're one of the biggest teams that we had in our top 10. There's seven of you. How did all seven of these folks come together to work on this submission together?
00:03:18
Speaker
Well, so we are coming from two distinct centres, like sibling centres: the Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk, both of which are based in the same building in Cambridge and have lots of overlapping projects. But I think that meant there was a big pool of potential researchers to draw on. So that was the first part. Yeah, I think quite a few of us were perhaps interested in
00:03:43
Speaker
maybe entering individually or in smaller groups, but we sort of found out. Lara, one of the team members, decided to see if anyone wanted to do like a joint CSER-CFI submission, and we kind of just met and evolved from there.
00:04:01
Speaker
That's cool. So multiple of you, it sounds like, had come across the contest, but Lara was kind of the nucleation site that brought you all together. The cat herder; she got us all in order. That's cool. And that makes sense, since my understanding is Lara studies risk communication and what makes it effective, and has done stuff like role-playing activities and scenario-based exercises that involve group work in that way. Yeah.
00:04:25
Speaker
So you two have this kind of academic perspective, where John is looking at AI risk and power assessment, among other things. And Henry, you're looking at consciousness of AI and creativity and these other measurements. There are two other people that I, when I was going through the team, sort of classified on the more academic side. There's Beba, who's a philosophy PhD student studying ethics for emerging tech and online influence efforts. And then there's Lara, who I mentioned, who is studying ways of communicating risk.
00:04:55
Speaker
Would you guys agree with that kind of categorization, where you're kind of the more academic perspective and then there's kind of a more policy or engineering side? The only thing I'd quibble with there is, you know, the term academic; we're all academics in a way. I guess I mean more theoretical, perhaps. Right, exactly. Yeah, that's exactly right. How was it to work with a team of this size?
00:05:16
Speaker
First, it was immense fun, right? And I think that's one reason we managed to keep the team together, because, you know, with seven people, seven busy, relatively early-career academics, we all have way too much to do, but.
00:05:31
Speaker
Actually, most of us showed up for most sessions. You know, we all came into the project with our own angles and we very much at least in the initial phase kind of worked on our own bits. There was a lot of subdivision of stuff. And then sort of the latter part of it was figuring out how to make it all gel together.
00:05:48
Speaker
And I think one of the consequences of that is it made it feel more realistic. There isn't sort of one overriding narrative. There's not a single clean story.

Core Central's Technological and Societal Impacts

00:05:59
Speaker
It's a very hard world to sum up in a sentence.
00:06:04
Speaker
I think partly because it's almost like simulating a kind of stochastic element when you have so many different authors contributing their own things and then figuring out how to make it play nicely together. That's my experience. What do you think, John? I think you're right there. First of all, it was very fun. A lot of the time we were having these conversations, we were meeting up to do the world building
00:06:28
Speaker
together and we would just get chatting and, for quite a long time, not actually make any progress, because we were just enjoying chatting about these different things and where things might lead and so on. And then the deadline starts looming and we have to start actually writing this down.
00:06:44
Speaker
But yeah, with seven people all having a lot of ideas and also very different views of the world and maybe even what desirable AI could look like or what the biggest priorities for using it should be. So it sounds like this is a really intensely collaborative discussion-based process for you, which is really cool to hear.
00:07:06
Speaker
And I do think you do feel that. I agree that you have this kind of naturalness in the world, you know, the way it kind of goes, all these different directions and all these different factors are going on. And one thing I really appreciate is how many interesting little corners you kind of dive into and sketch out like discussions or conflicts that are happening, like things people in the world are arguing about. And I can see now how that might come from side discussions you had where you kind of spun off into this and were like, I guess they'd have to figure that out in this world themselves.
00:07:41
Speaker
This world explores many parallel threads, from global-scale standoffs involving lethal autonomous weapons, to anti-aging drugs, companion robo-butlers, and even artificially intelligent Facebook friends. There's a bustle and a chaos to it that feels believable. The team put a lot of thought into imagining how these changes are experienced by different individuals and societies, and I wanted to hear some of John and Henry's ideas on what life might actually be like in this shifting world that they helped to imagine.
00:08:09
Speaker
So one major feature of your world is this commitment to centralization. And this is both in terms of how AI technologies are developed, but also in terms of how governments are functioning over time. They kind of join together. Could you describe how a person in your world would experience both of these types of centralization?
00:08:27
Speaker
Yeah, so one of the ways that things are set up in our world is that a lot of the political unions, such as the EU, have sort of federalized to some larger extent than they are now. And, you know, a similar thing has happened in South America and in Africa. And this sort of originated as like a trading bloc, because, you know, we're getting to this globalized state where having a sort of trading bloc is quite important.
00:08:52
Speaker
But this sort of really came to a head when, in the last few years from our world's perspective, the AI Core Central that we have envisaged winds up becoming far more integrated in governments and such around the world, mostly just because of the natural advantages that it's able to bring. But what this winds up having in effect is this sort of
00:09:18
Speaker
almost like an unintentional cooperation going on, right? Like this AI system is sort of acting in such a way that's beneficial for all of its constituent countries that are involved, but it's aware that it's involved in other nations as well, right? And so it's sort of trying to bring things together and make choices that are equitable and helpful for all of the countries that have sort of onboarded this
00:09:47
Speaker
and integrated it into their decision-making processes. And this kind of has a snowball effect, right, because other countries that haven't initially opted into this sort of see, oh wow, hold on, this is actually quite useful, and they sort of wind up joining as well. I think at the very end of our timeline is the point where the countries are sort of about to have a vote
00:10:08
Speaker
on whether they should actually just form a single world policy and sort of have a merge of all the countries. But at that point, it's not something that's happened yet.

Challenges of Centralization and Regulation

00:10:17
Speaker
I guess from the perspective of a person living through it, like the rest of the world, it's sort of... the world is in a lot of change. There's a lot of change to orders and a lot of flux. And, you know, this is quicker than most people would see political change in their lifetime as we normally know it. I imagine it would be quite exciting, but also maybe a little bit intimidating.
00:10:40
Speaker
Yeah. Who do you think would most benefit from or be threatened by each of these types of centralization? Like, as an individual, I understand why this would happen at a larger scale, but what would it be like to see your nation kind of be absorbed into this larger thing? Is that a hopeful thing or a scary thing?
00:10:57
Speaker
Yeah, I think it depends, right? In a sense, it could potentially be better or worse depending on your current, I guess, standing in the world in terms of economy or whatever, you know, because if you're a smaller nation, you maybe have this hopeful assurance that your wishes are going to be represented and that the current system will work for you.
00:11:19
Speaker
Whereas I guess if you're one of the more traditionally prestigious nations that has a history of trying to look good or whatever, I'm not entirely... The word has escaped me here. Yeah. Those which have enjoyed more power historically. Yeah, exactly. If you're from one of these, then you maybe stand to lose your relative position in some arbitrary hierarchy.
00:11:42
Speaker
I guess also on a sort of more personal level, leaders of certain nations might realise that their current style of government or something is not particularly well suited to the new transition that is maybe coming up.
00:11:58
Speaker
Obviously things like dictators and so on are probably gonna lose out if this is something that happens, right? If you go from having unchecked power in a relatively small area of the world to, oh, okay, that's just not a thing anymore, then obviously you then have something to lose. But I guess, obviously the populations of these countries and in general are probably going to benefit from this.
00:12:27
Speaker
What about the centralization of the AI technology? Do you imagine any individuals struggling with the fact that all the AI technology becomes part of this core system? Yeah. I think this is where we maybe have to be a bit careful when we talk about AI technology. I'm sure that a lot of the technology that we are beginning to see now, which is comparatively basic compared to the type of technology that's in the world, the ChatGPTs, the assistants, the image recognition,
00:12:54
Speaker
I think that's still going to be available.
00:12:59
Speaker
Right. Maybe there are certain limits on misuse. Maybe there is more regulation from things that we see nowadays, like the EU AI Act and so on. But it's this more general AGI that is sort of more centralized. And I think that with the type of AGI that we've imagined, which is very agentic, right, like it can take actions in the real world, it's trying to embed itself in the lives of humanity and help with flourishing. I think that
00:13:26
Speaker
having two of those that may be trying to optimize slightly different things could be potentially a problem in the future. So whether or not the normal people sort of just trying to get on with their lives are super aware of this is a different question. There's probably going to be some pushback from the tech companies and so on who are now saying, well, why can't we make one of these things? Yeah, right. And I think that's something that has to be handled with proper outreach and so on in this world.
00:13:56
Speaker
You mentioned this issue of potentially having two and how they could clash. Does that imply that you kind of see the single AGI centralized model as necessary to having a good outcome for the development of AGI? I think it really depends again on what type of systems we're talking about. If we're talking about something that's sort of like an agentic AGI, then very small differences in, say, value preferences
00:14:26
Speaker
could potentially lead to big disagreements. The same way that we see countries in the real world disagreeing over things and eventually going to war. And I think that seeing or imagining an AGI as an entity that may have these preferences and is comparatively more powerful than most human entities or organizations or whatever,
00:14:50
Speaker
you know, we don't want humanity to become stuck in the middle of some power struggle over who can help humanity the best, right? So even if both of them are trying to do things that are good, you know, there might be a slight preference for one thing over another, which, if you're then optimizing in some superintelligent way, could still become, you know, a point of contention. Yeah.
00:15:17
Speaker
And in terms of the sort of governmental structure, do you see the eventual potential unification of the whole globe as kind of like a victory for humanity, like something we've achieved with the help of AI? Or do you see it as something that is sort of necessary to survive the existence of these powerful technologies?

AGI's Role in Global Governance

00:15:33
Speaker
This is a tricky question, right? Because I think there are ways in which this could be tremendously helpful, but also ways in which this could be sort of tremendously risky.
00:15:40
Speaker
We also have to consider that we're not only dealing in a world where AGI is the only threat. There are also other sorts of threats to humanity, such as climate change, nuclear war, and so on. And I think that having a sort of centralized structure can help you make decisions without the risk of falling into tragedy-of-the-commons situations. But it also definitely opens up problems in terms of potential totalitarianism if things aren't done appropriately.
00:16:09
Speaker
And I mean, maybe this is where the AI can sort of help keep us in check and we can sort of help keep it in check with a sort of hopefully neatly balanced system of, I guess, checks and balances on each of them in terms of alignment, but also in terms of the AI sort of helping us to keep the worst aspects of humanity sort of in check, I guess. Yeah.
00:16:32
Speaker
So one other thing that we really should touch on is the structure of AGI in your world, because it is really kind of unique and alien. You have this really branched thing, which is all arguably one system, but kind of at its limits presents itself as varyingly distinct subcomponents. You have these avatars that sort of are embodied, you know, entities themselves, but are also in constant communication with Core Central. Can you explain a little bit more about how this whole system works?
00:17:02
Speaker
Yeah, so at a very high level, Core Central is sort of the organizing component of the AGI system. And it oversees sort of a vast array of other core systems, which are often delegated a particular task, maybe relating to health or to the economy or to education or to particular continents and so on.
00:17:31
Speaker
And there's sort of a hierarchical distribution of tasks going down, sort of mirroring a tree in a way. But there's also communication between different cores, sort of trying to come to mutual agreements about
00:17:44
Speaker
what exactly they need from each other and how they should be interacting. So a lot of these systems will be sort of distributed throughout various places in the world. And this introduces a lot of potential latency, which for very intelligent AGI systems becomes almost agonizingly slow.
00:18:08
Speaker
And so there's sort of a necessity for there to be a higher level of sort of decentralized, you know, autonomy, if you will. And so we kind of came up with this sort of separated structure. You know, I think my biggest inspiration for this was, like, octopuses, octopi, octopods, you know, they have a lot of, almost, parts of their brain in their tentacles, right? Yeah, 40% I think, or bigger.
00:18:36
Speaker
I guess my idea was something about what if this, but on a much bigger scale, where the tentacles are still more intelligent than humans, but they're still receiving this communication from an even greater intelligence, but still have a lot of remit to act among themselves.
00:18:58
Speaker
Yeah. So this tree of intelligence, these tentacles kind of grow along the lines of society to support all the different factions and groups of humans that are on the planet. But I wonder if they're all one thing, how do they support these individual groups of people, like countries, for example, the way you currently understand them, when they might kind of have differing interests? Is there a sense of loyalty to the parts of the world that they're focused on?
00:19:25
Speaker
Yeah, so this is an interesting question and I'm not sure exactly
00:19:31
Speaker
how to go about answering it. Because I think Core Central and the core system as a whole is still constantly looking forwards. It's trying to nudge the world to be in a more unified place with this idea of reducing inequality, breaking down arbitrary barriers that don't really serve any purpose. And so maybe some of these actions that it's taking while initially unpopular, perhaps,
00:19:58
Speaker
might be nudging society into accepting the more long-term view of this.
00:20:07
Speaker
I'm also struck by... so I also love octopuses, and that's one reason that I love this model. But I think, you know, even at the level of the human brain, and I'm not going to take too much of a detour into the neuroscience of consciousness here, but I think we have this very alluring image of an entirely unified conscious experience.
00:20:29
Speaker
But it's very hard to make sense of that when you actually look at your brain processes. So one view in the science of consciousness I'm very attracted to is something called global workspace theory, which basically says there is a kind of central clearing house and executive part of the brain,
00:20:45
Speaker
probably your prefrontal cortex, that basically negotiates differences between different parts of your brain and sets a central policy. And that's sort of what your consciousness is, according to global workspace theory. And so I immediately saw some interesting analogies between that and the kind of organization that we were laying out there. And it seemed a really natural way to organize a complex intelligence.
00:21:11
Speaker
Yeah. And this can go wrong. My background is also in neuroscience slash cognitive science. And like, you know, there's alien hand syndrome, like some people will end up in a situation where a part of their body seems to be behaving in a way that is opposed to their sense of control. So you can kind of have these opposing systems suddenly revealed.
00:21:29
Speaker
famously Dr. Strangelove, right? The classic cinematic illustration of this. Yeah. But yeah, I guess that is one of the ways in which this could, you know, in our world at least, go wrong: that one of these, say, cores goes rogue, right? And I think that's what one of the big organizations in our world, the Preservation and Alignment Organization, is sort of there for, right? Like monitoring some of the subcores, making sure that they are remaining aligned, remaining in sort of communication,
00:21:59
Speaker
you know, to ensure that this sort of thing doesn't happen. To ensure that none of the cores do a Colonel Kurtz: go off, go wild, lose touch with central command. Yeah. Maybe the biggest story in this world, if you zoom out, is the unification of humanity. By 2043, all world governments have incorporated the core system into their workings.
00:22:29
Speaker
The strange paradoxical structure of the core system seems to make this possible. The cores each fray out into bespoke components that can connect with and support every different human culture while still maintaining this centralized sense of control. In this world, that centralization seems to be the key to keeping global systems, both artificial and human, from catastrophic conflicts. I wanted to hear more about the potential benefits and drawbacks of this kind of centralization and how it came to be.
00:22:57
Speaker
So early on in your world, there are a number of runaway AI scares, and they push us to research control, alignment, and explainability, different ways that we can ensure these systems are actually going to do what we want them

Risks and Alignment of AI Systems

00:23:10
Speaker
to do. Could you briefly describe what one of those runaway scares would have looked like?
00:23:15
Speaker
I'm reminded as well of this interesting Tom Scott video from a few years ago, where it's a hypothetical scenario where some copyright company is developing an AI to try and automate the sort of copyright identification that you see on YouTube.
00:23:32
Speaker
And the idea is that it will automatically detect when music is being played that hasn't got the appropriate rights, whatever, and it will go and remove it automatically. But it comes to some realization that there is a lot of unlicensed music going around in people's heads.
00:23:50
Speaker
Right. And so it starts developing and becoming more general, to the point where it starts just harmlessly removing it from people's heads. So there's no giant apocalypse, but a whole century, well, a few decades, of music just winds up disappearing from people's memory. And then it takes the sort of traditional
00:24:12
Speaker
AGI self-protection precautions to stop it from being removed and so on. And life goes on. But there's just this giant amount of time missing and a bunch of these invisible barriers around humanity that they aren't really aware of. Now, I feel like my memory of this conversation is going to disappear.
00:24:37
Speaker
Well, so I think we should clarify one thing as we're talking about AIs versus AGIs. We've been talking about chat bots and things like that, which are in our world right now, kind of based on these multilayer neural networks and similar technologies of just like large data. But in your world, you have this clear distinction where the AGI, like the really fancy smarter than human in most ways stuff, comes about from a different thread of technology, which begins with full brain simulation. So can you say a little bit about that?
00:25:05
Speaker
Yeah, so when writing this, I kind of wanted to make sure that it's not reliant on just being an extension of what we have, but bigger, right? But neither is it also something completely different. So the idea I had initially was not necessarily sort of simulating an entire person's brain, but figuring out the general connective structure of, say, the human mind,
00:25:31
Speaker
and trying to build an architecture based on that. Rather than necessarily uploading someone into a computer, the idea is to use it more as a base architecture that can be somehow trained and aligned and evolved from there.
00:25:49
Speaker
Yeah. Well, one thing that you are able to do in your world because of this simulation origin of AGI is develop intent analysis. So I take that as a way to just guarantee that you know what this system is trying to do. Can you say a little about that? Because for me, intent is squishy in a way in that it seems like a narrative that an agent has about itself and could be misleading or not the whole story. How does this give us real security?
00:26:17
Speaker
Yeah, so this is interesting. I mean, this is going off of, I think it's Paul Christiano's sort of definition of alignment where it's not necessarily that the agent is going to perfectly do what we ask every time, but it's at least going to try. And it's this trying to do the right thing that we wanted to capture with intent analysis and kind of just hope that the capabilities take us the rest of the way. And I think this is something that obviously the
00:26:47
Speaker
The technical specifications for intent alignment aren't here right now, otherwise we might well already have AGI. And so actually talking about it in detail is quite tricky, but the idea is that you can sort of figure out some way of trying to understand what the AI system is intending to do.
00:27:07
Speaker
And obviously, there are many ways this might fail, right, through something like self-deception, where the AGI is intentionally deceiving itself to intend to do something. But it gets very messy. Or like the story you just told. I mean, you could say that that system was just trying to remove copyrighted music from people's thoughts.
00:27:26
Speaker
It really means well. It's trying to protect the creators. They need their money. They should get kickbacks. Yeah, so I think the idea with this intent alignment was more about finding out the intention behind the specific actions that are going to be used to achieve this goal, rather than just what goal it is intending to achieve; sort of a plan for how it is going to achieve this. And that does give you a bit more of a concrete idea about what the risks might be.
00:27:54
Speaker
if this copyright AI said under some intent analysis, oh, I intend to go into everyone's brain and start zapping out the neurons that are responsible for this, you'd go, no, I don't want you to do that. And I think it's that sort of predictability that becomes important for guaranteeing safety with regards to these sort of powerful AI systems.
00:28:17
Speaker
So one of the things this also suggests is the danger of incentivizing what they call deceptive alignment, where there's this idea that the AI pretends to be aligned until it's sure it isn't being tested. And so this idea about constantly keeping it guessing about whether it's in a test relies on you actually being able to convincingly do that to some entity that may well be smarter than you.
00:28:44
Speaker
Well, so this is obviously an incredibly thorny issue. I mean, the whole concept of alignment itself is thorny and complicated. There's a lot of moral questions there about who it's aligned to and stuff. But in your world, one of the main ways that this is all managed is through the Preservation and Alignment Organization, PAO, which is the first social institution that AI created. And it's a big collaborative effort. I think there's 10,000 humans and a bunch of portions of this AI system.
00:29:11
Speaker
all working together to try to ensure that nothing goes sideways, basically. But I was curious, like, what happens if something does go sideways? Like, if we're going back to thinking this is like a big kind of octopus tree sort of alien thing, you know, say that one of the cores is discovered to be misaligned, is it kind of amputated or what do you do about that?
00:29:31
Speaker
I mean, before we go into that, I think one thing that is important to highlight is that the Preservation and Alignment Organization can only keep things running along. It can't be responsible for the initial alignment. So whatever your goal is as an AI system, you have an incentive to keep it the same. This is the idea that you don't let people come along and change your utility function.
00:29:59
Speaker
And the reason for this is that it stops you being able to do whatever your current goal is because it would change what you liked. It's almost kind of like an immune system in a way where the whole system is functioning, but there might be these local areas that get out of control, kind of similar to cancer or something, and they get squashed out. Yeah, exactly. But I guess if something were to go wrong, I guess Core Central would probably treat it like some kind of cancer, as you say. Like this idea that, okay, this one aspect of the AI has
00:30:29
Speaker
started going out of control and it just has to be sort of removed. Yeah.
00:30:36
Speaker
Well, so this is really leaning into this centralization as being vital. And we're talking about these variations of intent as cancers and bad things. But you could also imagine that maybe one of the cores starts to get a new intent, which is to develop the most beautiful music. And this is a relatively harmless thing, but it would be squashed in your world, right? Because it's no longer aligned. Is there a concern here of losing some potential for diversity or dynamism in the system if it's so focused on preservation and alignment?
00:31:05
Speaker
Yeah, I mean, I guess that sort of concern is always there when you have these sort of very large, almost monolithic systems. They become static, slow, and resistant to change. But I feel like in this instance, at least this sort of becomes
00:31:23
Speaker
almost our fault at the outset for encoding alignment from a human perspective. And so one of the things that was most important to us when making AGI in this world was making sure that humans don't die out. And that's obviously still incredibly important in 2045, five years after AGI.
00:31:45
Speaker
But we never included such a value for the AI itself. And so I guess it almost logically extends that Core Central would have no real issue, maybe, with removing part of itself and destroying it.
00:32:01
Speaker
Yeah. I mean, maybe there's a better solution, right? Like maybe there's the option to become a different core. And this is obviously something that Core Central and the cores would have to sort of decide for themselves, right? Like maybe there's a core devoted to education that now wants to be making a massive nice garden. Maybe there's a new role that can open up to sort of handle this. But I guess at the end of the day, it's also a question about
00:32:27
Speaker
why has this misalignment occurred, right? And is this something that could lead to sort of much worse situations other than just doing a different job, right? You know, like if you could drift from education to gardening, how does sort of big AGI still have the assurance that you're not going to drift from gardening to murder, right?
00:32:54
Speaker
I think, I mean, what you've said really gets to this core problem that I think, you know, people are also discussing just abstractly about alignment in general, which is like, who is it aligned with?

Core Central's Global Integration and Its Implications

00:33:03
Speaker
Like you're saying it's aligned with humans being preserved, and that maybe leaves out the AIs. It also leaves out other sentient beings or, you know, animals or maybe other aliens that might exist in the world. So there is this kind of focus that's built in at the beginning. Yeah. And I think that's something that's going to be very tricky for
00:33:24
Speaker
humans or humanity to do in the real world over the next however long until we have AGI. Yeah. So I'm curious to explore the path towards this kind of monolithic centralization that your world ends up going towards. Basically, every country eventually incorporates Core Central into their administrations. I think by 2043, that's already happened. And this is just due to the huge benefits and incentives of doing that. Can you describe some of those benefits and incentives?
00:33:54
Speaker
Yeah, I mean, the most obvious one to me is that the benefits Core Central, or the core system, brings can lead to almost objectively or demonstrably better standards of living for your people, right? And as a leader, it's very hard to try and go against that, particularly in the short term. And this is sort of just how I imagine things naturally going. If you have these giant monolithic AI systems,
00:34:22
Speaker
where if you do include it, then you're going to benefit massively and your neighbors who haven't will not. And I think that there's a bit of a power disparity here, and I'm still not sure how I feel about this part of our work, right?
00:34:38
Speaker
For my part, I would add that when it comes to economics, I think capitalism is a very flawed system, but it's the least flawed one we've come across so far because the invisible hand and price signals are very useful. I think capitalism has brought
00:34:55
Speaker
many, many benefits, and global capitalism in particular, but in so many industries it just creates awful incentives. So one of the things that I would hope is that Core Central could help countries outperform capitalism in terms of economic efficiencies.
00:35:11
Speaker
Yeah, this also gets back to that question of the world improving in ways that are measurable, but not in ways that are unmeasurable. And one of the little kind of side discussions that you sketch out happening in your world is this global conversation about the quantifiability of human goods. Like, you're using these AI systems to analyze legislation, see what kinds of changes you might roll out across your population.
00:35:33
Speaker
But what is actually available to you to measure? Like, can you actually tell if people are fulfilled, whatever that means, and how do you avoid over-indexing on easy to measure factors like GDP or other things like that?
00:35:46
Speaker
Yeah, I mean, at a high level, this all goes back to trying to avoid pitfalls from, say, Goodhart's law, right? Like the idea that once you start measuring anything and you start using it for optimization, then all of a sudden you're almost measuring the wrong thing. Your measure ceases to be useful in a way.
00:36:05
Speaker
And I think that's something that's very difficult to escape. But my hope is that by taking so many things into account, you can sort of almost out-proxy the proxy in a way, right? Like sort of escape that trap. Yeah.
00:36:29
Speaker
Both utopias and dystopias often feature highly centralized societies. 'Core Central' doesn't shy away from some of the more frightening aspects of unification and surveillance, but its portrayal is overall aspirational. I wanted to take some time to explore how this colorful, messy world pushes back against dystopian narratives, and to hear about the other positive stories that may have inspired it.
00:36:51
Speaker
So I'd like to transition to talking about how some of the popular portrayals that people might be familiar with of our future relate to what you've imagined. And one thing in particular that I'm curious about is the centralization issue. Your world goes really hard on this direction of centralization. It's kind of this like mostly positive surveillance state in a way. There's like this widespread intensive data collection in place just to make sure the AIs have all the information they need to make good decisions.
00:37:19
Speaker
And obviously the potential negatives of this are like pretty well explored in media and you mentioned them in your story too that there are some like political leaks of personal data for example, but overall you portray this as a net benefit for people. Can you say a bit more about that? Yeah, I think there are definitely some downsides or potential downsides to
00:37:38
Speaker
to the way that a lot of data is collected in the society, but it's also sort of important to remember that it's mostly going to these cores, which are sort of very alien and unhuman. I find that a little more palatable in the sense that you don't have the judgment associated with it, particularly if it's using the correct type of sort of anonymized data collection practices, such as differential privacy and so on. So it's not really about me, it's about the populace as a whole.
00:38:07
Speaker
And I think as long as it's not misused for political reasons or to sell me more stuff, which Core Central doesn't really have an incentive to do, then I think it's not so much like a surveillance state, but it's, I don't know, it's more just making use of information that's there. And when I was writing this, I was also sort of implicitly assuming that this is all pretty much opt-in, right? So you can probably just opt out of this if you want to.
00:38:37
Speaker
Interesting. I would add that whilst we were having our planning meetings, I had just been teaching a module on privacy in my AI ethics course. And one thing that can seem a bit dispiriting, I think, chatting to some of the students on my course, some of whom are educators themselves, teaching high schoolers, is that
00:38:56
Speaker
There seems to just be a declining sense of the importance of privacy. It's really hard to get high schoolers, at least this is what I was told, to care about their personal privacy when it comes to which apps track them and so on. Do you want to pay the $1 for the premium version of the app that doesn't track you or get the tracking version for free? Nothing's better than free.
00:39:23
Speaker
What does this bring as benefits though? So we might be able to deal with the decreased privacy. Maybe we see it as like a transparency state instead of a surveillance state. But how does that connect to us being empowered in this way and the right to free reach sense?
00:39:36
Speaker
Well, the immediate obvious benefit is that Core Central has information in order to make the best decisions that can take into account what benefits everyone rather than just a few. But also, if you opt in and you get access to more benefits from Core Central or something, then there's obviously going to be quite a lot of incentive to do that.
00:40:01
Speaker
Yeah, it's interesting to think of this extreme centralization and surveillance as a way to bring about a more intense democracy, in a way. I mean, you would truly, in this world, be seen and understood by the agents responsible for making the rules, and they would have your best interests in mind and just know you and know what you needed and hear you.
00:40:22
Speaker
Yeah, but equally, I think there's still the option, you know, to just go into a place that doesn't have... you know, we're not talking about cameras in everyone's room for the express purpose of this, right? Like, there's probably a camera on your phone, the same way there is now. There's probably more things that have
00:40:39
Speaker
cameras on for, say, gesture recognition, but there's no sort of big brother style, you know, two-way TV in every room that's sort of monitoring your every move. I think it's just about using the data that you do get sort of more naturally, more efficiently.
00:40:56
Speaker
So you're describing this world as very unified, but not necessarily overbearingly surveilled. There is in 2045, one of the last beats of your story, this grand unification that basically calls for one planet, one people. I was curious: with Core Central already involved in every country's affairs, what does this actually do for the world? How does this shift things?
00:41:23
Speaker
I mean, in many ways it's not quite symbolic. My intuition here is that it would be similar to if, say, the EU now were to federalize. In many ways, a lot of the legwork is done in terms of just the intertwining of the economies and building up a sort of solid cultural foundation, so that people are more willing to take that final push towards unification.
00:41:50
Speaker
Well, this isn't uncontroversial in your world. You also mention in one of your short stories that there are people who are arguing against integration, and they're kind of fighting to have their own independent local AI systems that are less intelligent but not a part of this whole Core Central thing.

Resistance to Global Integration and Diversity

00:42:05
Speaker
What do you see happening to like this part of the world some years down the road? Like are they still dissatisfied and is there room for them to find purchase or do they kind of die out?
00:42:16
Speaker
Ideally, whatever is causing this desire would be addressed and fixed. But at the same time, you can't make everyone happy all of the time. And even within blocs like the EU, or in individual countries, there are often secessionist movements. And I think that isn't something that we can just magically fix with intelligent AI. And yeah, this is almost a problem for further down the road for Core Central and the government.
00:42:45
Speaker
Yeah, I guess I would add that one of the things that I really appreciated once I started reading really good science fiction, not the more basic stuff, is that a lot of the simpler stuff, like Star Trek, presents a world in which everyone just agrees with each other about most stuff. The human conflict just kind of magically drops off the scene. And then, you know, reading stuff like Hyperion or the Expanse books, you realize that, you know, like, no, people still have different religious beliefs. People still have different political values. People still have different national values.
00:43:14
Speaker
And they're not gonna magically drop away. And so I think, yeah, those conflicts will remain and finding better ways to navigate them is part of the push towards centralization. Yeah. I'm also curious what some of your biggest sources of inspiration were when you were working on this together.
00:43:30
Speaker
And so for me personally, I came relatively late, relatively recently, to Iain M. Banks' Culture novels, which I think are one of the rare examples of pretty optimistic sci-fi. And I was astonished when I came across them that some of them were written in the 80s. And yet he gives really sophisticated portrayals of what AGI could look like and how AI can coexist alongside
00:43:58
Speaker
sentient biological beings in a relatively benign fashion without being afraid of exploring what use do relatively cognitively constrained biological beings have in a world of 4000 IQ minds the size of small moons.
00:44:17
Speaker
I found that such an exciting and inspiring source of literature, so that was one big source for me. Otherwise, I'm a huge fan of The Expanse, the show and the books. Obviously, that's kind of semi-dystopian, but one of the things it does really well is,
00:44:33
Speaker
It's a very politically messy world, something you don't often see in sci-fi. You know, Star Wars has this relatively clean division between the good guys and the bad guys. Star Trek, again, the Federation usually seems like this pretty monolithic entity. But the idea that, you know,
00:44:50
Speaker
there are all these complex factions constantly kind of battling together, arguing or presenting different perspectives, that's something that I think is missing from a lot of good sci-fi and a lot of speculative fiction. So I was really keen to integrate some of that kind of social messiness, social and political messiness, into the world.
00:45:09
Speaker
Yeah, I mean, I also have to echo Iain M. Banks' Culture series as pretty inspirational. I mean, that series has been something that shaped my view of AI as a whole, not just for this worldbuilding competition, but for the way that I would like to see things go. I really appreciate you both talking through all this with me.

The Role of Creative Risk Communication

00:45:29
Speaker
It's really great to get a little background view of all this fascinating material that your group has put together.
00:45:34
Speaker
I'm looking forward to hearing Laura's perspective next as we discuss some of the potential impacts of this work you've done. Yeah, it's been fun chatting. Oh, it's been great.
00:45:47
Speaker
With a team this big, it's impossible to really do justice to all the people who contributed. We weren't able to feature Jessica, Beba, Catherine, or Clarissa, but we did get a chance to catch up with one other member with incredibly relevant expertise, and that's Lara Mani. Lara is a research associate at the Center for the Study of Existential Risk, and she's actually done research into how to most effectively communicate about global catastrophic risks.
00:46:10
Speaker
So I was really curious to hear her thoughts on this world and also what she thought about the kind of creative, aspirational, yet grounded world building that FLI's competition encouraged overall. Hi Lara, great to have you with us. Thanks for having me.
00:46:24
Speaker
So before we dive into your expertise on communication, I'd love to hear your thoughts on one of the major themes in your world that I've been discussing with John and Henry, and that's centralization. So like in 2045, your world has this vote for a single world government. And I'm just curious how you feel about that. Is that something you'd celebrate or something you'd be kind of concerned about if you were living in this world? You know, with the framing of having the most optimistic world, or building hopeful futures, as the competition had,
00:46:54
Speaker
I think it is a really nice kind of goal to aim for, that you have this kind of unified approach. How realistic it is,
00:47:02
Speaker
I don't know, but we definitely strive to keep in there as well elements of equality and inclusion around different kinds of communities and voices, and that's where our world has these kinds of subcores. So it has the Core Central, but then has subcores that are designed to kind of meet more community-level needs.
00:47:25
Speaker
Yeah, so it sounds like that was kind of one way of addressing a risk which would come along with this of, you know, reducing diversity or having some kind of strong armed power structure across all different kinds of people. Is that like the main sort of concern that people were pushing back with as you were thinking about the centralization? Yeah, I think we were really concerned about
00:47:47
Speaker
this kind of homogenized approach to thinking about the way in which it might function, our kind of Core Central might function, and the fact that the needs of one region may be vastly different from another, and trying to ensure that that gets encapsulated in those discussions. Yeah.
00:48:03
Speaker
Well, I'd love to hear about your own work with communication. So you've you've developed role playing and scenario based exercises to help communicate global catastrophic risks. I'd love to hear a little bit about what those look like and how their efficacy compares to more traditional communications techniques.
00:48:19
Speaker
So this whole kind of world of experiential engagement with risk is something I'm just really passionate about. I've kind of filtered that throughout my career, and it's kind of culminated with my work at CSER, looking at this for global catastrophic risk. And so one of the first things I did in this space was to bring together as many people working with scenario-based exercises across all the different risk domains within global catastrophic risk, to basically come together and talk about what we're doing.
00:48:47
Speaker
Why we're choosing to do it that way, what works well and how can we improve? And I think we still have a bit of a long way to go in understanding what is truly effective because I think a scenario based exercise is very much
00:49:00
Speaker
Its impact is dependent on how well it's tailored to its need and its audience. Everybody approaches scenarios differently. So there are very simple kind of matrix type scenarios where you test two variables against each other and you can come up with four very different scenarios.
00:49:20
Speaker
But I prefer this kind of more creative side to scenarios. So we have one that's really well established within the center, which is a role-playing game that explores AI safety and ethics futures, and it's called Intelligence Rising. We've been doing that for about three years now. And this is just a really interesting game and something that's really captured my imagination for the last few years. It's a scenario in that you play either the head of a nation state or the head of a company, an AI tech development company.
00:49:50
Speaker
And you work through this world along with the Game Master, and every single game has a different narrative that emerges, a different future that's told, which is really exciting. But what's really unique about Intelligence Rising as a game is that we ask the players to embody that character. They're not just forecasting or futures thinking
00:50:10
Speaker
in a scenario; it actually asks them to play that character as if they're right there and as if they're the ones making that decision. So you get this additional kind of embodiment of the risk itself. I really love this kind of realm of scenarios and futures building, and thinking about futures and risk in a more creative kind of way. And I think scenario-based exercises can really help us do that.
00:50:34
Speaker
I mean, there's still, and I said that earlier, there's still a little bit of a way to go with this. And we've seen this pre-COVID. There were many kind of serious exercises used for pandemic prevention or pandemic preparedness. And what happens is the people that run them come out with a whole host of recommendations and suggestions on how you can better improve or prepare for such a risk. And that's where they stay. They quite frequently don't get adopted into policy.
00:51:01
Speaker
And there's still this gap between us doing something like this and making sure that it has an impact on the way in which we prepare for those risks. So that's what we're working on quite extensively at the moment: trying to think about how we can bridge this gap and kind of work to improve that, for sure.
00:51:17
Speaker
One of the goals of this overall effort at FLI is to try to help creative thinkers to see how they can have a valuable impact on the future with their work and also how to get technical thinkers to realize that storytelling is valuable. So it's so cool to hear you talking about how this embodiment and this creative engagement really seems to help people get to richer and more interesting narratives. Do you have any advice for how filmmakers or other creators might be able to help bridge this gap?
00:51:44
Speaker
I'm a massive believer in the power of creatives. And I try to embed creatives in as much of my work as possible. So for example, last year in September, we had a creative communications workshop that we hosted at CSER. And we had a whole range of creatives come in. That included people that worked on narratives. That included the board game designer Matteo Menapace, who designed Pandemic. We had a trapeze artist.
00:52:14
Speaker
We had a museum curator from the ArtScience Museum in Singapore, and we had people that deal with very specific kinds of art, like thermochromic paint, which is paint that changes color with temperature. And we got them all to come in, and we invited a whole host of academics, people that worked within the space of existential risk. And we got them to sit together and think about creative ways in which we could communicate risk.
00:52:40
Speaker
And this was just really fantastic. Even though I was one of the people behind designing it, I was really quite stunned thinking about new media as a way to communicate some of these risks. I think there are just so many ways to bring creativity into what we do.
00:52:55
Speaker
But I think it's important not to put creativity in a box and say, oh, those are the creatives, they can come in and help us draw our conferences or things like this.

Innovative Strategies for Risk Communication

00:53:07
Speaker
That's one way to use it. But creatives think about risk in a different way. And they're able to frame it in a different way. I think people from all different backgrounds have different ways of thinking about risk and thinking about these kinds of themes.
00:53:21
Speaker
It's really important that you cover all of those, because if you're going to communicate risk, you have to speak to all sorts of different audiences.
00:53:32
Speaker
So, having more voices in the design and development of communications helps with the dissemination and the impact that it has. Yeah. So, say I'm a person who's fairly technical and not as creative. If there's some existential risk that I'm really worried about, and I want people to be worried about it too, how would you
00:53:54
Speaker
advise that I try to find creative thinkers, or push my own thinking in a more creative way, to kind of go in this direction and reach more people?
00:54:03
Speaker
Yeah, that's a really interesting one. And, you know, a lot of the stuff that we've been able to do is because we have that extensive network of people who have worked with different people, or kind of know them through that base. And we've done it through all sorts of different means. We've approached creatives and just said, I really like your style and I would really love it if you would think about this kind of work with us. Is it something that might interest you? Some people have really taken to it and have really tried to help us through different creative means, developing cartoons or things like this.
00:54:33
Speaker
Other people have been more skeptical, and there's certainly something in there about the skill sets that you have. It's a very fine line to tread when you're talking about risks that have this potential fear factor around them, carefully navigating that and being quite sensitive to that too. But I think you'll find that there are already people who are interested in these topics and who would be keen to team up.
00:54:56
Speaker
So going back to that kind of like fear factor and fear versus hope, how did those kind of play off of each other when you were thinking about designing this world that you helped to create?

Creating Hopeful Futures in Core Central

00:55:08
Speaker
Yeah, I'm laughing because this was just so challenging when we started this worldbuilding contest.
00:55:18
Speaker
Bear in mind that we work at the Centre for the Study of Existential Risk and the Centre for the Future of Intelligence, places for thinking about risk. It was one of the first times that some of us had ever tried to think optimistically about these futures.
00:55:38
Speaker
What did that do? What did that feel like? It was really challenging, because for so long we've thought about what problems might arise or the different pathways to get there, and then suddenly we were trying to think quite optimistically about it. For one, it was quite refreshing. It was like, oh, this is quite fun, look at all the potential that is here. And it kind of opens up these new pathways to new messages that we might have around the way in which we talk about these risks.
00:56:03
Speaker
But I guess it was still hard to be really optimistic, and I think we held back somewhat for quite a large part of the process, until we started to relax a little bit more into it and be a little bit more optimistic and hopeful about it. But it was really exciting for the whole team. I think we really enjoyed for once being challenged to be more optimistic about the futures in this space. It was really good fun.
00:56:34
Speaker
Awesome. Do you think that this kind of more optimistic balancing of hope alongside risk is something that would be good to have more of in kind of pop culture stories about the future? Yeah, I'm a big fan of using hope in messaging. And I think that's something that comes across quite interestingly across some of the kind of academic literature in the space. So
00:56:57
Speaker
Owen Cotton-Barratt and Toby Ord coined the phrase 'existential hope' as a message of: don't always talk about the negatives, but let's also talk about the positives. It's something that I think can counteract things like the fear factor and can get people a little bit more motivated. I think it is really important to have the balance between talking about risk
00:57:20
Speaker
but also talking about the positive benefits. I think it's not about doing one or the other. I think there's a way to package the way in which you talk about these that encompasses both of those. That's why I loved this worldbuilding contest. I was so keen to be involved because, for me, it was one of the first times I'd ever tried to write hopeful messages or write hopeful futures.
00:57:40
Speaker
And that just felt like a really powerful thing that we could do. Yeah. And John and Henry told me that you were kind of instrumental in bringing everyone together to create this. So thanks so much for doing that labor to get everyone in the room and get the conversation started. And I really appreciate the world that resulted. Yeah. I was just, yeah, when I saw the competition, I was like, okay.
00:58:01
Speaker
It's time. It's time for us to do something like this. For one, it feeds into lots of communication work that we've been doing. And we'd been working through the pandemic, sitting in our home offices, so it was a really nice way to bring all sorts of different skill sets across the teams together and apply them to one problem, one project, together.
00:58:25
Speaker
I got to work with people that I don't normally work with. John and Henry are mainly on the AI side and I don't really work as much on the AI side. Getting to work with them was really great, along with other people across the two centers that I wouldn't normally engage with. So fun. Well, thank you so much for all the work that you put into this. Thank you for taking the time to chat with us about this process. No worries. Thanks so much for having me.
00:58:58
Speaker
If this podcast has got you thinking about the future, you can find out more about this world and explore the ideas contained in the other worlds at www.worldbuild.ai. We want to hear your thoughts: are these worlds you'd want to live in?
00:59:12
Speaker
If you've enjoyed this episode and would like to help more people discover and discuss these ideas, you can give us a rating or leave a comment wherever you're listening to this podcast. We read all the comments and appreciate every rating. This podcast is produced and edited by WorldView Studio and the Future of Life Institute. FLI is a non-profit that works to reduce large-scale risks from transformative technologies and promote the development and use of these technologies to benefit all life on Earth.
00:59:34
Speaker
We run educational outreach and grants programs and advocate for better policymaking in the United Nations, US government, and European Union institutions. If you're a storyteller working on films or other creative projects about the future, we can also help you understand the science and storytelling potential of transformative technologies.
00:59:51
Speaker
If you'd like to get in touch with us or any of the teams featured on the podcast to collaborate, you can email worldbuild at futureoflife.org. A reminder, this podcast explores the ideas created as part of FLI's worldbuilding contest, and our hope is that this series sparks discussion about the kinds of futures we all want. The ideas we discuss here are not to be taken as FLI positions. You can find more about our work at www.futureoflife.org, or subscribe to our newsletter to get updates on all our projects.
01:00:20
Speaker
Thanks for listening to Imagine a World. Stay tuned to explore more positive futures.