
How Will We Cooperate with AIs? (with Allison Duettmann)

Future of Life Institute Podcast

On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children. 

You can learn more about Allison's work at: https://foresight.org  

Timestamps:  

00:00:00 Preview 

00:01:07 Centralized AI versus decentralized AI  

00:13:02 Risks from decentralized AI  

00:25:39 International AI governance  

00:39:52 Cooperation with future AIs  

00:53:51 AI for decision-making  

01:05:58 Capital intensity of AI 

01:09:11 Lessons from history  

01:15:50 Future space law and property rights  

01:27:28 Is technology invented or discovered?  

01:32:34 Children in the age of AI

Transcript

Value Pluralism in Society

00:00:00
Speaker
What are we aligning to? We have a society of value pluralism. That would be a real shame to lose. Sentient creatures care about very different things, but they are nevertheless enshrined in a civilizational architecture which both allows them to peacefully coexist
00:00:20
Speaker
when their goals misalign, but also allows them, and actually encourages them, to cooperate on Pareto-preferred possibilities. Rather than having one entity trying to regulate and control AI development, instead what you have is
00:00:36
Speaker
many different actors coming to a type of agreement on which kinds of capabilities are dangerous to develop. This agreement is then basically enforced multilaterally by this cryptographic monitoring fabric. I think if we're only ever focusing on the world that we're really trying to avoid, you can do that all day long, but if you never build the stuff that you do want, then there's just less and less even to strive for.
00:01:01
Speaker
And I think we're at that point now for civilization. We need to level up and lean in. It's possible. Welcome to the Future of Life Institute

Introduction to the Future of Life Institute Podcast

00:01:10
Speaker
podcast. My name is Gus Docker, and I'm here with Allison Duettmann.
00:01:14
Speaker
Allison is the CEO of the Foresight Institute. Allison, welcome to the podcast. Thanks so much for having me. I'm really excited for the conversation. Fantastic. All right.

Centralized vs Decentralized AI Development

00:01:24
Speaker
You have a bunch of writing on two different paths to developing AI.
00:01:32
Speaker
One you could call the centralized path, and one you could call the decentralized path. On this podcast, we've discussed a bunch of the risks of a decentralized approach to developing AI. But I think it would still be worth going into detail on the risks and benefits of centralization versus decentralization in AI development.
00:01:56
Speaker
Yeah, let's do it. Perfect. What do you see as the main dangers of a centralized approach? Well, first, you know, it's never as clean-cut as you make it out to be. I think over time we're just evolving in different patterns, and it's very difficult even to distinguish centralized from decentralized. At what level are you even looking at it? But,
00:02:21
Speaker
in a nutshell, looking at it very crudely, a centralized path could mean centralization in terms of one company, or one actor, or one AI system, or one government gaining lots of control and power through AI development.
00:02:39
Speaker
And the decentralized path could mean anything from multiple different actors, to a variety of different actors, to everyone in civilization having a real stake in AI development.
00:02:53
Speaker
And so it's a spectrum, of course, always. But speaking very crudely, there's just a whole list of trade-offs. I have this post where I really go bullet point by bullet point, and I think sometimes it's really good to actually lay out the trade-offs and compare them for both centralized and decentralized systems. So I guess a big one that
00:03:14
Speaker
one is worried about, perhaps given my background in philosophy, from a meta-ethical perspective, is the possibility of value lock-in, and to some extent the possibility of an end-of-progress lock-in. I think value lock-in is really ignorant of how values have developed over the history of civilization. If you think of how we have evolved as a society and how we've made progress over time, it is by people trying out different things, by different pockets going off and developing new cultural norms, et cetera, and then meshing it all together again. And I think ultimately if you have this one centralized actor
00:03:59
Speaker
that stably controls the world pretty much, in a very concrete and radical example, then I think that's quite ignorant of how we have evolved pluralistically.
00:04:15
Speaker
Yeah, so that's one thing that I'm very worried about, just meta-ethically. And I think that's not something that the AI community has very well grappled with

Value Pluralism and the Paretotopia Concept

00:04:22
Speaker
and grasped. It's just: what are we aligning to, right? We have a society of value pluralism, and we have a society that, rather than trying to make us all agree on a specific value set, has mostly worked by creating architectures in which value pluralism can coexist and in which different entities can cooperate for mutual benefit, even if they don't share values,
00:04:44
Speaker
even if they don't share goals perfectly. And so that would be a real shame to lose. And it's very unlikely that we'll get it right on the first try; historically, we haven't really been good at it. So I think the first one is just the meta-ethical concern. I don't know if you want to dig into it.
00:04:59
Speaker
I can go down the list and just rattle a few more off. No, this one is central, I think, and important. So if it's the case that, say, one company or one government
00:05:15
Speaker
is the entity that arrives at AGI, or perhaps even superintelligence, first, and if there's only one of these systems, then it seems that the values of that company, or that government, might be able to assert power over the world for perhaps a long time.
00:05:32
Speaker
And so I guess the central question then is: is there a way to incorporate this feedback and critique that we have in our current culture, the way we develop values and refine them over time?
00:05:49
Speaker
Can that be incorporated into an AI system, or perhaps into an approach taken by a government or a company? I hope it's possible.
00:06:00
Speaker
And I think that there are these different paths of developing AI systems, and one future I would like to see more of draws on a few notions. One is this notion of a Paretotopia, which, bear with me, is a utopian outline here, but it's basically a utopia of utopias.
00:06:23
Speaker
And so you go from a civilization in which different entities, including humans, and possibly eventually AI systems, and perhaps eventually other sentient creatures, care about very different things, but they are nevertheless
00:06:40
Speaker
enshrined in a civilizational architecture which both allows them to peacefully coexist when their goals misalign, but also allows them, and actually encourages them, to cooperate on Pareto-preferred
00:06:56
Speaker
possibilities. And so that doesn't always mean for mutual benefit, but it at least means that at least one of these entities is better off cooperating, without having left the other one worse off. And so over time, this type of blueprint of a civilization could move along these Pareto-preferred paths into, perhaps not quite a Paretotopia, but definitely Paretotropic directions. This is a concept that was developed by Eric Drexler and Mark Miller, with whom we've co-authored a book. Eric has filled it out and colored it in to some extent, and Mark has too, but I thought it was always really inspiring. And I do think we can get to something like that, hopefully, because to some extent that is how civilization has evolved already.
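To make the "Pareto-preferred" test concrete: a move qualifies if at least one party gains and no party loses relative to the status quo. A minimal sketch in Python, with invented utility numbers:

```python
# Minimal sketch of the "Pareto-preferred" test discussed above.
# A proposed deal is Pareto-preferred over the status quo if at least one
# party gains and no party loses. Utilities here are hypothetical numbers.

def is_pareto_preferred(status_quo, proposal):
    """Return True if proposal makes someone better off and no one worse off."""
    someone_gains = any(p > s for s, p in zip(status_quo, proposal))
    no_one_loses = all(p >= s for s, p in zip(status_quo, proposal))
    return someone_gains and no_one_loses

# Example: two parties' utilities, (status quo) -> (after the deal).
print(is_pareto_preferred((3, 5), (4, 5)))  # True: party 1 gains, party 2 unharmed
print(is_pareto_preferred((3, 5), (6, 4)))  # False: party 2 is left worse off
```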
00:07:48
Speaker
Just by virtue of the fact that over time, rather than perhaps coercing each other into specific arrangements, it has become more valuable for us to figure out how we can get to where we want to go by also helping others along the way, by offering them potential deals that are also good, and seen as good by them on their own terms.
00:08:16
Speaker
And so this kind of notion of cooperation has a lot of precedent in civilization. So I don't think it's impossible that we can continue to enshrine it in civilization. But I do think it takes a lot of architecting and mechanism design, and a lot of trial and error, to figure out how to do it. And we don't have much time.
00:08:39
Speaker
Yeah, I guess that's also an important point, right? Talking about what's Pareto-optimal, it seems quite theoretical still.
00:08:50
Speaker
And so there's the difference between what's theoretically optimal and what can be implemented in the machine learning pipeline as it exists now, or in the corporate structure of a company, or in the governments that are currently in the lead in AI development.
00:09:09
Speaker
Do we have enough time to implement something like what you're thinking of here? Well, I think Drexler makes this point in his post on Paretotopia, which is kind of interesting, which is basically that over time, maybe automation,
00:09:28
Speaker
robotics, possibly even nanotechnology-enabled production, makes the pie of civilization grow a lot, like the GDP pie, I guess.
00:09:39
Speaker
And so the gains that you get by cooperating with each other, even on, let's say, a deal that is Pareto-preferred but perhaps not highly optimal for you, are so big that it actually becomes super attractive for you to cooperate.
00:09:55
Speaker
And so cooperation might become much more attractive over time, if only we find our way into it. And of course, there are many different ways in which it could go wrong in reality. But I do think that might actually be a future that we're moving to, or can move to, by just highlighting, and getting very precise about highlighting, many of these cooperative

AI's Role in Global Cooperation

00:10:19
Speaker
opportunities. And so one way in which civilization is currently suboptimal is that
00:10:27
Speaker
cooperation comes at a lot of cost, right? We have search costs. You need to actually go out and look. So, let's say, I'm probably not even aware of most of the ways in which I could cooperate very usefully with most of the world. I just don't know.
00:10:43
Speaker
I'm sitting here in the Bay Area, and I'm just not aware of it. And that's because I don't have the time to go out there, roam the internet all day, and look for other people, or other companies, that might want to offer me something that I would love to take them up on. But by decreasing search costs, AIs can possibly help a lot with cooperation.
00:11:07
Speaker
It's not only decreasing search costs, but also finding a possible deal and then helping negotiate the terms. Right now it takes a long time to actually negotiate contracts.
00:11:20
Speaker
That is a big, big transaction cost that we're all facing, and one that really prevents us from reaching a much more cooperative world. And then finally, the enforcement side of things is also suboptimal in the way that we do it right now. So you could imagine a world in which you have these fiduciary AI systems that are really
00:11:42
Speaker
tied to your well-being and your welfare, that you trust. Hopefully there are some privacy protections in place if they have access to a lot of your information, but they basically act as your multipliers out there in the world, and they go out there for you and look for these better cooperative deals. And once they've found them, they negotiate terms that are actually acceptable for you and that would
00:12:05
Speaker
leave you in a Pareto-preferred state, and then finally help enforce them. And there are various ways in which you could do that too. I think we could totally get there. Nothing's stopping us from creating these types of entities, but it takes some intention.
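As a toy illustration of the search-cost point: a fiduciary agent that knows its principal's reservation prices could scan a market of asks and surface only the trades that leave both sides better off. Everything below, items, prices, the market itself, is invented for illustration:

```python
# Toy sketch of AI agents cutting search costs, as discussed: the agent
# knows its principal's reservation prices and scans offers the principal
# would never have found manually. All names and numbers are invented.

def find_mutually_beneficial(buyer_values, seller_asks):
    """Pair each want with the cheapest ask that still leaves both sides better off."""
    matches = []
    for item, max_price in buyer_values.items():
        asks = seller_asks.get(item, [])
        if asks and min(asks) <= max_price:
            # (item, agreed price, buyer's surplus)
            matches.append((item, min(asks), max_price - min(asks)))
    return matches

buyer_values = {"compute hours": 120, "translation": 40}
seller_asks = {"compute hours": [150, 95], "translation": [60]}
print(find_mutually_beneficial(buyer_values, seller_asks))
# [('compute hours', 95, 25)] -- no ask for translation at or below 40
```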
00:12:20
Speaker
It takes some intention. It's something like: as the world moves from scarcity towards abundance, the strategy of cooperation becomes more and more preferable to each person, or each company, or each government.
00:12:40
Speaker
And you have, on top of that, AI decreasing search costs and transaction costs and making the world more transparent. So you have more options to cooperate, and you have more knowledge of those options as well.
00:12:55
Speaker
And so that's actually a pretty positive vision.

Risks and Defense in Decentralized AI

00:12:59
Speaker
Let's hope the world goes that way. I think we should also touch upon the risks of decentralized AI development before we get further.
00:13:12
Speaker
Perhaps the biggest one, or the one that I've discussed most on this podcast, is just the risk of proliferating dangerous capabilities. Something that we would think of as decentralized AI development would be, say, open-source development, where everyone can take the system, everyone can fine-tune the model however they want, and they can use it in the ways they want.
00:13:36
Speaker
And so if you end up in a world in which it's much easier to, say, develop bioweapons, or attack critical infrastructure, do cyberattacks and so on, that's what I see as perhaps the main downside of decentralized AI development.
00:13:57
Speaker
Do you agree? And what can we do to mitigate that? Yeah, I mean, I think we would be doing ourselves a disservice, even if we're pushing for a more decentralized path,
00:14:09
Speaker
if we ignore the risks that are coming from it. We would really be shooting ourselves in the foot. And at Foresight, I guess, we generally have somewhat of a bias towards open-source development, with Christine Peterson, our co-founder, having been instrumental in coining the term open-source software.
00:14:29
Speaker
But AI just brings a host of new problems. It's not only that the decentralized development of technologies in general, including bio and nano, just on its own, let alone multiplied by AI, is risky enough. With AI, all of those risks possibly multiply, just because it possibly allows a much, much larger number of people, with much lower financial and other costs, to develop more and more civilization-destroying technologies.
00:15:06
Speaker
And that is a real risk. And I think that we are almost incredibly lucky that so far we've been able to sail through, or let's say muddle through. We haven't planned to sail through.
00:15:19
Speaker
We have muddled through. And so this needs to be taken into account, basically. And I do think that over time, this offense-defense dynamic and the way that it's playing out is just something that has happened over the development of civilization. But now it's really supercharged. And so we just don't know, in the short run, how AI will influence offense and defense capabilities, and whether anything will become offense-dominant. So I think we need to be very, very careful here.
00:15:52
Speaker
Yeah, so it's definitely a very big problem that decentralized approaches are facing. But I do think we can build decentralized approaches with these risks in mind. It's not the kind of dichotomy that people usually think about, of: either you have risks of runaway technology, so you need a centralized actor with perfect surveillance and enforcement capabilities to crack everything down,
00:16:20
Speaker
or, on the other hand, you have totally decentralized technology development, and here you have the possible risks of this open-source development. Instead, we can think about what decentralized approaches for addressing these risks actually look like. How can you actually build in this differential technology development framework?
00:16:41
Speaker
How can you build in this d/acc framework that actually takes care of risks as we develop the technology, and builds for that in decentralized ways?
00:16:52
Speaker
I can go into some details, or into some examples. Yeah, let's talk about some examples of how to do decentralized defense. The one that springs to mind for me is strengthening cybersecurity, or developing a kind of AI-enabled cybersecurity. So using AI to defend against cyberattacks, for example. That seems to be something that can be implemented by companies in a decentralized fashion, but would be quite useful in making the world more secure, and perhaps more stable.
00:17:25
Speaker
Yeah, I mean, I think one big benefit of decentralized technology development is that you just have more eyes on the ball. So that means that as technologies get developed, and they might turn from a white ball into a black ball, in Bostrom's terms, you have more people spotting this.
00:17:40
Speaker
Right, the more people or the more entities you have red-teaming, the easier it is for you to figure out when something goes very, very awry. And I think many AI companies have found that, actually: once they released their products is when they spotted some of the biggest flaws.
00:17:56
Speaker
And so we need to bring the kind of superintelligence of civilization to bear to red-team and actually make safe some of the technologies that we're developing. Over time, I do think it's unclear whether just human red-teaming, or human vulnerability discovery, will be enough. We might have to automate some of that too, because the threats are getting automated, especially in cybersecurity.
00:18:20
Speaker
It's a very big question how much, or how fast, cyber offense will be exacerbated by AI development. So we need to also think about automated tooling that we can develop, and how AI can help us in defense.
00:18:38
Speaker
But in a nutshell, I think the more eyes we can bring on the ball, the better it is. And that's true not only for the kind of swarming around red-teaming and actually checking whether something has been developed correctly, but even just when you think about the architecture and the design of systems,
00:18:58
Speaker
and how we could design them to be more resilient, decentralization could also come in handy. So for example, there is this really interesting prototype, or example, that I often use, which is the seL4 microkernel, which is one hell of a bit of technology, because it's
00:19:20
Speaker
a microkernel that is formally verified, so you can actually verify its security, but it also withstood a DARPA grand challenge, a DARPA
00:19:32
Speaker
red-teaming swarm. And so that means, on the one hand, it's combining two different kinds of security in this approach of defense in depth, which is awesome. But on the other hand, the way that it's set up and the way that it works is actually using this principle of least authority, by really separating the different processes, the virtual machines, and the different components that it's made of into different sub-parts
00:19:56
Speaker
that can then be verified. And so I think we really need to think more about how we can create systems that are built bottom-up, where you can verify possible sub-pieces of critical infrastructure easily, and then build, in this modular way, more secure systems from the bottom up that are easier to defend and harder to attack. And how can you give access only when it's really required, to some parts of a component rather than to a whole system in itself? So I think the decentralization piece just gives you so much more modularity in how you want to deal with access as well. So there are a bunch of architectural advantages, I think, to building things in this way.
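A rough sketch of the principle of least authority being described, not seL4 itself: a component is handed a narrow capability for exactly the data it needs, rather than ambient access to the whole system:

```python
# Sketch of the "principle of least authority" mentioned above: instead of
# handing a component ambient access to a whole system, you pass it only a
# narrow capability for what it needs. Purely illustrative, not seL4 code.

class ReadOnlyCapability:
    """Wraps a data store, exposing read access to a single key only."""
    def __init__(self, store, key):
        self._store, self._key = store, key

    def read(self):
        return self._store[self._key]
    # No write method and no access to other keys: the component holding
    # this capability simply cannot touch anything else.

def untrusted_component(cap):
    # This component can inspect one record, and nothing more.
    return len(cap.read())

store = {"model_config": "layers=24", "api_secret": "do-not-leak"}
print(untrusted_component(ReadOnlyCapability(store, "model_config")))  # 9
```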
00:20:43
Speaker
Is this compatible with how modern machine learning systems are built, though? Can they be separated into modular parts and examined one by one, or apart from each other?
00:20:57
Speaker
Yeah, I'm not technical in machine learning, so I don't actually know. But again, I think it's to some extent a question of how you look at a system, or at what level you look at a system.
00:21:15
Speaker
So I think, for example, if you look at AI development as a whole, I sometimes wonder whether we are running more towards a world in which we have many different specialized AI systems that are performing specialized tasks, or whether we're getting more of, let's say, a Deep Research-style AI system that can perform a lot of different tasks, perhaps even paired with Operator,
00:21:48
Speaker
rather than what we currently have. Well, it seems like the AI companies in front are gunning for general systems, systems that can work as agents, systems that can solve a wide variety of problems.
00:22:03
Speaker
So basically, they're trying to create AGI.
00:22:08
Speaker
And perhaps we would have a safer world, and a more pleasant world, if companies were putting more energy into developing systems like AlphaFold, systems that are superhuman in a narrow domain.
00:22:24
Speaker
But it seems like the current strategy from the AI companies is to go for AGI, I think. I mean, yeah, definitely, I think it's really within their mission statements. But I do wonder, the way that we're currently set up, is it possible to push for more of these superintelligent but very

Specialized AI and Civilizational Architecture

00:22:46
Speaker
specialized entities that are then again enshrined in a larger civilizational architecture that resembles perhaps a more supercharged version of the economy that we currently have?
00:22:57
Speaker
And there it's just unclear to me. You do have systems like Deep Research that I think are trying to create this one-stop shop for answering questions. But then you also have these incredibly specialized and really powerful systems like AlphaFold, where we just saw that AI systems can actually
00:23:17
Speaker
be co-awarded a Nobel Prize, to some extent, through their creators, for the scientific contributions that they've made. And so it's unclear, I think, which future we're racing towards. And we do run some RFPs around research automation, and this is very specialized research automation: we want to see a lot more automation of specialized research problems. For example, there's a really interesting prototype, BrainGPT, that is really trying to create an AI system that can help you a lot with neuroscience literature. It's trained on neuroscience literature, and it can already
00:24:00
Speaker
propose and then predict the outcomes of neuroscience experiments better than human researchers can. And so I think that if we incentivize more of these very specialized entities, if we create these larger architectures in which they can cooperate, we can actively compensate for some of the centralizing dynamics that perhaps many of the AI systems face. And of course, ultimately, it's an empirical question. I think we shouldn't be kidding ourselves: sometimes, for creating more intelligent systems, it really helps to centralize everything back into one.
00:24:36
Speaker
But there again, it's a question about architecture design, and about how this centralization looks. Because at the end of the day, it really depends how you look at a company. Is it centralized? Is it decentralized? Well, it really depends at what level of the company you're looking, and how it's set up.
00:24:53
Speaker
I think the important thing about decentralization, when we name it, is the fact that you have different entities that can keep each other in check. You have different entities that can monitor that development is safe and secure, that it's resilient.
00:25:07
Speaker
You have more and more eyes that are being brought to the table to check whether things are developed in a secure way. So I think there are a few properties of decentralized systems, including their resilience, that we have to try to build in and bake into system development as much as we can. And I do think that by actively supporting specific, more specialized, superintelligent subsystems, rather than the very, very centralized ones, we can sometimes make a difference.
00:25:40
Speaker
Yeah. I wonder if we, and I'm thinking of humanity, could develop a system or civilization where you have decentralization at the lowest level.
00:25:56
Speaker
And as you go up in complexity, say as you go from companies to governments to international collaboration, you have more centralization. But that centralized authority has authority over a very small set of issues.
00:26:13
Speaker
To be more concrete here, I'm imagining something like: you have a number of companies in different countries developing AI in different ways. That's relatively decentralized.
00:26:25
Speaker
Then you have national governments regulating those companies in different ways. So you have some diversity there; you have some plurality in the way you're developing AI.
00:26:37
Speaker
And then you have an international organization trying to prevent the most extreme risks. And that's, of course, a centralized institution.
00:26:49
Speaker
But perhaps we could limit that institution's power to only govern a small set of issues, like, for example, the development of new weapons, or self-improvement in AI systems.
00:27:08
Speaker
Do you buy this vision of decentralization and centralization at different levels of civilization? Well, I guess if we want to build something like that, we should at least have this principle of subsidiarity in there: really only deferring to the upper level when it's actually necessary, and trying to make decisions at the lowest level of governance possible. But in general, I don't know, because what I've seen from most centralized organizations is that they have a few other problems. First of all, whichever one is furthest up the stack is, at the end of the day, the most powerful one, right?
00:27:53
Speaker
And so it does create this competition for power, and possibly for the most power-seeking actors to rise to the top. So that's number one: you possibly have a problem of internal corruption.
00:28:06
Speaker
Then, I think, even if the internal actors in this very, very powerful meta-entity are benign, they are still somewhat prone to extortion by external actors. So you have this kind of single point of failure, where if I wanted to attack that entire system, I would go to the top.
00:28:28
Speaker
You also have a problem of mission creep: the gradual expansion of the domain of interest and responsibility of whichever organization this is. And we've seen that, I think, with so many governmental organizations that have just exploded
00:28:50
Speaker
in their domains. So I'm just not sure if we can actually create this in reality in a way where I would be like, yep, this seems like it has some positive dynamics in place over time.
00:29:08
Speaker
I guess the more the lower parts of the stack would have an ability to compensate for the power dynamics of the higher parts of the stack,
00:29:20
Speaker
and cross-check and monitor and verify, the more optimistic I would be about a scenario like that. But at the end of the day, in your scenario, how would this entity decide what is actually an incredibly dangerous technology, and how would it actually enforce and prevent the development of this technology? Would it require something like ubiquitous surveillance and ubiquitous enforcement capabilities? Or how do you envision that? Because I think the proof's in the pudding.
00:29:57
Speaker
Yeah, yeah. I mean, there are various suggestions for how to do this. And it's true that you would need to have some kind of surveillance. But perhaps you could limit that to surveillance of the largest training facilities, or perhaps the kind of surveillance that governments are already doing to each other.
00:30:21
Speaker
And hopefully you could avoid surveillance of private citizens and so on. But I agree that there are very difficult and thorny problems with implementing something like this.
00:30:36
Speaker
The problem is just that if we don't have something like this, we face the other side of this issue, which is the potential of human extinction, or the potential of incredibly harmful events.
00:30:51
Speaker
Yeah, no, I definitely don't want to sugarcoat one side. It's always easier to poke holes in someone's proposal than to actually come up with something yourself. But one thing I want to mention about this is that, even if you could design something like that in theory,
00:31:06
Speaker
I'm really worried that it's just not the world that we're in right now. We are in a world in which there are many different centers of power. Not a lot of them, but definitely a few strong contenders.
00:31:20
Speaker
China and the US and Russia are three strong contenders, and have been historically over some time. So this is not news, and power balances are dynamic and shifting, and there's just a lot on the line. So I wonder, with this constructed entity, if we're trying to move from this more multipolar world that we have, even though it's not very decentralized, there are at least a few entities holding power, to this more unipolar world with one final checkpoint for dangerous activity:
00:31:53
Speaker
how do we get there? Because any effort to move there that originated in the US might not be super welcome, and it might understandably not be super welcome, to Chinese actors.
00:32:07
Speaker
And they might not have total trust in the fact that this is happening to their advantage, let alone onboarding Russia, or onboarding many of the other countries that are obviously also still very relevant.
00:32:20
Speaker
So it's really difficult to move there. And if you try to just impose it, you're creating this kind of first-strike instability: it's almost like, the more credible such a system, perhaps US-originated or US-run, becomes to China or Russia, the more of an incentive they might have to strike first to prevent such a thing. Because for all they care, that is the kind of singleton takeover scenario that they're worried about.

Regulatory Marketplaces and Decentralization

00:32:51
Speaker
And they just have a very different perception, I think, of how the world will work over time. I think moving into these states, or even just credibly signaling that we want to go there, also comes with some costs, just because we are currently not in this state.
00:33:06
Speaker
And yeah, so I do think we should be careful, or mindful, of that. But if we want to design anything like that, I think about how to design systems that prevent risks: the more open source we can design them, the more input is possible, and the more from the bottom up we can design these systems, the more skin in the game different entities will have to uphold them, to trust them, to check that these systems are true.
00:33:36
Speaker
And I'm not arguing at all for total technological proliferation with no guardrails in check. But I do hope that we can, possibly using technologies including much of the cryptographic technology that has been developed for a long, long time but is still economically somewhat lagging, develop basically new types of technologies that allow us to do some of the monitoring and some of the checking for risks
00:34:11
Speaker
without revealing all the information that we might have. So for example, one notion that I think is quite interesting is from Gillian Hadfield. If we wanted to create anything like these kinds of regulatory entities, she has this notion of a regulatory marketplace, or regulatory marketplaces. I don't know if you've come across it.
00:34:32
Speaker
No, I haven't read about this. Okay, so in a nutshell, let me try to piece it together. I think there was a GovAI paper actually that built on it, so I'm probably meshing the two here, or I might describe the version that GovAI was using.
00:34:47
Speaker
But in a nutshell: rather than having one entity trying to regulate and control AI development, instead what you have is many different actors coming to a type of agreement on which kinds of capabilities are dangerous to develop, which we don't want to prioritize for now, and what's kosher to develop for the time being.
00:35:14
Speaker
And so this will, A, be more flexible than having this one entity deciding; it's this kind of multipolarly designed safety framework.
00:35:25
Speaker
And once they agree, this agreement is then basically enforced multilaterally. And it is enforced by this kind of cryptographic monitoring fabric, where you just have to attest that you hit specific benchmarks that are still seen as safe, or, on the other hand, that you're not developing things that are not safe.
00:35:47
Speaker
And so over time, you can basically have a monitoring fabric in place that does something like what you usually think a government does, but that is nevertheless controlled by the different actors building it, and that perhaps also has input from a societal or civil society committee that can prove that this is in keeping with civil society's preferences.
00:36:12
Speaker
And that is just a little bit more adaptive to the pace that the world is moving at. And these are all, I guess, possible structures that we could be developing through these techniques that I think Ben Garfinkel often summarized under structured transparency techniques, where you're using various different cryptographic tools and privacy-preserving technologies to develop systems that allow you to attest very specific properties about a system without having to reveal everything.
00:36:40
Speaker
And so there is an incentive for companies to sign up to this, because rather than having to reveal all of their model weights to everyone else, they might only have to reveal the things that really matter from a safety perspective. So you could imagine at least things like this. Of course, they're not economically viable or competitive yet, but they're not impossible to build either.
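As a toy sketch of the commit-and-selectively-reveal core behind structured transparency (real proposals use zero-knowledge proofs and secure hardware; the field names here are invented): a lab publishes salted hash commitments to all of its eval results, then reveals only the safety-relevant entries, which anyone can check against the commitments:

```python
# Minimal sketch of commit-and-selectively-reveal, the simplest building
# block of the "structured transparency" idea above. All eval names and
# values are invented; real schemes are far more sophisticated.

import hashlib, os

def commit(results):
    """Publish one salted SHA-256 commitment per result; keep salts private."""
    salts = {k: os.urandom(16).hex() for k in results}
    commitments = {k: hashlib.sha256((salts[k] + str(v)).encode()).hexdigest()
                   for k, v in results.items()}
    return commitments, salts

def verify(commitments, key, value, salt):
    """Anyone can check a revealed (value, salt) pair against the commitment."""
    return commitments[key] == hashlib.sha256((salt + str(value)).encode()).hexdigest()

results = {"bio_uplift_eval": "pass", "capability_benchmark": 0.91}
commitments, salts = commit(results)
# The lab reveals only the safety eval, not the capability score:
print(verify(commitments, "bio_uplift_eval", "pass", salts["bio_uplift_eval"]))  # True
```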
00:37:02
Speaker
Yeah, it sounds very interesting. The thing I would be most interested in there is whether such a system could be used to prove that you are not developing a certain capability.
00:37:15
Speaker
That's the difficult one, I think. That is a hard one. How do you show that you don't have some other training run going in the background, developing a highly capable system that can help users develop bioweapons, say?
00:37:31
Speaker
I think that's an unsolved technical problem. It is an unsolved technical problem, yeah. I mean, that is one of the trade-offs of
00:37:44
Speaker
having privacy-preserving systems: you just don't know what's not being done. That's a much harder thing to test for. But the good thing about this is that it would create a little bit more trust. And we don't need a perfect solution; we need to create enough trust and goodwill, and enough ability to coordinate over time, that the incredible uncertainty we're under right now is just a little bit more bounded. And historically, at least, there have been prototypes and cases where this has been tried on a very, very small scale. For example, I think Bill Binney, when he was still at the NSA, had a prototype within the NSA where he created this
00:38:36
Speaker
system called ThinThread, where basically, rather than NSA agents having access to unencrypted telephone data and information, they would only see encrypted information. But when specific tripwires were triggered that caused the system to think someone was now discussing terror or dangerous capabilities, then that information would get revealed to the analyst.
00:39:08
Speaker
And at that point, you'd have a human looking over it. So what this allowed is that they were able to monitor a lot more of the data coming in, but only when specific tripwires were triggered was a human actually able to look at the information. And so this was an internal way in which the NSA was trying to keep itself in check. But it was really just a prototype, and I think it was discontinued.
00:39:36
Speaker
It never saw the light of day, even though it had really excellent tests. But these are things that we can be doing to create more checks and balances. And even if they're not perfect, they can at least move us gradually into a world from which we can then develop the next set of solutions.
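A toy sketch of the ThinThread-style control flow described here, with stand-in "sealing" rather than real encryption and invented tripwire terms: everything is stored sealed, and only records that trip an automated check are unsealed for human review:

```python
# Toy sketch of the tripwire pattern described above: bulk data is held in
# sealed form, analysts see nothing by default, and a record is only
# unsealed for human review when an automated tripwire fires. The "sealing"
# is a stand-in (a real system would use actual encryption); the control
# flow is the point. Trigger terms are invented.

import base64

def seal(text):      # stand-in for encryption
    return base64.b64encode(text.encode())

def unseal(blob):    # stand-in for decryption, gated below
    return base64.b64decode(blob).decode()

TRIPWIRES = {"enrichment", "detonator"}  # invented trigger terms

def tripwire_fired(text):
    return any(term in text.lower() for term in TRIPWIRES)

def ingest(records):
    sealed_store, review_queue = [], []
    for text in records:
        blob = seal(text)
        sealed_store.append(blob)       # everything is stored sealed
        if tripwire_fired(text):        # automated check at ingest time
            review_queue.append(blob)   # only these get human eyes
    return sealed_store, [unseal(b) for b in review_queue]

_, flagged = ingest(["lunch at noon", "shipping the detonator friday"])
print(flagged)  # ['shipping the detonator friday']
```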
00:39:52
Speaker
Yeah, yeah. So one theme I see running through your writing is that you wish for humanity to continue the way we've been cooperating in the past into the future.
00:40:07
Speaker
And the future will involve an increasing number of AI entities that we have to cooperate with. So you write about scaling cooperation, which is about beginning to cooperate AI-to-human,
00:40:24
Speaker
AI-to-AI, and humans staying in the loop of civilizational cooperation as we integrate more and more of these AI entities.
00:40:35
Speaker
How do you envision that? What are the main challenges? Yeah, well, I mean, it's a very big question. I know. But it's also really exciting to think about, because
00:40:50
Speaker
we're now really in a position where we are building some of these entities. So it's a very timely question. We are currently developing many different types of AI systems that are quite different from
00:41:11
Speaker
the other cooperative partners, which were mostly humans, that we really knew and were used to cooperating with. These AI systems don't share our substrate; they have evolved in different ways that we have not evolved to parse. So for example, even just in a normal social situation, the fact that we're biological creatures, you and I, the fact that we have evolved in similar cultural contexts with a similar
00:41:46
Speaker
evolutionary history, gives me at least some confidence, or some bounds, on how you will react to a specific situation. And it's not perfect; there are many different ways to trick this. But over time we've become pretty good at figuring out culture and norms and institutions,
00:42:06
Speaker
all the way from contracts to fancier ones, to cooperate as human beings. And so I think that if we want to include AIs in this framework, we need to update much of the game theory, and much of the institutional mechanism design, that we've relied on for so long in civilization.
00:42:26
Speaker
And that's a really, really exciting project. Because on the one hand, you want to avoid AIs being able to collude, out-cooperate, or deceive you with new types of strategies that we just haven't evolved to track yet. So, for example, there's work from Christian Schroeder de Witt and a few others on steganography, and how AI systems can already hide messages pretty much in plain sight to humans, but pass them on to each other in ways only readable, or decipherable, by the other AI entity.
00:43:02
Speaker
And that's just by virtue of the fact of how they use and pass language, which is very different from how we do it. So they might be able to collude and deceive in ways that only they can decipher.
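A tiny illustration of the point, far cruder than actual LLM steganography: a message hidden in plain sight as word initials, invisible to a casual human reader but trivial for a machine that knows the channel:

```python
# Crude illustration of "hidden in plain sight": the secret rides on the
# first letter of each word. Real LLM steganography is far subtler (e.g.,
# encoded in word choice during sampling), but the flavor is the same.

def decode_acrostic(text):
    """Read the hidden message off the word initials."""
    return "".join(word[0].lower() for word in text.split())

cover_text = "All the tech awaits careful kindness"  # reads as ordinary prose
print(decode_acrostic(cover_text))  # 'attack'
```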
00:43:14
Speaker
So those are new ways in which they can deceive us that we've not evolved to parse, and we need to get better at detecting this. And then on the other hand, we also really want to draw on the new possibilities, the new opportunities, that they give us to cooperate much better.
00:43:31
Speaker
And so there's this paper from Andrew Critch, and I think Stuart Russell, and a few others, where they discuss open-source game theory and open-source bots.
00:43:44
Speaker
Basically, in their paper, they lay out these basic bots that can read each other's source code and try to see what types of cooperative deals they can come up with.
00:44:00
Speaker
And so one benefit of open-source bots is this: when you and I try to enter into an agreement, I actually can't look into your head. I have somewhat of an understanding of whether or not you'll deceive me, but I actually can't check whether or not you will. Even if you tell me that you'll cooperate, will you follow through? I don't know.
00:44:21
Speaker
But open-source systems can actually check how, and why, the other one will respond. And so they might enable us to create institutions which are much more collaborative and cooperative, and where, conditional on the other one
00:44:38
Speaker
doing a specific action, you can then offer them much more favorable terms. Of course, it's not all rosy what they lay out in the paper, but I think we can use that to our advantage. So we need to start building, we need to start developing the game theory for this, and, importantly, we also need to just start trial-and-error testing a few of these systems.
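One classic toy version of this open-source setup (a sketch, not the construction from the paper): a bot that cooperates exactly when the opponent is running the same source code as itself, so mutual cooperation becomes checkable rather than a matter of trust:

```python
# Toy "program equilibrium" sketch in the spirit of the open-source game
# theory discussed above: each bot receives the other's source code before
# deciding. The simplest strategy cooperates iff the opponent is running
# identical source. Run as a script (inspect needs a source file).

import inspect

def clique_bot(opponent_source):
    """Cooperate iff the opponent is running this very program."""
    my_source = inspect.getsource(clique_bot)
    return "cooperate" if opponent_source == my_source else "defect"

src = inspect.getsource(clique_bot)
print(clique_bot(src))                                  # 'cooperate': recognizes itself
print(clique_bot("def evil_bot(s): return 'defect'"))   # 'defect'
```

The design point: because each side can verify the other's decision procedure, "I will cooperate if you do" becomes a checkable property rather than a promise.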
00:45:03
Speaker
Do you think we will need to limit AI systems, in perhaps their speed or their memory, in order for humans to stay in the loop? I'm guessing that in the future, there will be a temptation on the part of AI systems, perhaps directed by humans, to collaborate at a very high speed, and to perhaps draft a document in 10 minutes that would have taken humans weeks or months to create.
00:45:34
Speaker
And so do we need kind of artificial limits on these systems, in order for us to stay part of the way that the world works and how we cooperate?

Human Adaptation vs AI Evolution

00:45:48
Speaker
Well, there's the question of: do we need them? And then, can we
00:45:55
Speaker
enforce them? Because, especially in that scenario, it sounds like it would be much more competitive to actually create these contracts faster.
00:46:08
Speaker
And perhaps those that evolve and experiment with doing things faster are the ones that are better at creating economic value very fast. So it's going to be very difficult
00:46:23
Speaker
to throttle that, I think, artificially. Throttling, I think, is difficult. And if you think about it, the way that we've evolved is often by having a technology come in, us being kind of overwhelmed, and then getting better at using it, and getting better fast at using it.
00:46:42
Speaker
And so one interesting example, which I think Christine Peterson, our co-founder, actually told me: when the ability to create newsletters with your own fonts first became possible, we basically created the most outrageous newsletters at Foresight. Foresight was founded in 1986. No one was able to use fonts correctly, and it was just all over the place. It looked absolutely atrocious.
00:47:18
Speaker
Now we're really good at creating good template libraries and good style guides, et cetera. And of course that is a very low-stakes example, but it's even true in the way that we create contracts, right? We basically, I think, first get overwhelmed by the possibilities that a technology creates, and then over time we get better at adapting to it, and at creating templates that other people can then reuse, and we just think better about it.
00:47:47
Speaker
So I'm not sure what, normatively, I think we should be doing, whether we should be throttling them or not. But perhaps, realistically, what we should also be thinking about is: how can we make humans better at keeping up with their speed, and actually using them and leveraging them more? And we have a very big neuroscience and neurotech program in that area that we're really trying to use to help us get up to speed,
00:48:19
Speaker
to be able to compete with, merge with, and collaborate with AI systems. It is a little bit more speculative and out there, and I'm not sure if you want to go there, but I think that one way in which we can stay relevant in this world is by also
00:48:36
Speaker
improving our ability to make use of these systems. Whether that is through neuroscience and neurotechnology, or through having AI systems actually help us parse this information better, or through AI-enabled forecasting of the different scenarios that would play out if a given contract were enforced, simulated out,
00:49:00
Speaker
we can also just use these tools to get ourselves up to speed. Yeah, I guess it's a question of time, whether we have enough time for us as humanity to adapt as we've done to previous technologies.
00:49:18
Speaker
We've adapted to the car, to factories, to industry in general. The world has changed tremendously over the last, say, 300 years, and we've adapted to it.
00:49:31
Speaker
In some sense, I think we're still adapting to the internet and social media and so on. We haven't perfectly integrated and understood those technologies. And if that's any indication, it takes us perhaps decades to have this back and forth with a technology: adapting it wisely, seeing what we don't like, changing that, and so on.
00:49:56
Speaker
If it takes decades...
00:50:00
Speaker
We might not have enough time with AI. Things might move so fast that we can't rely on our usual procedure of trial and error, updating the technologies along the way.
00:50:17
Speaker
What then? If we can't rely on that, how do we quickly get up to speed? Yeah, I mean, we're going to find out, because when you said decades, that's long.
00:50:31
Speaker
Decades is long. I'm not sure if we have decades. I don't think so. I think we need to be faster than that.
00:50:42
Speaker
I really do think, at the risk of repeating myself, that we need to enlist AI technologies to help us with this stuff. And we need to just become better at integrating this into our civilizational defense strategies, rather than relying on just the ancient tools of cooperation and sense-making that we've used until now.
00:51:09
Speaker
I know that this is going to come with risks as well. We need to somehow just try things out. But it would be very, very good if we could even just predict the pace of progress, and not just the pace of progress, but how different technological developments will influence each other.
00:51:32
Speaker
And even here we can, I think, enlist AI to help us. So, you know, Anthony Aguirre from FLI is also one of the co-founders of Metaculus. Metaculus has been one of my favorite projects for so long in the Bay. It's a really fantastic forecasting platform.
00:51:49
Speaker
And they just launched a bot forecasting challenge. That basically means that, rather than the forecasting that has been going on on Metaculus for a really, really long time, where human forecasters predict and gradually get better at making sense of the world, we can enlist a lot of AI systems that never get tired, that can predict 24/7 on a variety of different scenarios, and give us really good simulations of the outcomes of specific actions.
00:52:17
Speaker
We need to lean into these technologies, and we need to enlist them to help us make sense of the world. And we are receiving so many grant applications right now which are building on new ways of prediction, even having different AI agents debate each other before they predict on a specific scenario.
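As a small sketch of how many bot forecasts might be pooled (the probabilities are invented, and pooling in log-odds space is one common choice, not necessarily what Metaculus does):

```python
# Sketch of pooling many AI forecasters' probabilities on one question.
# Averaging in log-odds space respects extreme but well-calibrated
# forecasts better than a plain mean. Forecast values are invented.

import math

def pool_log_odds(probs):
    """Average forecasts in log-odds space, then map back to a probability."""
    log_odds = [math.log(p / (1 - p)) for p in probs]  # assumes 0 < p < 1
    mean = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean))

bot_forecasts = [0.62, 0.70, 0.55, 0.80]  # four bots, one question
print(round(pool_log_odds(bot_forecasts), 3))  # ~0.675
```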
00:52:40
Speaker
And I know this is very in the weeds, but when you look under the hood, there's just so much technology that we can develop now and that we can learn from, all the way from sense-making, to cooperating better, to really leveling up ourselves and our knowledge and our ability to come together and form agreements.
00:52:59
Speaker
There is a lot we can do. And I think that we just need to upskill. I remember when I went from high school to university, and I was like, wow, this is different.
00:53:10
Speaker
I'm not prepared. I need to level up. And it was one year of learning 24/7.
00:53:21
Speaker
And it was a pace of learning, and topics, that I was just not prepared for, coming from a German school system into the UK university system.
00:53:33
Speaker
These were just not things that we were taught in the high school that I was in. And I was like, I'm not going to make it. And then I leveled up and I really leaned in. And I think we're at that point now for civilization.
00:53:46
Speaker
We need to level up and lean in. It's possible. So what do we need? Do we need world leaders, and company leaders, to have AI advisors to help them look at forecasts?
00:54:02
Speaker
What are the best options for leveling up and for using AI to make better decisions? Well, I mean, it's kind of chaining together a lot of the individual examples that I mentioned, all the way from using better AI simulations to help us get at the possible trade-offs of specific decisions. Because we will be facing so many high-stakes situations in the few years to come.
00:54:33
Speaker
And so just having really good forecasts of how different outcomes might affect other outcomes, and conditional forecasts, et cetera, and being able to simulate out these scenarios really well, would be awesome.
00:54:45
Speaker
That would be one thing. And the second thing is the ability that we started the podcast with, which is this ability, individually, to just cooperate much better and to create this really cooperative,
00:54:55
Speaker
almost superstructure of civilization, where you're just out there, really enmeshed and ingrained with other people. That's great, but we can also bring that to bear on larger challenges, right? It's not just us individually getting more of the things we want; ideally we also want to use these AI tools to find ways in which we can come together on the large challenges, like the Moloch traps, the multipolar traps,
00:55:25
Speaker
the kind of big challenges that we're facing. And so coming, over time, to perhaps a few really high-stakes agreements that AIs can help us reach, enforce, and monitor, that would be fantastic. And there's plenty to choose from. But just trying out a few of these larger, kind of multi-way commitments would be great.
00:55:50
Speaker
Do you think decision-making at the highest level, so at the level of world leaders, do you think what's lacking there, in order for us to get higher-quality decisions, is information?
00:56:02
Speaker
Do you think decision-making at the highest levels in the world is constrained by lack of information and forecasts and so on? Well, it's not just at the highest level. I'm trying to also get away a bit from the highest level, in the sense that Eric Drexler, in his 1986 book Engines of Creation, laid out a framework of design-ahead for specific technologies. And so this idea of modeling out the risks and benefits of a specific technology is also very beneficial on a local level.
00:56:40
Speaker
So I think for everyone developing tech, everyone developing, let's say, AI for bio: it helps to have a window where you can design ahead.
00:56:51
Speaker
And that's not only predictions, right? That's also simulation software, which is getting much better. Having a window in which you can first design ahead the systems that you're then building, and model out what these systems will look like, gives you this delta between the level at which you can already create things in the physical world and the level of technological development that you'll be at soon, with its risks and benefits.
00:57:19
Speaker
And even just creating this design-ahead window allows you to then create these more differential technology development strategies to build out for the risks. So I think this is not just something that should be done at the higher level through prediction markets, but through modeling and simulating out possible positive scenarios, but also possible negative scenarios, for specific tech, and for the interrelation of that tech with other technologies, right?
00:57:46
Speaker
And I guess it requires all of us to take a little bit more responsibility, because the world is going to get much more complex. And so the better we can get at building things safely and securely and red-teaming them from a local level upwards, the fewer problems we have to deal with on the upper levels of the echelon.
00:58:10
Speaker
We've discussed a bunch of options we have for developing technology for cooperation, we could call it: technology for humans to cooperate better and for AIs and humans to cooperate better.
00:58:23
Speaker
I guess one worry there is that we don't have that much time, perhaps, until we have very powerful AI. And many of these technologies are still in a phase where they're not ready to be implemented.
00:58:38
Speaker
This is something that you've written about yourself: the reality check of whether we have enough time to use these technologies if we have very short timelines, say if we have powerful AI before 2030.
00:58:53
Speaker
How do we

Centralized Approaches in Rapid AI Development

00:58:54
Speaker
grapple with that? Do we need a plan B? What do we need? A few months ago, as you've probably seen too, there were just a lot more very short timeline scenarios on the internet.
00:59:08
Speaker
These were either specific forecasts or fictional scenarios where people were grappling with two- to five-year timelines.
00:59:20
Speaker
And in most of these scenarios, the outcome was pretty centralized: either a negative case of an AI singleton taking over by creating mirror life or something, with only a small fraction of humans surviving, or...
00:59:40
Speaker
a positive case of having one world government or something come in, swoop up, and take on technological development. And I think in many of the war games and tabletop exercises that I've seen people do, these are the main solutions that people reach for, right?
01:00:03
Speaker
Yeah, it seems that if we have short timelines, AI development is probably going to be more centralized, right? It's as if the game board is now set, and we know which companies and which governments are relevant.
01:00:17
Speaker
We know, therefore, which companies and governments are not relevant. And now we're moving quite quickly towards powerful systems. So there's a sense in which short timelines are a bad sign for the technologies and the ways of cooperating that we've been discussing.
01:00:42
Speaker
Yeah, I mean, it certainly feels that way now, just because of the lack of imagination, I think, in coming up with decentralized, multipolar, multi-stakeholder solutions that could work on short timelines.
01:00:54
Speaker
But on the other hand, it's also not totally set. We don't just have OpenAI; we have more than OpenAI. We have a few main AI developers right now.
01:01:05
Speaker
And the more time passes, the more other actors are coming online. DeepSeek might be just one case in point, but nevertheless, that happened this year, and it kind of came out of the blue.
01:01:25
Speaker
And I do think that the value of open-source system design, or of more open designs, takes time to show, because it often takes time for
01:01:37
Speaker
different systems to build on the technologies developed by other open-source designers, creating this very rich infrastructure of open-source design. So yes, in general, I would say longer timelines favor these more open, decentralized approaches, but we are already seeing steps in those directions.
01:01:56
Speaker
And to the extent that we have a lever on how systems are built, we can actively influence that by building in an open, collaborative way, and by developing more specialized sub-agents that can then be used in these more open-source frameworks and designs.
01:02:17
Speaker
Mind you, we also need to create frameworks for how to deal with the risks that can be exacerbated by open-source design. I'm not saying let's do it all in the open and just not worry about the risks. But we can actively, I think, push the world more in these directions. And many people are doing that right now. Many companies are doing that. I think even Sam Altman said that he sometimes wonders if he finds himself on the wrong side of history by having pushed for a perhaps less open
01:02:43
Speaker
path of technology development within OpenAI, which had originally, I guess, set out to be a bit more open than it currently is. So I think we have definitely seen somewhat of a shift this year, at least mentally and ideologically, toward these systems becoming a little bit more in vogue, for lack of a better term.
01:03:01
Speaker
But yeah, the proof will be in the pudding to some extent. But I think we are making the pudding. By the work that we're funding, by the systems that we're building, by the way that we're cooperating, by the systems that we're praising, we are actively creating milestones and north stars that become more attractive. So I do think even though the chances
01:03:37
Speaker
don't look amazing right now, because we have these very, very large entities, they're also not impossible. And I think the important thing to think about is: where do decentralization and open source actually deliver a few unique advantages that centralized systems don't have?
01:03:56
Speaker
A few examples, and this is now more speculative, come from the crypto world. AI systems are pretty good at solving general problems, but there are some long-tail problems that require you to solve edge cases, or that require bringing local knowledge to the table.
01:04:17
Speaker
They're not really good at gobbling up that data; it's just not within big data usually. Here, systems within crypto and Web3 are pretty good, because they can incentivize individual actors to bring local knowledge to the table, right?
01:04:33
Speaker
Through DAOs or decentralized marketplaces or smart contract design, what have you. And so you can draw on the areas in which centralized systems are perhaps not as competitive with decentralized systems, and try to build more there.
01:04:49
Speaker
Another one, for privacy-preserving tech, is applications in, let's say, healthcare or finance, because that is just not something centralized AI can really touch. It requires the handling of very sensitive data that, often by law, or just because people are not willing to share it, centralized systems can't handle. And so if we rely on various privacy-preserving technologies to create these more federated, privacy-preserving approaches to making sense of data
01:05:25
Speaker
in decentralized, privacy-preserving ways, without having to go through centralized systems, that's another really big step by which we can create specific niches, at least, in which decentralized systems can possibly out-compete centralized ones. But ultimately, yes, it will be an empirical question.
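As a toy illustration of what a federated, privacy-preserving approach can mean, here is a minimal sketch in Python. The linear model, the noise scale, and the five "sites" are invented for illustration; real deployments use careful differential-privacy accounting and often secure aggregation on top:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on data that never leaves the site."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, local_datasets, noise_scale=0.01):
    """Each site trains locally; only noised model weights are shared and averaged."""
    updates = []
    for X, y in local_datasets:
        w = local_update(global_weights.copy(), X, y)
        w += np.random.normal(0, noise_scale, size=w.shape)  # crude privacy noise
        updates.append(w)
    return np.mean(updates, axis=0)  # federated averaging

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(5):  # five hospitals/banks, each holding its own sensitive records
    X = rng.normal(size=(50, 2))
    sites.append((X, X @ true_w + rng.normal(0, 0.1, size=50)))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, sites)
print(w)  # approaches [2.0, -1.0] without ever pooling the raw records
```

The design choice to share only (noised) model updates, never raw records, is what lets sensitive-data domains participate at all.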
01:05:46
Speaker
But I think the more we put compensatory dynamics in place against those, and the more we actively build with that future in mind, the more likely we are to arrive there.
01:05:58
Speaker
Is there perhaps another thing working against decentralized AI development, which is that the way these modern machine learning systems are trained is very capital-intensive? It requires building large training clusters and drawing lots of power and so on.
01:06:20
Speaker
As we scale up these systems, as we build more and more expensive clusters, I don't think we've solved the technical problem of, say, decentralized training: training one system in, say, 20 different locations and then combining it in a certain way.
01:06:41
Speaker
Is that working against the decentralized approach also? The capital intensity, and just the need to build out a lot of hardware in these training clusters?
01:06:58
Speaker
That is definitely still, I think, the default path that we're on. You're right about that. But there are at least prototypes that are trying to do it differently, right? There's Prime Intellect, which is trying to enable more decentralized training runs.
01:07:13
Speaker
And there are various other projects like that out there. Whether or not they will ultimately be competitive is a different question, but people are using them.

Decentralized AI Training and Competitiveness

01:07:24
Speaker
They are actively being developed further. And I think the more people wake up to the power that AI development holds, the more incentivized individuals are also to take part in these more decentralized compute clusters.
01:07:41
Speaker
And so I certainly know of a few efforts that are trying to build up more decentralized, either community-held or individually held, compute clusters and see how they can collaborate with each other. And I think that's kind of inspiring. There are probably specific problems that the centralized ones will always be better at solving.
01:08:04
Speaker
Absolutely. But the more we can incentivize folks to come together to build up these alternatives, the more likely we make it that they'll at least be viable in specific scenarios. And they have already proven to produce interesting solutions for sub-problems.
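For readers wondering what "training one system in twenty locations and combining it" can look like at its simplest, here is a deliberately toy sketch in the spirit of local-SGD-style approaches, where clusters train independently and sync by averaging parameters only occasionally. It is not Prime Intellect's actual method, and the model is a small linear regression rather than a neural network:

```python
import numpy as np

def local_sgd(w, shard, steps=20, lr=0.05, batch=8):
    """Run many cheap local steps on one cluster's data shard before any communication."""
    X, y = shard
    for _ in range(steps):
        i = np.random.randint(len(y), size=batch)
        grad = 2 * X[i].T @ (X[i] @ w - y[i]) / batch
        w = w - lr * grad
    return w

def decentralized_training(shards, rounds=50):
    """Average parameters only once per round, trading a little convergence
    speed for drastically less inter-site bandwidth."""
    w = np.zeros(shards[0][0].shape[1])
    for _ in range(rounds):
        local = [local_sgd(w.copy(), shard) for shard in shards]  # parallel in reality
        w = np.mean(local, axis=0)  # the single, infrequent communication step
    return w

rng = np.random.default_rng(1)
true_w = np.array([1.5, -0.5, 3.0])
shards = []
for _ in range(4):  # four independently operated compute clusters
    X = rng.normal(size=(200, 3))
    shards.append((X, X @ true_w + rng.normal(0, 0.1, size=200)))
print(decentralized_training(shards))  # approaches [1.5, -0.5, 3.0] with only 50 syncs
```

The open question the speakers raise is precisely whether this kind of infrequent synchronization can stay competitive with tightly coupled clusters at frontier scale.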
01:08:28
Speaker
So I can't say whether or not I'm hyper-optimistic here. To some extent, it's an empirical question, a question about what is actually required to build very powerful AI systems. But I think my answers always mix, to some extent, the normative with the descriptive,
01:08:52
Speaker
because we ultimately have to choose the path that we think might normatively be better, while it is still realistic enough that it's even worth striving for.
01:09:03
Speaker
Yeah, so it's going to be difficult, but we have it in our own hands. Yeah. When you think about the future of AI, how much do you draw upon history?
01:09:16
Speaker
Which lessons can we take from history? Because it seems like AI could be so different from many other technologies we've seen in the past. On the other hand, humanity has also undergone technological revolutions before, and it seems that we have an ability to adapt. So which lessons should we take from history?
01:09:44
Speaker
There are plenty. I'm not a historian, but two things perhaps often pop into my head. On the one hand, when you think about this long-term game of centralization versus decentralization, offense versus defense, which is going to work out, we've played that game for a long time. The answer is maybe both, or it will keep continually evolving for us. We often have patterns such that systems start out decentralized, then over time it gets more economical to centralize some of these tasks together.
01:10:25
Speaker
And then over time, the trade-offs of centralized systems show up too, such as that they're not as innovative. So people start innovating in specific sub-pockets and either break out or fork out of the larger system, or you just have competitors coming in that are able to do whatever is relevant in the new technological
01:10:45
Speaker
reality much more concisely and in a more specialized way, and then they grow over time to become the dominant player in a specific arena, until again different technological realities become available and different subsystems are built. And I think we've just seen this in the history of the development of everything from the mainframe computer to possibly the internet to social media, even within crypto. We've seen this play out over and over again. And to some extent, the question is just: will it eventually crystallize into one substrate? Will AI make this different, or will it just be the same? I find it hard to imagine that we won't be coming up with better AI systems
01:11:35
Speaker
in a more specialized way, systems that can do things with a very different focus than what had previously been developed and then out-compete the old generation of AI systems again. So to some extent, I find it very hard to think that we are currently putting a stop to this entire development and will just lock in this one AI system, or world government, or private actor, that will lock down progress forever.
01:12:01
Speaker
I find it just very, very hard to imagine that that's the way the world will turn out, that it will crystallize in this way. And so basically that might give us some hope: there's always something that will just become more competitive over time.
01:12:19
Speaker
And then the other one is that, for everything that I've said about trying to leave the future open, trying to allow us and our descendants and other entities to reinvent the rules from within the game, and having perhaps a little bit more of an open approach to the evolution of AI systems...
01:12:45
Speaker
Nevertheless, we can also point to some historic examples where we have been at least relatively successful at putting systems in place that led to the baseline conditions being set up relatively well. And that was, obviously, the U.S. Constitution, and many people have many problems with the U.S.
01:13:07
Speaker
Constitution. But nevertheless, it was good enough that it got copied a lot. And it is also good enough that, to some extent, we're still living more or less in that world.
01:13:21
Speaker
And that is something that they got extremely right. If you talked to the founding fathers now, I don't think they would be incredibly thrilled about the current expression of the Constitution.
01:13:35
Speaker
But I also think that they would be flabbergasted that, in a world of such technological maturity, it still more or less works at providing a framework within which we can then invent the next, better systems.
01:13:52
Speaker
And so now we are again in that same scenario, where we have to set something like that up for a world which will mostly be run by intelligences that are vastly smarter than us.
01:14:05
Speaker
Isn't that an argument in favor of value lock-in, then? If the U.S. Constitution was able to last hundreds of years and is still influential to this day, isn't that an argument that the values we put into our AI systems now are going to persist far into the future?
01:14:30
Speaker
Well, I think what's interesting about the Constitution is that it didn't really do that. What the Constitution did is put in place a system of checks and balances, procedures, and rule of law, so that different value systems could flourish and thrive and counteract and compensate and keep each other in check.
01:14:54
Speaker
And I think that's what we need to do, right? We can't just totally take our hands off the steering wheel and say, okay, we're just going to leave it up to the evolutionary dynamics at play. We need to create systems that make opportunities for cooperation on shared goals easier and more beneficial, that restrain power centralization, and that restrain actions that would basically wipe out the entire playing field.
01:15:27
Speaker
And so we can put these side constraints in place without necessarily having to say all that much about value alignment or value lock-in or exactly what type of future we would want from an ethical perspective.
01:15:39
Speaker
But we need to create the playing field on which the next civilization has a chance to iterate on the game, make up new rules, and play it forward. And one crazy scenario that we might have to think about soon, and this sounds sci-fi, but I don't think it's actually that far off, is space property rights, right?
01:16:02
Speaker
So if you look at the evolution of property rights, or I guess the innovations within property rights, we had a lot of time for this. And there are many different theories of property rights.
01:16:16
Speaker
But we have come up with and evolved systems that allow us to create rights to do things with objects. I think that's what property rights really are: the right to a thing is a right to do a thing with it.
01:16:35
Speaker
And we see that a lot, for example, in Hernando de Soto's book about people coming in to some Latin American or sub-Saharan African communities and trying to draw up new land titles for specific 3D plots of land.
01:16:56
Speaker
But these didn't match at all how the communities actually wanted to use that land. For example, oftentimes it wasn't just a matter of separate plots of land: different people had to pass through to get to a waterway, so that land was used communally for parts of the day and privately for other parts of the day.
01:17:12
Speaker
And so the way that we had grown property in the West just didn't quite work for these specific use cases. And there you really see that the right to a thing is a right to do a thing with it.
01:17:26
Speaker
And the same, I guess, for how we carved out property rights before we knew that radio spectrum was a thing and then needed to update them. Anyway, we just had a lot of time for this, a lot of time to adapt, and we're still adapting. There are pollution credits now, and all kinds of noise pollution problems that we're still grappling with.
01:17:44
Speaker
So it's not like we're perfect at the property rights problem, but we've had a lot of time to adapt to it. Now, we might soon have to rethink it drastically, because if we create incredibly powerful technologies, including AI, that might want to make use of space resources, that is, possible property, the kind of 3D slices of the world that we are currently drawing just don't apply there, right? It's a lot about...
01:18:15
Speaker
What actually has exposure to the sun? How does energy work in space? Can you build a Dyson sphere between me and the sun?
01:18:28
Speaker
Yes or no? These things will become, not super relevant soon, but relevant soon enough that we have to drastically rethink the way that we've done things.
01:18:39
Speaker
And we don't have much time for that either. And the fear is that if we don't think things through, or simulate or model things out, then it's just going to be first come, first settled.
01:18:56
Speaker
And then whoever settles first owns it. My impression is that we have actually attempted to create some space law, or property rights for space. But much of this legislation is from the 1960s or 70s, something like that, when they were probably not thinking of an AI-driven race to settle space. And so...
01:19:29
Speaker
It seems right to me that some of this legislation should be updated in light of what we now know about AI and what's possible. But it's a difficult problem. It's difficult to even know where to start there.

Property Rights in AI-driven Space Exploration

01:19:46
Speaker
I think property rights have been a good technology for cooperation on Earth, to allocate resources efficiently and so on. It's unclear how that maps onto space, whether property rights will be useful there also. The example you mentioned is kind of interesting: can you harvest the energy from the sun such that no one else is getting sunlight, and therefore they die, basically?
01:20:14
Speaker
Yeah, things might look different in space is what I'm saying. Yeah, they surely will, I think. And some of the current space treaties that exist are either outdated, or some countries just haven't signed up.
01:20:32
Speaker
Then, I think when you bought Starlink, you were also signing that you weren't accepting specific space treaties.
01:20:44
Speaker
I think those were about Mars, though. So it's not necessarily clear that all entities will entirely uphold those treaties, or that we have a claim to space.
01:20:56
Speaker
And the interesting thing here, again, is the same thing that we talked about earlier. Even if you could come up with a perfect system, theoretically and abstractly,
01:21:09
Speaker
having enough skin in the game for everyone who will become a relevant actor in that sphere to uphold it, that will be a really difficult one. Because you need to create enough skin in the game for all parties entering that arrangement that it continues to be more beneficial for them to uphold the system you're developing rather than try to overthrow it. And because it doesn't have any legitimacy from the get-go yet, because it hasn't been around for millennia, and we can't just point to the fact that we've always done it that way, it's going to be very difficult to come up with something in this emergent way that has enough legitimacy that we will all continue using it.
01:21:54
Speaker
Yeah, there's a temptation, if you're in front, if you're the most likely to settle space, if you're kind of leading the race.
01:22:07
Speaker
There's a temptation to not care about any restrictions, to not allow any of these potential treaties to restrict your settlement of space. Now that I'm thinking about it, maybe we have some ideas for solving this, because we've solved this problem on Earth before, perhaps in settling unclaimed land or settling the seas or something like that.
01:22:32
Speaker
Yeah, actually, there was a chapter in this book that we wrote, Gaming the Future, which some of the ideas we discussed are coming from, that we then cut because it was just too wild and speculative. But
01:22:45
Speaker
one idea would be this: on the one hand, we're facing a problem with AI automation, in that many people will lose their jobs.
01:22:56
Speaker
And the more capital you have when you enter any strong AI or transformative AI world, the better you are set up in that world. As your labor becomes less valuable, capital will always be something that AI systems will want from you: capital, land, resources, et cetera.
01:23:22
Speaker
So we were thinking: how can you actually equip people with a way to remain relevant in a strong AI world, where they have something to bring to the negotiating table with AI systems?
01:23:37
Speaker
And one way to do that is by giving them capital or property. And where could you take that property from? Well, there's a lot of unclaimed property in space that will soon be claimed by whoever gets there first.
01:23:51
Speaker
And so if you want to avoid it being claimed arbitrarily by whoever gets there first, we could create something called an inheritance day, which, again, Eric Drexler brought up in Engines of Creation in 1986:
01:24:03
Speaker
a day by which the remainder of the at least accessible universe, the light cone or however you want to slice it up, will get divided across everyone who lives.
01:24:16
Speaker
And you can then trade on the expected value of that property with whoever wants to go out there and make use of it. And so this would pair the problem of the capital you bring into a strong AI world with the other problem, the space property
01:24:38
Speaker
problem, and would give everyone a very cozy and cushy financial start in this AI world. There's a lot of space in space, a lot of useful resources. We would be a very privileged group, us currently existing people, if we all got a large share of space.
01:25:02
Speaker
But I guess the problem is then: why would an AI that's settling space respect any of these property rights, instead of just doing whatever it can to acquire as many resources as possible?
01:25:16
Speaker
Well, we don't know that. Obviously, we haven't built these systems yet. But the earlier we put these systems in place, the better. I mean, there are a bunch of different problems. We're running into the problem even of just: how do you divide up that space? For example, you would probably first only make part of this deal whatever is available in the next 10 years, because things are also moving away faster. It's very difficult to even see what will become available at the time that we can settle it.
01:25:45
Speaker
But there can be different time releases and delays, and people have actually thought about this. So there are some of these solutions. But the idea is that if we can come up with a solution, we need to also make it one that many other nation-states and their citizens and civil societies adhere to.
01:26:13
Speaker
So that will be the first challenge. It's not just how AI systems adhere to it, but how we make it even a stable Schelling point for other humans and their various governing bodies.
01:26:24
Speaker
And once we have done that, then I think we can enshrine it, to some extent, in the architectures through which we're cooperating with AI entities. And yes, if they don't uphold any of our legal systems and are just going to blast through them entirely, of course they're not going to uphold this either. But I think the earlier we can come to a mutually agreeable solution that civilization can more or less uphold, can enshrine that in contracts, can enshrine the contracts in code,
01:26:58
Speaker
and have that be the reality these AI systems grow up in, the more likely we make it that they will also consider it legitimate, rather than if we suddenly bring it to the negotiating table: oh, by the way, we granted all humans these wonderful space property rights.
01:27:17
Speaker
Yeah. So we're very much in speculative land here, but again, we don't have much time. These are all problems that we are facing now. And so we need to think about

Creation and Discovery of Technologies

01:27:28
Speaker
it.
01:27:28
Speaker
To what extent do you think technologies are created versus discovered? I'll explain what I mean here. If we say a technology is created, it means that we are making choices that influence how that technology turns out,
01:27:45
Speaker
and the technology is more or less what we wish to create in the world. Versus technology being discovered, which would mean that we are exploring the tech tree, say, and stumbling upon some technology that works in a way that's just inherent to the technology, ultimately because of the laws of nature and the laws of physics and so on.
01:28:12
Speaker
Do you think we are beholden to discovering technologies that then just work in ways that we can't foresee? Because it seems to me that, for example, the way large language models have turned out is mostly us, in some sense, stumbling into that paradigm. And then it works the way it works; it's not a set of deliberate choices.
01:28:40
Speaker
Yeah, again, I'm way out of my depth here, because I'm not a technical scientist in a specific domain. So take anything I say with a huge grain of salt. This is almost a question of philosophy or of history, of technological history. It's not necessarily a technical question. It's more a question of, at the grandest level, how technology works.
01:29:02
Speaker
I think different people have very different theories around it. There are strong proponents of both of these theories, basically. And there are examples of both, I think, to some extent.
01:29:17
Speaker
And actually, I came across this really interesting blog post the other day where someone was trying to calculate how useful very large-scale scientific projects have been.
01:29:30
Speaker
Andrew White at Future House wrote it, I think, and was calculating how much CERN cost and how much we actually discovered through it, and whether that was a useful thing to do, yes or no.
01:29:42
Speaker
And for other things too, like the Genome Project, et cetera. And so he was just trying to see: was this actually a useful undertaking? Can we do anything that is as directed, yes or no?
01:29:53
Speaker
And he was relatively pessimistic in the post, but you never know the collateral effects, I guess. We could still look back on this in 200 years and have a very different opinion about it. But okay, back to your question.
01:30:04
Speaker
To some extent, I think we're discovering things about the world. And in another way, we can still influence that path, that tech tree, usefully, by ordering the arrival of some of these technological developments.
01:30:21
Speaker
And so here we're back in the offense-defense dynamic. At Foresight, we're also building tech trees, and have done a little bit of that in collaboration with FLI. The reason they're useful is not necessarily because eventually you will tick off all the different capabilities on the tech tree, but because, by making the tech tree clear and creating common knowledge around it, you can sometimes incentivize the development of specific technologies first, in tandem with each other.
01:30:55
Speaker
And that also means that you can sometimes incentivize the development of safety- and security-enhancing technologies first within a tech tree. So sometimes the ordering of technological development is important. For example,
01:31:12
Speaker
if we put a lot of resources towards getting much, much better computer security very, very fast, before we have a world of strong AI, I would already be a lot more optimistic.
01:31:28
Speaker
I think computer security is still a super undervalued issue. And I know that many people in our respective sub-communities also care about this, have cared about this for some time, and are definitely gearing up to care about it a lot now.
01:31:43
Speaker
But a lot of that work is focused on AI security and securing AI labs, rather than on general computer security across civilization, securing all the physical infrastructure, including electric grids, nuclear facilities, et cetera, which we're really bad at.
01:31:59
Speaker
And so that's going to really bite us in the butt, for lack of a better term. If we were in a much better world in which computer security was so much stronger,
01:32:14
Speaker
I would be much more optimistic generally about the future of AI development. And if we had a tech tree that laid that out, and had laid it out a long, long time ago, maybe, just maybe, we could have built systems more with that in mind.
01:32:30
Speaker
So sometimes I do think that just by creating common knowledge around what's possible, and by simulating and designing it out, you can influence the ordering of technological development, not perfectly, but to some extent.

Hope for the Future

01:32:43
Speaker
How should we think about having children if we are at the cusp of developing very powerful AI? Now, this is definitely not advice or anything, because I'm not even a parent yet, but I will be in three weeks if everything goes well from now on.
01:33:01
Speaker
So take this with not just a grain of salt: here's someone who doesn't even have kids yet talking about that future. But it's definitely someone who has grappled with it, because I've created what will soon be a child, and I've done so very intentionally. So I think that,
01:33:27
Speaker
to some extent, we have to believe that the world will continue, and we have to believe that we have a future. That is, to me, almost the number one necessary condition for us to have a chance of having a future at all.
01:33:45
Speaker
Again, in collaboration with the Future of Life Institute, we were really leaning into our existential hope track at Foresight and created a lot of worldbuilding around positive futures and positive worlds, and what a positive world with strong AI could look like in the next 10 years.
01:34:01
Speaker
And if our entire work on existential hope has taught me one thing, it's that it's really important that we at least have a grain of hope that we can make it through.
01:34:16
Speaker
Because if we don't, then the chances go way down. And I think if we're only ever focusing on the world we're trying to avoid, while never actively building for something, you can do that all day long; you can just try to prevent things that you don't want. But if you never build the stuff that you do want, then there's just less and less even to strive for, because you're not putting anything in place that actually makes life worth living.
01:34:45
Speaker
And I think having a child is probably one of the most impactful things I will ever have done in my life. Definitely one of the most formative, and it's also what makes life worth living at the end of the day.
01:34:59
Speaker
And so I do think that without creating these beacons of hope for yourself, what's the point of it all, really? So I don't feel guilty about it. I have no qualms about it. I'm pretty excited about it.
01:35:17
Speaker
And I think that, to some extent, hyperstition, even if it's not literally a real thing, if all of us engaged with the world through a more positive, collaborative lens,
01:35:33
Speaker
rather than only through zero-sum dynamics and "we might not make it through" and "the whole economy is going to crash," you are, to some extent, creating the world that you live in just by virtue of how you show up every day in the world.
01:35:47
Speaker
And so this kind of local living of your values is important. Yeah, that's a good way to end this interview, I think. Allison, thanks for chatting with me.
01:35:59
Speaker
Thanks a lot, Gus. It was very fun.