
Emilia Javorsky on how AI Concentrates Power

Future of Life Institute Podcast
4.7k plays · 5 months ago

Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation. 

Apply for our RFP here: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/

Timestamps: 

00:00 Power concentration  

07:43 RFP: Mitigating AI-driven power concentration 

14:15 Open source AI  

26:50 Institutions and incentives 

35:20 Techno-optimism  

43:44 Global monoculture  

53:55 Imagining utopia

Transcript

Introduction to the Podcast

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker, and I'm here with Emilia Javorsky, who's also from the Future of Life Institute. Emilia is the director of the Futures team at the Future of Life Institute. Emilia, welcome to the podcast. Thank you so much for having me, Gus. Great. We're talking

Power Concentration in Society and AI's Role

00:00:20
Speaker
today about power concentration. Why is that something we might worry about? Yeah, so power concentration is something we think about in general in our society, where power is already fairly concentrated in a few hands. But that's distinct from what we talk about when we think about AI, because AI-driven power concentration
00:00:42
Speaker
risks concentrating power into the hands of even far fewer people than it already is. And so from my perspective, there's something really wrong if AI enables a single private actor, with no outside checks and balances, to potentially have the power to choose what future humanity is ultimately going to have. And this is not just something that I'm concerned about. This is something that even the leaders of major labs have talked about. They've gone on the record saying that they're uncomfortable with the amount of power that developing and deploying these systems confers, and that at some point this probably shouldn't be in the hands of just private actors, because that level of power concentration would be undemocratic, which is something I think Dario Amodei has said previously.
00:01:33
Speaker
So that's not something you typically hear from the tech world, and I think it raises some alarm bells when you hear it, and also when you see the dynamics unfolding around the level of power concentration and just how many dimensions that power exists across. What do you mean by

Dimensions of Power in AI

00:01:52
Speaker
dimensions here? What are the multiple dimensions where we could have power concentration? Yeah, so I think when people hear 'concentration of power,' the most common analogy in everyday speech is wealth inequality or income inequality. That is certainly one dimension: financial power and concentration of capital is an aspect of this. And we're talking about potentially concentrating capital at rates far, far higher than today's, especially when many of these labs see AI as a winner-take-all dynamic, where whoever that winner is could effectively almost take over the economy.
00:02:33
Speaker
But there are other dimensions we need to think about as well. There's political power. There are the relationships that companies have with governments. We've already seen quite a bit of this in the recent big tech era with social media, and the lack of meaningful rules, legislation, and guardrails being passed there. But we're seeing it today in big tech lobbying against AI guardrails and having quite a bit of influence over the trajectory that government regulation takes. We see it in the power of labor. What's really interesting is that before, it's been quite separate: people do the work, and they work for companies.

Impact of AI on Labor and Society

00:03:15
Speaker
What AI introduces to the table
00:03:19
Speaker
is that by being able to do more and more of the tasks a human could do, you get this replacement of human beings by GPUs. And so now you have a single company that could control the labor workforce. There's the power of attention, right? Our attention as citizens, what we pay attention to, what content we consume, how we feel. We're seeing the dawn of AI being used as companions, or in ways that can emotionally hack us. For each of these dimensions, you can debate whether or not the concentration in today's world is healthy,
00:03:59
Speaker
but the world we're heading towards is one where this gets very much more concentrated. The result of that is a fundamental disempowerment of people. It's probably going to curtail innovation quite a bit under that kind of monopoly dynamic. And then you're going to end up with a monoculture, which is also something I think most people don't want to live in. So these are all quite scary dynamics. And I guess the

The Reality of AI-Driven Societal Change

00:04:26
Speaker
last point I would make is that you don't need artificial general intelligence or superintelligence for this to happen. That's another point that gets confused in the conversation. It's like: oh, well, this AI functional takeover of society, where we all end up disempowered,
00:04:44
Speaker
is a function of superintelligence. I don't think that's the case. With the technology and capabilities we already have, we are very much on the trajectory to end up in a situation where most people in the world are not empowered or able to choose what future they want. Yeah, I think when you say power concentration caused by AI development, some people will think of totalitarian surveillance states, and other people will think of big tech and monopolies. Is it useful to talk about those two categories of risk under the common umbrella term of power concentration from AI?
00:05:26
Speaker
Yeah, I

Surveillance State and Private Tech

00:05:27
Speaker
think it is helpful to talk about those as categories. You could also have them be a single entity, right, a surveillance state run by a big tech monopoly. When we think of a surveillance state, we always assume it's the state doing the surveilling. But in this case, I think there are possible futures where that surveillance state is enabled by a private entity and extends beyond borders. That's the thing about being in the digital sphere: could you have this not just be a surveillance state that respects state borders, but a global surveillance state?
00:06:03
Speaker
So yes, in general, I'm quite afraid of that. I think there are quite Orwellian dynamics already afoot, and those could well continue; happy to delve into what those are if of interest. First, I'm just worried that there might be a lot of market demand for these big tech companies, and that this drives the concentration of power. Is it just the case that consumers want what these companies are offering, and that is what is causing power to concentrate? And does that make it more difficult to push back, perhaps?
00:06:41
Speaker
Yeah, so I think it's interesting that none of these companies yet has a viable business model, right? So it's not that the demand is coming because consumers are asking for what they're providing, though they're certainly using it. And just for the listeners, the companies we're talking about are companies like OpenAI, Google DeepMind, and Anthropic, for example. Yeah, I agree. It's actually really worth making the point clearly that the AI-driven power concentration we're talking about comes from these very large general-purpose systems that are only being developed by a handful of companies. It's not that AI in general will lead to this result. AI could actually be used to
00:07:25
Speaker
decentralize power and create some really cool new structures and institutions and technologies that make life more empowering for us all, if we choose to develop it that way. This risk we're talking about, AI-driven power concentration, comes from a very, very narrow set of actors and systems. Yeah. And

Proposals to Counteract AI Power Concentration

00:07:45
Speaker
because we're worried about power concentration from AI, we are launching an RFP. What is an RFP, and what is this RFP about? Yeah, so we are launching an RFP, which is a request for proposals. We think that AI-driven power concentration is a very urgent problem, given how quickly these systems are moving ahead, how many domains of society they're starting to permeate, how much more capable they're becoming, and the fact that companies are very much locked in a race dynamic. We've seen demo days happening within a day of one another, right? It's very salient how quickly this is moving. Given that accelerant in the AI conversation, this question around power concentration is, I think, probably one of the most pressing problems of our time. And it's not really happening as a societal conversation. Even within the conversation around AI and its risks and downsides, the power concentration piece has been a side mention, not central to the conversation. And this is a conversation we need to have as a society. So an RFP is, on one level, saying: hey, this is a really important topic we need to think about. And the second piece is that we're putting some money behind it to identify potential solutions. So this is really a call for solutions, institutions, ideas, and tools that can start to take on this question and figure out how we start to steer in a different direction,
00:09:23
Speaker
and also to spark the thinking of folks who may have a tool or an idea in a related field, or who haven't thought about working on this problem, and say: hey, this is a problem. Here's some funding. Show us what you can do. What's something you'd be excited to see here? What would be

Empowering Individuals with AI

00:09:42
Speaker
an example of a proposal you'd be excited about? Oh, because this extends through so many domains, there are just so many cool ways you could start to tackle this problem. There's the idea of public AI, right? What does AI look like with people developing it, or governments developing or funding it? There are ideas about new ways AI could actually empower people.
00:10:14
Speaker
Could you have a loyal AI assistant that does a lot of negotiation on your behalf and makes sure you're empowered and informed in ways that all of us, bombarded by terms and conditions and checkboxes we can't navigate, currently are not? There are new ways AI could enable us to cooperate. And I think cooperation is a key, key piece of actually building to critical mass for change, and of finding new ways for people to cooperate in a society where perhaps the traditional ways have been eroded: we're more isolated, we're less part of communities, and there are fewer avenues to come together and develop
00:11:00
Speaker
collective and collaborative ways of working. AI is really interesting for the different tools it could unlock in that domain. So proposals could be all kinds of different things. One thing to be mindful of is that they have to be pragmatic, right? That's one of the challenges of thinking about different ways of doing things: you can reimagine a totally different structure of incentives for these companies, but how do you bridge from here to there? So the challenge is figuring out how to come up with ideas that perhaps start at a nascent level but have a viable path to scaling and happening in the real world.
00:11:48
Speaker
You mentioned having this loyal AI assistant as a helper. I guess we're on the verge of seeing these kinds of AI agents that can help us draft our emails, respond to things, and organize our calendars. And those agents will have multiple different constraints. They'll be constrained by what the company wants them to do, and they'll be constrained by what the user is interested in. Do you have ideas for how to balance those two constraints? What do you do if the user wants to do something that the company doesn't want, or if the company wants to force the user in a certain direction? What would be empowering for the user in that scenario?
00:12:33
Speaker
Yeah. I

Independent AI vs Corporate Agendas

00:12:34
Speaker
mean, I think this is why you need independent development of these tools, right? You need a system or an agent or a companion, whatever your assistant may be, that has a fiduciary responsibility to you and your best interest, because I think it's very difficult for any user to ascertain whether a recommendation from the company is in their best interest or the company's. There's just fundamental misalignment between what's best for you and what's best for a company. So I think that's where some of this work around public AI, or R&D developing things outside of companies, comes in. That's challenging for a whole host of reasons given the
00:13:22
Speaker
economics of the situation, but it is a really interesting and viable path. I would add that information, and being able to make sense of information, is another domain where we'd be excited to see proposals, and it's really important: how do we actually ensure we're getting accurate information, and what are the tools for improved sense-making? A big aspect of power concentration comes from not having a common vernacular for all of us to be able to speak to one another, right, when we see floods of disinformation online and synthetic content, and when we're
00:13:59
Speaker
put into ideological bubbles by recommender algorithms. So tools that can help break some of that and get us back to a common ground of: all right, what are the facts, and let's start making decisions from the facts, can, I think, also be helpful there. Some listeners might

Limits of Open Source AI

00:14:17
Speaker
be thinking right now: Emilia, we have the solution to power concentration from AI. It's open source. If you have open-source AI, one company is not in control of that AI. So maybe we should talk about the pros and cons of open source as a method of decreasing power concentration.
00:14:36
Speaker
Open source, I think, is helpful but not sufficient, and I'll go into why that is. It's also important to say what we mean by open source, because open source has historically meant one thing, and I think people are flinging the language of 'open' around and doing some hand-waving when it comes to AI, because open is signaled as a good thing. So a lot of people are taking advantage of the fact that it's great, trustworthy branding. Does open source mean that the model weights are out there for people to download, or that the training set is out there for people to download? It can mean a bunch of different things, and you can open-source to different levels.
00:15:18
Speaker
Totally. And I think what people tend to mean by this is the open-model-weight framing: the model weights are open, you can build on them, and you can tune them for a specific application. But there's a paper I highly recommend by Meredith Whittaker of the Signal Foundation that really delves into this question: what does it mean to be open or not open? And even under the more radical definitions of open, they argue that this still won't necessarily meaningfully solve the problem we're talking about, which is power concentration and undemocratic uses of AI. So I think there is actually some
00:16:04
Speaker
danger in seeing open source as the solution, because, for lack of a better term, it's power-washing: the idea that you're solving this problem with a strategy that is not actually going to solve it. Open source is certainly going to be helpful in some cases, especially for realizing the benefits of AI and being able to tune certain models for different applications: to do scientific research, to do education, to do all the amazing stuff that AI can do. I think models should be as open as they can be as long as they're safe, and we shouldn't be developing models that aren't safe, so my argument would be that most things should be open. But the reality is that, at the end of the day, these frontier models cost hundreds of millions if not billions of dollars to train. That prohibits most small actors, or other actors, from actually being in this game and developing a truly open model, and the big players will always have a competitive edge. We've seen this in open source's past, right? Big tech is still an industry, even though we have a lot of open-source tech.
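To make the open-model-weight framing concrete, here is a minimal sketch of what publishing weights enables in practice, using the Hugging Face transformers library; the model identifier is a placeholder, not a specific release discussed in the episode.

```python
# Sketch: with open weights, anyone can download, run, and fine-tune a model,
# even when the training data and training process remain closed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "some-org/open-weight-model"  # placeholder, not a real release

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

inputs = tokenizer("Open weights let anyone", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```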
00:17:19
Speaker
And so if open source were the be-all and end-all of taking on this problem, our current landscape of technology would look very, very different. There are even pieces here that get at that disempowerment piece. Think about Meta, which has been at the forefront of this open-source conversation: we're opening it up, and isn't this great? Well, part of what they're doing is opening it up so people can build cool new things that, one would argue, Meta can then feed back into its own product offerings, right?

Meta's Open Source Strategy

00:17:57
Speaker
So you're essentially outsourcing free R&D to the open-source community that you can then develop products on. So I would be quite careful. There's also the fact that once things are open, liability becomes a lot more difficult to prove. So another interpretation one could have is that opening things up is also a way to try to defuse the liability questions.
00:18:22
Speaker
Why do you think Meta has taken the strategy of open-sourcing their models? Is it, as you mentioned, because it might allow them to outsource their R&D to developers? Or is it also because they perhaps want to initiate a race to the bottom in model cost? If you have open-source competitors to the leading corporations, that might hurt Meta's competitors, which would be good for Meta.
00:18:54
Speaker
I mean, this is something you and I have spoken about in the past, which is the whole commoditizing-the-complements argument and the long history there. This is not the first time that tech companies have opened something up for a competitive business advantage. For people who aren't familiar with the term, commoditizing complements essentially means that you want to drive the price of products that complement yours down as low as you can, to increase demand for your core products. You can think of cars and gasoline in this model, right? The cheaper the gasoline is, the more attractive it becomes to actually buy the car.
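As a toy illustration of that dynamic, the sketch below models demand for a core product (cars) as a function of the price of its complement (gasoline); all numbers are invented for the example, not market data.

```python
# Toy "commoditize your complement" model: demand for the core product rises
# as the complement gets cheaper. Numbers are illustrative assumptions only.

def car_demand(gas_price, base_demand=1000.0, sensitivity=120.0):
    """Simple linear demand: cheaper gasoline means more cars sold."""
    return max(0.0, base_demand - sensitivity * gas_price)

for gas_price in (4.0, 3.0, 2.0, 1.0):
    print(f"gas at ${gas_price:.2f}/gal -> ~{car_demand(gas_price):.0f} cars sold")
```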
00:19:31
Speaker
So this can be a strategy for your own business, but also a strategy to kneecap a competitor's business where the complement is their core business, even though it's not core to you. A common example of this is Google and Android with operating systems, right? By having a free operating system, you drove down the cost of Android devices; they could be offered at a much more democratic price, and that was an assault on Apple with its proprietary operating system. There are many, many examples in tech history of this happening. So to your point, there's one piece here of:
00:20:14
Speaker
well, why would they train this very expensive model and then just make it freely available? Because, I would argue, they're not in the AI business: gen AI is not the core business of something like Meta. It's all of the different product offerings they have, like Facebook and Instagram and other arenas, that could really benefit from having AI features delivered on them. They're also in a landscape where big tech has a certain architecture of who the players are, and a general consensus that they're competitors with one another. There are companies in there looking at AI as: could this be a core business? So by making an alternative freely available, Meta is posing a challenge to the business strategy of those that are going to be investing in the models as the business.
00:21:14
Speaker
Yeah. And so what is the problem with that? It might be a benefit to Meta, but could it be a benefit to Meta and a benefit for developers and consumers as well? Or is there some hidden cost to this? The answer is

Academic Benefits of Open Source AI

00:21:30
Speaker
both. Having an open-source model comparable to what's in private hands has been extremely helpful for academics doing research, even those doing AI safety research, because those doing AI safety research need a model that's somewhat state of the art to play with, access, and have visibility into, in order
00:21:54
Speaker
to do the kinds of experimentation they need to do to make progress in the field. So I think that's a really important piece here. When we think about what we ultimately want from AI, we want AI to solve problems and move fields forward. There's debate as to whether these general-purpose models are necessarily the best way to do that. A lot of what we've seen, like AlphaFold and antibiotics and new fusion reactor designs, has come out of bespoke systems trained on a very specific set of data, not necessarily these large models. So
00:22:34
Speaker
I think there's probably some marginal benefit there, but I would still argue that a lot of the AI benefit we could get going today is more on the narrow side. But there is the question of races to the bottom, as you said. All right, it's great to have something open and available to do AI safety research on, but do we know that the model we actually released is safe, right? By creating these competitive pressures to go faster and cut corners, what you collectively have is another accelerant in a race that is already a very dangerous path that we're on.
00:23:18
Speaker
So you mentioned the difference between these general-purpose models, ChatGPT for example, models that aspire to become artificial general intelligence, with several companies developing them. That's on the one hand. On the other hand, you have something like AlphaFold, a specialized system that does one thing at a superhuman level. These are two different visions for how to develop AI. You could have narrow superhuman AI, or you could try to develop an AI system that's general and more capable than humans across a wide range of domains. Which of these visions do you think is best from the perspective of avoiding power concentration?
00:24:02
Speaker
My answer there covers both what is best for avoiding power concentration and what is best for actually realizing the benefits of AI: realizing benefits, but also maintaining a highly competitive environment where there's a lot of incentive to innovate and build and create new things. When you look at the more bespoke AI side, using AI to solve problems, it answers the question of why we are developing AI in the first place. Ostensibly, it's because we want to solve the problems that we have, take ourselves forward, and realize really cool futures with technology.
00:24:49
Speaker
So: getting started on how AI can actually be applied to solving a lot of the key bottlenecks we have in science, in manufacturing, in materials science, in some ways in education, in more development-oriented goals, in climate. There are just so many areas that right now should be on fire with investment dollars and teams and startups harnessing this technology to actually develop solutions to those problems. We should be in a golden era of collective building and collective problem-solving with everything that's coming online in the form of these technologies.
00:25:34
Speaker
That is the world I would really like to live in. That's the world where I see AI creating a whole bunch of different companies in a competitive environment that keeps moving forward, with the forces of capitalism steering it in the right direction. The other vision, a few companies locked in a race to build a superhuman general-purpose system, is not in any way going to help with power concentration. I think it is the worst scenario for power concentration. It is also probably one of the more dangerous scenarios across the board: from a geopolitical standpoint, from an AI safety standpoint, and from an
00:26:20
Speaker
ability-to-preserve-a-society-of-choice standpoint, which I think is really, really important. So yeah, I'm squarely in the former category more than the latter. Okay, so we've established, or at least talked about, that we are not interested in having a few companies race to develop superhuman general intelligence, and that perhaps open source is not enough to prevent power concentration. What else do we have on the table? In terms of incentive structures, which I know you care a lot about, how can we design institutions that encourage development of AI in a positive way?
00:27:03
Speaker
So

Policy Levers for Positive AI Development

00:27:04
Speaker
I think there are many different levers you can start to think about here. One is on the policymaking side, which is what we traditionally think of as setting the rules of the road in a society. There are things we already have, like antitrust, right? The idea that we should not have a monopoly is very much encoded into the bread and butter of what government does. Another area that is pretty
00:27:35
Speaker
ingrained in our society, but less so in tech up until this point, though it should be, is liability, and strict liability provisions in particular. What you effectively see in these more monopolistic dynamics is that companies take more and more risks and shift the negative externalities of that risk onto society rather than the corporate entity. They're able to do that in part because of the lack of liability provisions, and of architectures for establishing liability, for that shifting. So I think that's another piece that needs to be
00:28:13
Speaker
taken on. There are also nationalized investment strategies here. You could imagine the public AI idea: a national project to develop AI that belongs to the citizens of a country, like a Manhattan Project or something in the national labs, done to very high safety standards. You can imagine nationalizing, or directing lots of public investment into, things like compute, so that they actually belong to the public and not private entities, and putting general restrictions on how much compute any single entity can use. And then there's the general thing that governments can do, which is create guardrails and standards that create a race to the top. One area I think about:
00:29:00
Speaker
by background, I'm a physician and scientist, so I think about where the FDA did well and where the FDA did really badly. There are lots of problems with the modern FDA, but if we go back in history to its origins, it was a wild west of snake oil and claims and toxic products on the market. By coming in with its Modernization Act, the FDA basically set standards. And the standards were: hey, these things need to be safe, right? What that effectively did was reset an industry in which consumers couldn't tell what was safe and what wasn't, and were taking on all that risk. It created a bar that everyone had to pass in order to get their products on the market.
00:29:48
Speaker
As the agency evolved, there was this standard of being as safe or better, right? And by putting the as-safe-or-better standard on there, you created an incentive to develop systems that were aligned with safety. There are plenty of examples of this in aviation and other high-risk industries: setting a bar, incentivizing being at or above that bar, and that bar then being internalized as a core value of the industry, because that is the bar to which they're held and to which consumers will ultimately hold them. So those are all ingredients that governments could potentially employ to try to tweak incentive structures.
00:30:34
Speaker
I don't think those are the only ones that exist. We also have collective power as a society, as consumers, to figure out what it is we want in the world. And AI tools that enable more collaboration and cooperation could be a very interesting area of that. Look at what happened with central banks and crypto: people can develop things that are cool and take on existing systems. That is very possible. So I think that is another area that warrants a lot of thinking and investigation. And I would say, again, it's really urgent, because the most collective agency that we have as humanity, as citizens, is today.
00:31:23
Speaker
As these systems get more and more capable and power gets more and more concentrated, our leverage and agency go down, which is why we really need to get started on tackling this problem. I think these decentralized solutions might be some of the most interesting, if they can work. How do you see the prospects of decentralized development of AI? So

Decentralized Strategies and Challenges

00:31:46
Speaker
I think in terms of decentralized strategies to take on this question: we talked about this a little bit with public AI, development that could be done by a government or by a collective. And we've seen decentralized organizations, or DAOs, in science and in many other fields,
00:32:10
Speaker
enabling people to collectively do things and basically creating a digital infrastructure that previously only existed in physical form. In the science field, for example, you basically created the equivalent of a digital academic medical center to enable people all over the world to do research in a decentralized way. So there are other ways we could take a decentralized approach to building. There's really interesting work happening at organizations like the Collective Intelligence Project, who are looking at new AI-enabled tools
00:32:46
Speaker
to incentivize democratic behavior. This ranges from citizen assemblies to new ways of soliciting preferences and aggregating them, which is quite cool. So there are a lot of different ideas brewing out there. A problem with the decentralized model is that lots of ideas get prototyped in this space; the challenge is getting those ideas to scale and to reach critical mass in order to have impact. That's something we'd be super excited to see, going back to the RFP piece: what are the tools we can build to actually scale these decentralized entities? When you're a corporate entity, you have a marketing budget and a marketing team and a whole corporate wind in your sails to get products out into the world and make them grow. What are ways we can do that for entities that don't have that? And could there actually be decentralized ways of creating that corporate infrastructure for growth? So that's an area where I agree with you: it's very promising, and I'd like to see more activity happening.
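As a concrete illustration of the preference-aggregation primitives mentioned above, here is a minimal sketch of one classic rule, the Borda count; the options and ballots are invented for illustration, and real AI-enabled tools would build on far richer mechanisms.

```python
# Minimal sketch of Borda-count preference aggregation: each ballot ranks
# options best-first, and lower-ranked options earn fewer points.
# The options and ballots below are invented for illustration.
from collections import defaultdict

def borda(ballots):
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for rank, option in enumerate(ballot):
            scores[option] += n - 1 - rank  # top rank earns n-1 points
    return dict(scores)

ballots = [
    ["public AI", "open weights", "status quo"],
    ["open weights", "public AI", "status quo"],
    ["public AI", "status quo", "open weights"],
]
print(sorted(borda(ballots).items(), key=lambda kv: -kv[1]))
# [('public AI', 5), ('open weights', 3), ('status quo', 1)]
```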

Combating Misinformation with Decentralization

00:34:00
Speaker
And one other point I want to add on the decentralization side is on the sense-making side, which I think is very important. We've seen tools come online like X's Community Notes, or other entities doing news aggregation like Public Editor or Verity.News, that are trying to use the community, use decentralized infrastructure, to help us reach better decisions and better information, and to challenge the information we're given. Realizing the power of the collective can actually be a really powerful antidote to misinformation and disinformation and attempts to erode our epistemic foundation as a society. So I think a lot of those tools are also really exciting. They're not
00:34:51
Speaker
aimed squarely at power concentration, but they are aimed squarely at the foundation on which it is built, the foundation we need to have solid if we're actually going to take on the problem. Yeah, you mentioned this earlier: if you have a concentration of knowledge or information in a few hands, that is in itself a form of power concentration. We've talked about how we want to avoid a race to develop superintelligence that results in a lot of power concentration. Maybe we should talk a bit about what future we actually want. Tell me to what extent you're a techno-optimist.
00:35:28
Speaker
So

Techno-Optimism and Future Solutions

00:35:29
Speaker
I am deeply techno-optimistic, which sometimes you can't tell when we talk about risk. People tend to like to talk about risk more than benefit. Having been in this area for a while, doing work on both the risk side and the benefit side, the risk side just tends to get a lot more attention, given the incentive structures of information that we exist in. But the benefit side is the one I'm really excited about, and it's why I do this work in the first place. By training, I'm a physician and scientist; with my academic hat on, I research how to make people more resistant to radiation through different drugs and strategies, with the hope that we will one day get off the rock and go to space. And so it's very much in my DNA, and I was formerly an entrepreneur, this belief that
00:36:22
Speaker
technology has the power to change society for the better. I think one of the collective tragedies we have in the information space is that we're not techno-optimist enough in the right way. There's this collective zeitgeist that it's impossible for our best future to be ahead of us, that people are resigned to this feeling of: well, it's not going well, therefore it's going to continue not going well, and a dystopian technological future is inevitable. Which is totally the wrong frame of mind to have. First, things are actually way better relative to historical norms. We may not feel like things are getting better, but overall a lot of things are. There are some really big, unique challenges we have now, and the work we do at FLI is about the unique challenges posed by that progress, the ones we need to solve and break through to get to the next phase.
00:37:23
Speaker
But this idea that our future is inevitable is just nonsense. We are building technology; we're building the future. There are solutions to all of the problems we've highlighted. Those solutions are within our grasp, either through collective action or through technology research and architecting. Getting people excited about that, and believing it, is actually one of the most important priorities of our time. Because if people are not excited to innovate, build, and solve problems in order to get someplace great, we're just not going to solve those problems, we're not going to build, we're not going to do anything, and we'll succumb to this doomy inertia we're all under. So I'm super bullish on technology. And my bullishness is also the reason we need to be careful that we don't drive off a cliff, or get ourselves into a future where we've taken off the table the ability to let capitalism do its thing, to innovate and build companies and new technologies, by ending up in a world where we're stuck in a monopoly or digital authoritarianism.
00:38:33
Speaker
One of the results of AI concentrating power might be that money, or economic resources in general, also become extremely concentrated. What

Redistributing AI Wealth

00:38:44
Speaker
can we do to fight against that? What are some interesting options here? Yeah, there are a lot of different ideas about that. I'm always reminded of the Arthur C. Clarke quote that the goal of the future is full unemployment, so we can play. That part gets quoted quite a bit, but the sentence after it was: that's why we have to destroy the present politico-economic system.
00:39:11
Speaker
And I think that kind of sums it up. If we get to a future of true abundance (and there are arguments to be made that we're already in a relative period of abundance compared to historical norms), there's not a lot of great evidence of people who control abundance developing redistribution mechanisms for it. So figuring out how we actually create a forcing function for that will be important. On the question of how you then redistribute it: schemes like universal basic income have been talked about in this area. People have talked about universal basic compute. The Alaskan dividend is another example that's been touted as a redistribution mechanism. And this is something that my colleague Anna in the Futures program is actually working on: the idea of the windfall trust and the windfall
00:40:08
Speaker
clause. This is the idea that if a single entity's earnings were to reach a certain share of GDP, the windfall would then be distributed to the global population. It's a great idea in principle, but how do we actually reduce it to practice? What does that mean? Is the corporate entity distributing it? What is the mechanism of distribution? Are governments doing it? Is it an independent entity, like a trust? How do we find everyone in the world who needs to receive it? Is it just within borders, or throughout humanity? There are a lot of questions that have to be thought through here, and they need to be thought through and prototyped and pressure-tested now.
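To make the mechanism concrete, here is a hypothetical sketch of how a windfall-clause trigger and obligation might be computed; the threshold, rate, and gross-world-product figure are invented assumptions, not the actual proposal under development.

```python
# Hypothetical windfall-clause trigger: once a firm's annual profit exceeds
# some fraction of gross world product (GWP), a share of the excess is owed
# to a distribution trust. All parameters below are invented assumptions.

GWP = 105e12         # rough gross world product in USD, illustrative
THRESHOLD = 0.01     # clause triggers above 1% of GWP (assumption)
MARGINAL_RATE = 0.5  # share of excess profit owed (assumption)

def windfall_obligation(annual_profit):
    excess = annual_profit - THRESHOLD * GWP
    return max(0.0, excess * MARGINAL_RATE)

for profit in (5e11, 1.5e12, 3e12):
    owed = windfall_obligation(profit)
    print(f"profit ${profit/1e12:.2f}T -> owes ${owed/1e12:.2f}T")
```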

Complexities of Global Redistribution

00:40:53
Speaker
Because when that day comes, you need pre-commitments and a plan to have any hope of this working. Again, I'm a huge Lord of the Rings fan, and I always think about Mount Doom: you have the best little hobbit in the whole world, with this pure soul, and he still can't throw the ring into Mount Doom. When it comes to giving up power, 'absolute power corrupts absolutely' is just key here. So figuring out and pre-specifying ways to actually have a forcing function on redistribution, when we get to that point, is a really important priority.
00:41:32
Speaker
And the idea here would be that companies pre-commit to distributing funds if they reap enormous benefits from their development of AI, so that they would be legally obliged to distribute a bunch of money to either the citizens of a country or everyone in the world. Do we have examples of companies signing such a pledge, or anything like it? Yeah, a few of the companies have committed to the windfall clause, so it's worth looking that up. But that's the notion of doing this, not the plan for doing it, and the devil is always in the details. So specifying what that plan is will be very important.
00:42:16
Speaker
Another vector worth considering is that that level of concentration of capital is probably also going to be accompanied by concentration along the other dimensions. So we also need to be thinking about: what is the windfall clause equivalent for governance and decision-making? What is a windfall-clause-like entity for culture? The idea that we're going to end up in a monoculture seems very nearly inevitable at this point. Even just playing with AI image generation, you see this reversion to the mean, right? With everything you make, you go: got it, it's some vaguely cyberpunk or solarpunk aesthetic. I get it, I see where you're coming from, AI system. But for this
00:43:04
Speaker
to be a truly empowered future for humanity, we need not only representation in the data, governance, and benefits of these systems, but also to create a world where choice still exists, where there's a myriad of futures people can live in, a world that respects different values and ideas and cultures and ways of working and ways of engaging with these systems. And I think that point is also one that gets
00:43:36
Speaker
sidelined a little bit in the conversation, because people really focus on the capital, and the capital is a very important thing to focus on. But I also don't want to live in a future where all of humanity is prescribed the one culture, way of thinking, and set of values that got built in today in the Bay Area, which is arguably not the most representative place for humanity's ideas, values, and ways of living. Yeah, it would be a weird coincidence if the perfect instantiation of human values were developed in the Bay Area right now. So we definitely don't want this monoculture. But please say more about why you think we're headed in that direction. What is it that you're seeing? Is it just because a lot of the models at the top have been
00:44:30
Speaker
fine-tuned in a specific way, to give you a result that incorporates the values of the particular humans who were part of the reinforcement-learning-from-human-feedback process? What is causing this monoculture, do you think? I would first say that there is no perfectly representative human future, and I think all attempts at one are destined to fail. We are just way too diverse in our ideas and preferences and ways of living:
00:45:01
Speaker
one person's utopia is another person's dystopia, and that's just never going to be reconciled across the board. Nor do I argue it should be. As a biologist: monocultures die. They're vulnerable. You need a diverse landscape of cultures and ideas. This is why diversity is important, at the level of survival and of the resilience of institutions. What you see is partly what you're alluding to, yes: you have a specific body of training data, mainly used in English, being fine-tuned in specific ways to say what's good, what's bad, what's hate speech, what's not hate speech, what we like, what we don't like, what's acceptable, what's unacceptable.
00:45:44
Speaker
But I think the problem is that the only systems being built right now reflect that worldview. We don't really have the counterfactual of what systems look like when built with a very different set of input data and a very different kind of tuning. Because we don't have that, it becomes really difficult to see the differences in the outputs and in the user experience. These systems are also being released into a dominant incentive structure. It's no coincidence that Silicon Valley is the product of the incentive structure we have; that's what enabled it to be so prolific and to generate so much capital and wealth in society. So it is also those incentive structures, acting on the system and on how it behaves in the real world,
00:46:33
Speaker
that further tailor it and tune it towards a specific culture. And it's also the culture and values it's being released into. So I would say there are two pieces here: the system itself, and then the environment around it and what that environment is incentivizing and not incentivizing. Are there any other projects the Futures team is working on that you want to highlight here?

FLI's Futures Team and Technology's Benefits

00:46:57
Speaker
Yeah, so the Futures team is working across a variety of different projects. Our general charter is the second half of the tree. If you were a fan of FLI back in the day, our original logo was a tree: on one side it was very dead, and on the other side it was very leafy and green.
00:47:16
Speaker
The Futures team is the leafy and green side of FLI: how do we actually realize the benefits technology has to offer? With this power concentration piece, you could say: wait, how does that relate to a good future? But it's about developing the tools that enable an empowered human future. That's the other way of seeing it. It's not just anti-power-concentration; it's pro-human-agency, pro-democracy, pro-diversity of ideas. That, to me, is at the essence of this. It's not just solving the problem; it's creating the tools and institutions and technologies that enable the futures we want. I think
00:47:55
Speaker
another step in there is envisioning what futures we want, and getting people to think about that. What future would you want to live in? Not in a prescriptive way, but in a way that encourages ideation and gets people excited. We have so many dystopian narratives of the future out there, and there are many reasons for this. One is: if it bleeds, it leads. People like dark and scary stuff. It clicks, it gets views; our brains are hardwired to pay attention to it. But the other piece is that there are real challenges in imagining things going well.
00:48:35
Speaker
It's very easy to do dystopian work, because it's all cautionary tales; it's all: how do you break something, how did this thing break the existing thing that worked? It is infinitely harder to think about how you build something new, how you build something exciting. We know it's really easy to complain about stuff, and really easy to break things. It's way, way harder to think up solutions and bring them into the real world. And that's been a challenge we've seen in general as our society has moved away from that. The digital world is very attractive because it doesn't have all those problems that atoms have when you try to make stuff happen in the real world:
00:49:20
Speaker
things don't behave the way you want, you have to scale them, it's difficult, there are all of these challenges. But instead of seeing that as difficult, we should see it as exciting, as the things we need to do to get where we want to go. The more we just play in the digital world, the more it's just inertia; we're not actually moving anywhere. Ultimately, the success metric for AI will be: of the things it dreams up, how good are we at actually making them happen? Right now we need to do a lot of work on figuring out the infrastructure, the talent, and the stories and narratives that get kids excited, that get people in the culture excited, to actually take on that task. Because whether or not we realize the potential of AI really hinges on that. It's not on the AI.
00:50:09
Speaker
Why is it so difficult for us to imagine positive futures? I think

Overcoming Dystopian Visions

00:50:14
Speaker
you're right that dystopias are just much more common in fiction and in entertainment in general. Even when I try, right now or in general, to imagine a positive future, it might be something like relaxing on the beach, but that's a very limited vision of what we might achieve. And I think this is a general feature: whenever humans in the past have tried to imagine utopias, we end up with something limited, something that says a lot about the problems people were facing when they imagined that utopia, like: what if we could have infinite food, and so on.
00:50:49
Speaker
That is now a solved problem, in parts of the world at least. So is there some evolutionary explanation here, where it's just more important to be aware of threats than to imagine positive visions? As a biologist, I'm obliged to say yes, because we do have much more salient memories and experiences of bad things that happened to us. It's why we're hardwired to be afraid of spiders and certain other things in the world. But I would say the amazing thing about being human, having consciousness, and having imagination is that we can vanquish and get over those evolutionary hurdles built into us. And I think
00:51:31
Speaker
there are great examples of this. Star Trek was a pretty hopeful sci-fi series. Look at H.G. Wells's Things to Come, another really interesting and hopeful narrative. So these things do exist. Even in its own way, something like Avatar is very hopeful and optimistic in thinking about the future. I reflect on whether there was a time when the zeitgeist was different in society, and I think we can look to a lot of what we think of as techno-utopianism or positive futures. There was a renaissance of this in the 60s. One of my favorite artwork series was done by NASA Ames: imagining human space colonies based on the state of the science, with artists hired to draft what the colonies would look like. During that Apollo era, people were super excited about science and what it meant for society, where it was going to take us. We were going to have the Jetsons, we were going to have flying cars, and
00:52:34
Speaker
we were just going to keep up the pace. Had we actually kept up that level of investment in the sciences and technology and that motivation, we might be living in a very different future today. But that mindset has existed before in our society, and it existed at a time that wasn't the rosiest in the world; it was a really heightened time of tensions. So there's this idea that when times are dark, it's naive to be hopeful. Actually, when times are dark and things are not working, the only way to get out is to imagine something radically different and start steering towards it, because the status quo and the way you've been doing things is clearly not working or taking you where you want to go.
00:53:21
Speaker
So a key to the Futures program is: how do we start to lay the groundwork to get more of us into the mindset of the techno-utopianism of the 60s, rather than the gloom and doom being pushed on us today? Yeah, I'm

Striving for a Positive Vision

00:53:36
Speaker
very glad you made that point, actually, because it's an objection I hear to some of the work the Futures team is doing: we have so many problems today, and we should be focused on those problems instead of trying to develop visions for what we want in the future. But I think it's right that we need something to aim at and something to be excited about.
00:53:55
Speaker
Do you think there's some danger in the fact that we're not good at specifying what we want? I'm thinking very naively here: if we cannot specify what we want to an AI that's trying to help us, how do we know when we've gotten what we want? If we can't give a very precise specification of our desires for the future, it's difficult to say at some point that we've succeeded. I'll grant you that humans are terrible at specifying what they want. They're terrible at prospection in general; it's really not our strong suit as a species. But we do know what problems we'd like to solve, and I think that is a very strong place to start: what are the things we wish were better? What are the things we wish were different in society?
00:54:42
Speaker
When you start there, you start solutioning on the barriers, the things exerting a negative impact on society. We want to get more kids educated. We want to figure out how not only to cure diseases, but to prevent age-related diseases in general and dramatically increase our lifespan. We want really clean, ubiquitous energy; we already have that with nuclear, but that's a separate conversation. How do we start getting people excited about these specific use cases, these specific problems, and using AI to help us solve them, so that we get into a positive feedback loop? The positive feedback loop is: we can work with technology to actually take these things on and solve them. And to what we were saying earlier about collaboration and cooperation mechanisms and decentralization, AI can also help provide the infrastructure to do these bigger Apollo-style moonshots in a coordinated fashion, in a way where data is shared, data is transparent, and data is audited.
00:55:57
Speaker
And those are the sorts of things we need to do to get where we want to go. So some of it is specifying what we want; I think that imagining exercise is really important to seed in people's minds what could be possible. But another angle is to just start with the problems we need to solve and get working there, and that also starts taking you on a positive trajectory. Something we have played with a lot at the Futures program is world building, and developing that methodology into a course: the nexus of world building, scenario planning, and forecasting to think about what futures we want and what the key levers are to get there.
00:56:38
Speaker
You imagine a world from just a prompt: it's 2045, we live with AI, we've solved these problems, it's gone well, tell us how that happened. Making people use these tools to actually architect how that happened turns out to be a really powerful motivator, because even if you look along your timeline and think certain things have really low probabilities, there is a viable path at the end of it. And just by architecting and planning out a viable path from the present to a future you want to live in,
00:57:15
Speaker
you get something that gives you hope. You say, okay, even if the probability is really low, what's the alternative? You fight for the thing you want to realize. Yeah, I think this would be a fantastic attitude for young people, college students, to have. What could our educational institutions do differently to instill this ambitious attitude, where people have a positive vision for what they want in the world? That starts with paying more attention to the good things technology has done for us in society, which is something I don't think is covered much in how history is taught, how modern life is taught. We just take everything that exists today for granted without actually thinking of
00:58:03
Speaker
the history of the things that enable our lives as they are. One of the more mind-boggling facts, which seemed obvious once I heard it, is that one of the key drivers of women participating in the workforce was the invention of the washing machine. It's obvious when you say it, but I had never thought of that as a technology that enabled a really interesting social change and unlocked a whole lot of productivity going forward. So I think getting people excited about tools and technologies, and highlighting how they have made our world better, is an important priority.
00:58:44
Speaker
I think it's also about instilling in folks a sense of responsibility to create the future. Nobody else is going to create it; it's on you. We all have a collective responsibility, and I think that's at the heart of FLI as an organization: a lot of the folks within it or adjacent to it are scientists and technologists by training. We're people who build things, and who, wearing our other hats, recognize that we have a really important responsibility to make sure this goes well and that the way technology is used in society goes well. And so I think instilling
00:59:28
Speaker
those ideas of responsibility around tech, but also the responsibility to build tech and to innovate, is really, really important. A program I really love on this front is FIRST Robotics, which operates both domestically and globally and teaches kids about robotics, responsibility, and the ethical development of technology. It's super cool and inspirational. So I think there's so much that can be done here. I think educating kids about AI is really important, and it's not part of people's curricula today. Curriculum development is slow and glacial; these old, curmudgeonly institutions take forever to update anything. But we need to be more flexible with something that's moving this fast, where every year a
01:00:20
Speaker
course's content and the values it teaches will probably have to update and change. This is key, and kids need to be taught it not just at the collegiate level but in elementary school. So creating a more techno-optimistic framework, and one that restores agency to kids, the sense that it's on them to build the future, are all things that could help take us in a better direction than today's education system does.

Call for Participation in FLI's RFP

01:00:51
Speaker
So for listeners who are with us all the way through this episode, maybe we should end by talking about what you can do if you've heard about this RFP on power concentration that we're running and you want to contribute. How can people test out whether they can contribute meaningfully here?
01:01:09
Speaker
The announcement of our concentration-of-power RFP will be online, and you can take a look at it and read it in depth. We have put a bunch of example project ideas there, but they're just ideas; we're open to a very broad range of approaches to tackling this. So we're really excited to see, hopefully, a multidisciplinary audience apply. If you're listening to this and thinking, I'm not an AI researcher, that's totally fine. There are a million different ways in, and we've touched on them throughout our conversation: economic approaches, information approaches, media approaches, advocacy and societal and democracy approaches, artistic approaches to imagining what we could do. There's room for everyone in this conversation about how we tackle this problem.
01:02:02
Speaker
A few things we want applicants to be really mindful of: proposals need to be grounded in reality and tractability and in the moment that we're in, and they need to be sensitive to the safety concerns around this. We're not looking for proposals that are dismissive of AI safety, or that don't take seriously the real ease with which guardrails can be removed; that piece needs to be taken into account in any particular approach. But we're really excited to see what happens. I think the pull model of innovation is a really interesting one, taking inspiration from organizations like XPRIZE that have done this really well. The idea is that if there's a problem in society, or
01:02:54
Speaker
a technology that we need that's not being addressed by the existing incentive structures, you put some money out there and see what people come up with. Just look at the list from their carbon removal prize, a giant RFP much bigger than ours, which had 20 teams from all over the world with vastly different approaches to removing carbon from the atmosphere. So no matter where you are, we're really interested; this is certainly open internationally. And we're mainly looking, ideally, for people who are either at nonprofit institutions or affiliated with them as well. So, yeah. Fantastic. Emilia, thanks for chatting with me. Thank you so much for having me, Gus.