
Breaking the Intelligence Curse (with Luke Drago)

Future of Life Institute Podcast

Luke Drago is the co-founder of Workshop Labs and co-author of the essay series "The Intelligence Curse". The essay series explores what happens if AI becomes the dominant factor of production, thereby reducing incentives to invest in people. We explore pyramid replacement in firms, economic warning signs to monitor, automation barriers like tacit knowledge, privacy risks in AI training, and tensions between centralized AI safety and democratization. Luke discusses Workshop Labs' privacy-preserving approach and advises taking career risks during this technological transition.

"The Intelligence Curse" essay series by Luke Drago & Rudolf Laine: https://intelligence-curse.ai/
Luke's Substack: https://lukedrago.substack.com/
Workshop Labs: https://workshoplabs.ai/

CHAPTERS:
(00:00) Episode Preview
(00:55) Intelligence Curse Introduction
(02:55) AI vs Historical Technology
(07:22) Economic Metrics and Indicators
(11:23) Pyramid Replacement Theory
(17:28) Human Judgment and Taste
(22:25) Data Privacy and Control
(28:55) Dystopian Economic Scenario
(35:04) Resource Curse Lessons
(39:57) Culture vs Economic Forces
(47:15) Open Source AI Debate
(54:37) Corporate Mission Evolution
(59:07) AI Alignment and Loyalty
(01:05:56) Moonshots and Career Advice

Transcript

The Impact of AI on Employment and Society

00:00:00
Speaker
If you have non-human factors of production and they become your dominant source of production, your incentives aren't to invest in your people. My concern here is that as we continue to build technology that is designed to replace rather than to augment, we move closer and closer towards a world where people just don't matter.
00:00:18
Speaker
One day you wake up to find that all of your colleagues are AI and the next knock at the door is booting you out too. If you aren't careful with that proprietary info, if you say, all right, lab A, I'm going to give you everything in my life to get moderately better ChatGPT results.
00:00:32
Speaker
And they don't lock this down for you. And they don't take extreme care to make sure they aren't going to train on it. You are one button push away from having someone hoover up that data and sell it to the highest bidder and use it to automate you out of the economy.

Understanding the Intelligence Curse

00:00:46
Speaker
Aligned superintelligence in the hands of one person makes that person a de facto dictator unless they choose not to be. And that is not a good outcome. Welcome to the Future of Life Institute podcast. My name is Gus Docker, and I'm here with Luke Drago.
00:01:00
Speaker
Luke, welcome to the podcast. It's great to be here. Thanks for having me. Great. So you have this essay series on the intelligence curse, and maybe we should just start at the very core of that and ask: what is the intelligence curse?
00:01:14
Speaker
Yeah. So I'd summarize the intelligence curse pretty simply. The idea is that if you have non-human factors of production, and they become your dominant source of production, your incentives aren't to invest in your people. And this sounds very abstract. What does it mean to have a non-human factor of production? And what does it mean that we can build things that actually replace us? And why doesn't this just result in AGI utopia?
00:01:38
Speaker
But I think we have some concrete examples. And one of the ones that we point to in the essay, and what we actually named the effect after, is the resource curse, where there are states that rely primarily on oil revenues, or derive a significant amount of their income from them, as opposed to investment in their people.

Economic Implications of AI Dominance

00:01:54
Speaker
And what you end up seeing is because investments in oil produce a greater return than investments in their people, those states oftentimes funnel money towards the oil investments as opposed to their people.
00:02:05
Speaker
The result of this is a worse quality of life for their people, who have much less power, because at the core, your ability to produce value is a core part of your bargaining chip in society. Yeah, so the worry here is that as we get more and more advanced AI systems, governments and companies will be incentivized to invest more in building out even more advanced AI systems as opposed to empowering workers and citizens.
00:02:32
Speaker
Exactly. Yeah, I guess one objection here that I hear from economists is just that if we look at previous technologies, we see that they basically increase wages and increase living standards, unevenly and with setbacks. But over time, we see increased wages and living standards.
00:02:51
Speaker
Why isn't the same just going to happen with advanced AI? Yeah. Well, I think there's a category distinction in what we're trying to do. The last thousand years of technology have been technologies that were extremely adaptive for humans, that have helped humans do new things.
00:03:06
Speaker
And they haven't encroached upon our core fundamental advantage, which is our ability to think and then do things in the real world. Obviously, during the Industrial Revolution, there were lots of concerns that replacing and automating large parts of physical labor would result in a world in which people didn't matter.
00:03:22
Speaker
But I think the actual outcome was a bit different because, of course, there isn't a machine that was produced in the Industrial Revolution that completely automated human thinking, the ability that's kept us at the top of the food chain.
00:03:33
Speaker
And if you look at the goal of a whole lot of companies in the field, you'll find that they stake their claim, their reason for existence, is to create technologies that can do everything that any human can do better, faster, and cheaper.
00:03:47
Speaker
And of course, the question then is, if it is the case that this allows capital to convert directly into results, removing the need for other people in the middle, why wouldn't companies just invest more and more money into this? I don't think it's Machiavellian.
00:04:01
Speaker
I don't think it's an evil plot by them. What I think instead is that if you have the opportunity to save 50% on your wage bill while also getting better, faster, more reliable results, most people are going to take that option.
00:04:14
Speaker
And so my concern here is that as we continue to build technology that is designed to replace rather than to augment, we move closer and closer towards a world where

Income Inequality and Economic Mobility

00:04:24
Speaker
people just don't matter.
00:04:26
Speaker
And then, of course, you're reliant on other forces. You're reliant on government to make sure that you still have a high quality of life when you can't produce it for yourself. I think it's a very precarious situation to be in.
00:04:37
Speaker
If we think about pensioners today, for example, they don't produce much for society. In fact, they are, in a sense, a draw on society's resources, but they're still protected.
00:04:50
Speaker
Why couldn't we imagine an expansion of that system? This is kind of the obvious solution that comes to mind for people. We will have universal basic income and we will have protection of individual rights.
00:05:04
Speaker
And so we will maintain agency and relevance in an age of advanced AI. So I'd end up arguing something like this: the core proposition is that your economic value is an important part of your political value.
00:05:18
Speaker
We've seen in the history of democracies that oftentimes they start at the moment where there are diffuse actors who have varying amounts of capital who need to find ways to settle disputes without violence. The emergence, for example, of British democracy and the Magna Carta came because there were lords that had power that wasn't equivalent to a king necessarily, but who sure had a lot of influence.
00:05:39
Speaker
And that came from the material possessions that they controlled. This necessitated free courts and some sort of way to solve disputes in parliament, and that evolution kept moving forward from there.
00:05:50
Speaker
And we continue to see that economic liberalization is oftentimes a precondition for the democracies we really care about. Now, there are non-democracies that are fine places to live that don't wildly trample on human rights. But of course, we know that there's an extremely strong correlation between governments that respect your rights and enable you to be prosperous and governments that are democratic.
00:06:12
Speaker
These things aren't one-to-one, but they're pretty damn close. And so the concern that I have here is that as we level the underlying economic structure that creates these bargaining chips that put us in power, we end up reducing those chips.
00:06:26
Speaker
Pensioners are a fantastic example here because, of course, a pensioner isn't someone who appears and never works for the rest of their life. Pensioners have 40 years of working extremely hard, paying into a system, and then being active members of society. They then have a bargaining chip so that in the last 10, 20, 30 years of their life, they get this exemption.
00:06:50
Speaker
It's because of the system that we have built that this is stable. And I would also add that, of course, in the history of the United States, for example, we treat our retired folks way better today than we did before things like the New Deal, which involved mass amounts of unrest and workers trying to use their bargaining chip.
00:07:10
Speaker
So I'm very concerned about the world in which we all are pensioners forever with no way to actually bargain, at the mercy of the next election for what happens in our subsequent years.
00:07:22
Speaker
Which economic metrics should we be looking at if we want to try to confirm whether the intelligence curse is actually happening, or disconfirm the hypothesis? There are a couple of things that I take a look at. Income inequality seems quite important.
00:07:38
Speaker
We talk about sudden takeoff in AI, where there's suddenly a foom and all of a sudden AIs are way, way smarter than us. I think you might want to also look for this in economics. Is there a sudden moment in which capital immediately begins compounding because every dollar you put into a system produces some sort of an outbound return?
00:07:56
Speaker
And if you see this kind of rapid accumulation, where you remove talent from the equation and suddenly capital just begets more capital, then the actors who already have lots of capital can really

AI's Role in Job Market Dynamics

00:08:06
Speaker
rapidly accumulate. Now, it's already the case that having capital makes it easier to get more capital, but there are a bunch of boundaries, a bunch of restrictions, and outside players can still win.
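To make that compounding worry concrete, here is a minimal sketch of what "capital begets capital" means once labor drops out of the loop. The notation is illustrative, not from the essay series:

```latex
% Illustrative sketch, not from the essay series: capital compounding
% once labor is removed from the production loop.
K_{t+1} = (1 + r)\,K_t \quad\Longrightarrow\quad K_t = K_0\,(1 + r)^t
% With total wages wL roughly flat, the labor share of income
\frac{wL}{wL + rK_t} \;\longrightarrow\; 0 \quad \text{as } t \to \infty,
% so actors with large initial capital K_0 pull away geometrically.
```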
00:08:15
Speaker
So outside of mass income inequality, I'd also take a look at things like economic mobility. Is it the case that people who aren't rich can move upwards in society?
00:08:26
Speaker
The United States, of course, is a very famous society for having this as a marker of its success, that you can come from anywhere, start from nothing, and win. It doesn't mean you're guaranteed to win, but there's always a pathway.
00:08:38
Speaker
And I think if those pathways start to close, that would be a very alarming signal here. Now, I presume we'll get into pyramid replacement here. And some things I'd really want to look at as well include rising unemployment rates, especially among the earliest age brackets that are just entering the workforce.
00:08:56
Speaker
But those are a couple of the metrics that I'm taking a look at here and that I've advised others to look through. Yeah, actually explain that concept for us, if you would, pyramid replacement. What does that look like?
00:09:08
Speaker
Yeah. So at the beginning of the paper, or rather the beginning of the series of essays, we say that it's pretty likely that if the technological trend continues, you're going to lose your job. And we try to tell a story of how we think that's going to happen. And we start with the example of the multinational white collar firm.
00:09:25
Speaker
These are very large companies, oftentimes, that do a whole lot of work. Every year they hire a new class of analysts or a new class of entry-level employees whose goal is to work their way up the pyramid. And they hire a lot of them. They spend a whole lot of time recruiting from the top universities.
00:09:40
Speaker
They show up on campus. And their goal is to create this pipeline of talent because the company has a lot of people at the bottom and a few people at the top. But as people at the top leave because they retire or because they find other opportunities, you need a funnel of leadership.
00:09:56
Speaker
And our claim is that AI first makes it very easy to replace the people at the bottom. Now, there's actually a paper that came out, I believe yesterday, starting to show some empirical evidence for this. In some fields, AI is augmenting, but in others, it's simply replacing.
00:10:11
Speaker
And what we've seen in these targeted fields, I can't recall each one off the top of my head, but obviously software engineering is one of them, is a shrinking in the number of job postings, in the number of job offers, and in overall employment in the 22 to 25 year old bracket in these fields.
00:10:28
Speaker
That's exactly what you would expect if it is easiest to automate the entry level work first. Our claim then is that AI is going to move up the pyramid. As it gets better, and as it gets more and more agentic and capable of doing more tasks with long horizon planning, and as companies are able to capture more and more of that knowledge for themselves, what they're able to do is move up the pyramid, replacing people bottom up, as opposed to a kind of middle out or top down replacement.
00:10:54
Speaker
One day you wake up to find that all of your colleagues are AI, and the next knock at the door is booting you out too. We think this could happen at every level of a white collar firm. Now, there are a bunch of exceptions here. Obviously, it'll work differently in some industries.
00:11:07
Speaker
Some sectors within a company are going to be easier to automate than other ones. And I think this is not exactly how it works in blue collar work. Speculatively, I think blue collar work might look more zero to one, as in there aren't the robots required to do lots of blue collar work.
00:11:22
Speaker
And then there are. I'm less familiar with, and have spent less time in, the literature on the structure of blue collar companies. But my understanding is there are a lot more people doing a similar job. It's a bit less pyramid shaped, a bit more flat, with a small pyramid at the top.
00:11:39
Speaker
That's a pretty disastrous situation if robotics is able to rapidly automate those jobs. Yeah, you might even imagine that the managers of a bunch of physical workers or blue collar workers might be replaced before the workers themselves. So you could imagine systems that automate invoicing and scheduling and so on being handled by AI before we have fully functional robotics to actually do the blue collar labor.
00:12:09
Speaker
I do wonder, if we're talking about the trend already happening, I mean, this is a quite complex question, but how do we know that it's happening because of AI? Say there are fewer job postings related to programming.
00:12:26
Speaker
Could that be because of a general market trend or interest rates or something different than

Human Advantage in an AI World

00:12:32
Speaker
AI? Yeah. So I'll flag, the paper that I'm talking about is one that I've looked at but not spent a ton of time with yet. So I don't want to speak as an expert on that paper. I'd love to link that in the description as well. And I'll spend some time on that myself.
00:12:44
Speaker
But that particular paper, if I understand it correctly, works to isolate that, to try to understand what the mechanism was here. And my best guess here is you want to look at a couple different factors.
00:12:56
Speaker
One, you're going to want to see what industries are being affected. We have a pretty good sense as to what tasks are automatable right now and what tasks aren't. We know, for example, that software engineering is extremely automatable at its base level.
00:13:10
Speaker
And so you would expect to see, if it's AI, that the tasks that we know were easier to automate are the ones that are falling, while other ones are being augmented or much less affected. And my understanding, again, I haven't read the entirety of the paper, I have just skimmed the initial findings there.
00:13:24
Speaker
My understanding is that is roughly what you're seeing. And if that's not the case, what I'd be looking for here is: based on existing and projected AI capabilities, which sectors are seeing changes in employment?
00:13:35
Speaker
And does that match their expectations? Yeah, yeah. Actually, let's dig into that a bit more and think about which sectors or which jobs or tasks would be protected from automation. And I've suggested some mechanisms of protection that we can talk about, where, for example, if you're a lawyer, there might be kind of legal restrictions on replacing you.
00:14:01
Speaker
I don't think we're going to see an AI judge employed by the government very soon. Or at least, that's probably basically the last job to be automated.
00:14:13
Speaker
So how do you think about legal restrictions to automation? And could those become more important as we face this increased market pressure to automate? Derek Chang, who's at the Windfall Trust now, but was at Convergence Analysis, or Convergence Research, one of those. I think there are a lot of things with similar names in this space.
00:14:33
Speaker
Derek has a really good piece on what jobs are likely to be more and less resilient to automation. And there are some of the ones that you expect. Obviously, things like physical laborers are more resistant right now. I think there was a story for 50 years that automation hits physical labor first and mental labor second. And actually, we're seeing the exact opposite, given the way we're making progress in capabilities.
00:14:53
Speaker
I think your judge point is actually quite interesting to me, and I think it's correct. The jobs that have strong legal protections are going to be harder to automate. Now, of course, that doesn't mean that people who are in those jobs aren't going to automate their own work. And this is both an example of opportunity here and also an example of some sort of gradual disempowerment, where you just automate away to a generic model that makes decisions on your behalf. I think it'd be a bad world, potentially, if every judge was using the same AI model to make the same decisions. Great, there is a human judge, but it's the same prompt, same output.
00:15:27
Speaker
At the very least, you'd want some more diversity that represents the actual beliefs, feelings, and understandings of the judge involved. Other roles that I think make sense to talk about here: lawyers, kind of.
00:15:38
Speaker
I think the lawyers who are at the partner level are going to be very easy to not automate. Paralegals are a different story. And I think entry level law work is an interesting one here because, of course, for your first year lawyers who've just been hired, their job is mostly grunt work.
00:15:53
Speaker
And if a firm can hire half as many of them, it might be the case that on paper, it's hard to automate lawyers, but the law firms who have lawyers working there automate their own work to such a degree that either, A, you get an abundance of new law firms arising, or, B, larger ones continue to accumulate capital without hiring new people.
00:16:12
Speaker
And I think an important question for what happens next is, at that moment of initial automation, where a whole lot of entry-level jobs get cut and headcount starts to be reduced, what happens next?
00:16:22
Speaker
Is it A, that large firms continue to grow and monopolize the industries? Or B, that we get an abundance of smaller firms that allow for more diverse economic output? Rudolf and I are much more excited about that second world than that first one, the one where this creates a bunch of opportunity, but I don't think it happens by default. I think we have a lot of work to do to get there.
00:16:44
Speaker
Yeah, yeah. It's actually an interesting point that you could see a job such as being a judge staying and not being automated formally, but in practice being automated because the judge is using an AI model to make educated guesses about cases.
00:17:01
Speaker
And so that would be a way for society to maintain the formal structures we have today without actually thinking about which functions in society we're interested in automating.
00:17:12
Speaker
And so I think that would be quite a bad situation to end up in, because then we haven't actually grappled with the question of whether we want to outsource the profession of being a legal judge to AI.
00:17:28
Speaker
Yeah, exactly. And I think one of my real concerns there is, again, that same model. you know If everyone's using GPT-7 and they're calling that thing in to do all of their judge work, then whatever flaw exists in GPT-7, that's now your judge.
00:17:41
Speaker
And I think my concern isn't just have we automated the task, but with what information are we automating it? Yeah, yeah. We also have perhaps another barrier to automation: judgment in a broader sense, and taste. So for example, you can have hundreds of AI models generate whatever you want, whatever piece of writing or imagery you want.
00:18:03
Speaker
But judging what is actually interesting to people is something that's perhaps more difficult to automate. Do you think we might remain employed because we have human judgment and because we have taste?
00:18:17
Speaker
Or you think that's ultimately also automatable? So it really depends on the pace and progress of capabilities and exactly what we aim for. I am much more excited about a world where that is a strong, durable human advantage, that diversity of taste.
00:18:32
Speaker
One example here, are you familiar with Nomads and Vagabonds? He's an artist on Twitter. He actually did the art for the Intelligence Curse, did the art for Workshop Labs. My understanding, after working with him a bunch, is he takes a stable diffusion model and fine-tunes it on his own work and the kind of work that he's aiming for. He's gotten very, very good at prompting it, and he produces these absolutely brilliant results. I just cannot get that kind of result out of a model. I don't have the taste for it. I don't know what kind of data should be going in in the first place.
00:19:03
Speaker
I don't know how to write my prompts like he does. And I've worked with him before, because obviously we worked together on the Intelligence Curse. I know he gets hundreds of outputs and yet he releases a very select few. I think that's a fantastic example of how you could use AI to be an exceptional tastemaker.
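As a rough illustration of the workflow described here, the sketch below uses the open-source diffusers library; the base model ID, LoRA path, and prompt are hypothetical placeholders, not the artist's actual setup:

```python
# A minimal sketch of the workflow described above (not the artist's actual
# pipeline): load a Stable Diffusion model, apply LoRA weights fine-tuned on
# one's own art, generate many candidates, and curate by hand.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("./my-style-lora")  # hypothetical weights trained on your own work

# Taste shows up twice: in how the prompt is written, and in which outputs survive.
images = pipe("a harbor at dusk, in my own style",  # placeholder prompt
              num_images_per_prompt=8).images
for i, img in enumerate(images):
    img.save(f"candidate_{i:02d}.png")  # generate hundreds, release a select few
```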
00:19:18
Speaker
I think his judgment is really exceptional there. It's still his work going in and his work going out. And because of this new medium that he's using, it's been one of the best examples I've seen of an artist fully embracing new technology while still maintaining their own distinct style and taste. And

Maintaining Human Relevance

00:19:33
Speaker
I don't think...
00:19:33
Speaker
anyone could look at the art that he's outputting and say it's anyone's but his own. That is one of the things that I'm really excited about moving the technology towards. But I don't think that's the goal of the major companies. Again, this definition that OpenAI uses of AGI is predicated on doing most economically valuable human work.
00:19:54
Speaker
That is a very different game than the, oh, we're going to do some economically valuable work, but it's all going to be tools in your hand that's going to allow you to change and shape the world. That's a different ballgame to do all of it versus to do some of it.
00:20:07
Speaker
And the target right now is total automation. It's a very, very different outcome. Yeah. One barrier to automation that you mentioned in the essay series is local and tacit knowledge.
00:20:22
Speaker
And this would be knowledge that's spread out, that's difficult to formalize in a way that you can train models on. And it's knowledge that's perhaps shifting constantly. And so it intersects with taste and judgment in a sense.
00:20:39
Speaker
Is this local and tacit knowledge a way for us to remain relevant?
00:20:50
Speaker
So this is part of our belief at Workshop Labs. I think if I summarized our thesis in two sentences, it's that we believe that the bottleneck to long-term AI progress runs through high-quality data, specifically data on tacit knowledge and local information.
00:21:06
Speaker
That's, first, the skills that you accrue throughout doing the things that you do. That's really hard to digitize, not because it's impossible to digitize, but because it's hard to know where to get it, because you have it.
00:21:17
Speaker
And second, local information, the kind of things that you see around you, the opportunities that you can spot because you are an embodied person with access to real-time information about everything in your sphere. Right now, the labs really want this data.
00:21:31
Speaker
It's why there's a you know a rush to integrate into your browser. It's why there's a rush to build these bespoke RL environments where an expert gets involved in helping to create a model that's really good at this one task.
00:21:43
Speaker
But you have a distinct advantage, which is that right now you have that data. The kind of data that is valuable to AI progress is in your pocket and on your laptop, and it's in your day-to-day life.
00:21:54
Speaker
So our proposition is, why don't we take that data and put it to use for you, entirely privately, so that you don't have to trust us. We just can't train a model on it and sell the data to your boss.
00:22:06
Speaker
We can't train a larger model to automate you. We can take an existing model and dramatically tune it towards your work, lock it down so that only you can use it, and let you put it to work.
00:22:17
Speaker
I think you should have control over the tools that augment you and you should reap the benefits of the data that already exists in your world. And that's what we're aiming to do here at Workshop. Yeah, I actually think you could see a future in which there's a tension between leadership at a company and the workers at that company, where the workers are unwilling to give up their tacit and local knowledge for a model to train on. And company leadership might be quite interested in gathering that data and training on it so that they can reduce labor costs.
00:22:57
Speaker
So is that perhaps the new tension in the economy? So I think that's one of the tensions, but one thing that I think people oftentimes forget is that 50% of Americans work at a small or medium sized business.
00:23:11
Speaker
These are not the kinds of companies that have hundreds of people they can mine surface level data from. These are the kinds of companies where most people on the team are doing something that actually matters. As in, if they didn't show up for work, something wouldn't work.
00:23:25
Speaker
And because of that, they have lots of specific information about their processes that is really important. I think the outcome I'm excited by is one where AI shifts the direction away from extremely large companies. Because look, candidly, a lot of those tasks are automatable today.
00:23:42
Speaker
But humans retain this advantage, or are able to put their existing advantages and that embodied experience to use, and are able to train models that can help them compete much faster and better, creating an explosion of small companies and small enterprises that really understand what's going on locally. That ultimately helps break the efficiency gap we usually see, where large companies are more efficient because of their scale, because we can put so much intelligence to work for the average person.
00:24:10
Speaker
But I think this really, really means that those important things that make you competitive just shouldn't be given away. I'm a strong believer that data is kind of the new social security number. And I read a piece about this a while back.
00:24:21
Speaker
The thing that you got for caring about privacy in 2015, candidly, was worse ads. There are some exceptions, right? Dissidents obviously need to care about privacy. People in authoritarian countries who are talking bad about the government need to care about this.
00:24:37
Speaker
But for the vast majority of people in the vast majority of cases, you got worse ads. I think in the next 10 years, if you aren't careful with that proprietary info, if you say, all right, lab A, I'm going to give you everything in my life to get moderately better ChatGPT results, and they don't lock this down for you, and they don't take extreme care to make sure they aren't going to train on it,
00:24:57
Speaker
You are one button push away from having someone hoover up that data and sell it to the highest bidder and use it to automate you out of the economy. That is a much different situation for the value of your data. And I think people would do a whole lot better if they'd start caring about that soon.
00:25:12
Speaker
I don't think we're there quite yet, but part of the reason that we care so much about privacy at Workshop is because we are aiming at creating a solution that is able to guarantee these things so that we can't use that data to automate you.
00:25:24
Speaker
On a societal level, what you might get from handing over your tacit knowledge is a slightly better AI model. But on a personal level, if you're a maths PhD student on a low salary, you might get offered hundreds of dollars per proof that you provide with a step-by-step solution to train a model on.
00:25:49
Speaker
That is quite an economic incentive. Do you think we as a society will be able to overcome this incentive to give up our data when the individual incentive is so strong?
00:26:05
Speaker
This is part of the arms race, and it's why we are laser focused on delivering models that aren't just kind of okay and private, but are better at your existing work than an off the shelf model because of the data that they have.
00:26:18
Speaker
And because of this, your work improves. I don't think it's the case that you can win this game by walking in and saying, look, we have worse tools and we can't pay you, but don't worry, it's private. People don't make decisions like that.
00:26:30
Speaker
The answer has got to be that the default tool that you want to use cares about what's going on here. And I think Apple is a fantastic example here, where Apple at its bones is what I would call a privacy-second company.
00:26:41
Speaker
For very few people is the selling point of Apple, oh, this thing is entirely private. But Apple understands that, especially in the United States, they are the infrastructure through which almost all modern communication happens.
00:26:55
Speaker
And so they understand they have a responsibility to protect user privacy. And so unlike many other companies, they have locked everything down to ensure that your messages are private, that your phone calls are private, that your interactions are private, that your device doesn't get a virus.
00:27:08
Speaker
And they've gone through painstaking efforts so that you know that device is always reliable and always works for you. Anthony Aguirre at FLI has a paper on loyal AI assistants. And I know he talks about it as well in Keep the Future Human.
00:27:23
Speaker
But you have got to know that the model that is helping organize and orchestrate your life works for you, not for someone else. And that means it has to be good at working for you. And it has to be verifiably working for you.
00:27:35
Speaker
And I think that's how we plan on overcoming some of these incentives. I don't think the labs are going to pay every single human on earth a couple hundred dollars to gather up all their data. And I think that might be the scale of what they need to do to actually beat this kind of incentive.

Societal and Political Challenges

00:27:47
Speaker
So I think by delivering an actually better experience for users, and then secondly, layering on extraordinary protections here, we can both serve customers well and fulfill our impact. How would we guarantee that the data that I'm providing remains private? Is there a way to do that without just trusting Workshop Labs?
00:28:09
Speaker
So I'll have more to preview on this soon once we launch here in September and October, with a couple of blog posts that will walk through what we're working on here. What I can say for now is that, as an industry, there are now increasingly ways to do this.
00:28:23
Speaker
You can do things like encrypting all information in transit, decrypting it within what we call a trusted execution environment, using NVIDIA secure enclaves, and then attesting to the code that is running so that you can see that nothing is being extracted from it. Then you could store the weights of a model, for example, also encrypted.
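As a conceptual sketch of that flow, not Workshop Labs' actual implementation: the encryption below uses the real `cryptography` library, while the attestation check is a hypothetical placeholder for the hardware-specific step.

```python
# Conceptual sketch only. Key exchange is elided; the attestation function is
# a stand-in for vendor-specific checks (e.g., NVIDIA confidential computing).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def client_encrypt(data: bytes, key: bytes) -> tuple[bytes, bytes]:
    """Encrypt user data before it leaves the device (in-transit protection)."""
    nonce = os.urandom(12)
    return nonce, AESGCM(key).encrypt(nonce, data, None)

def verify_attestation(quote: bytes, expected_code_hash: bytes) -> bool:
    """Hypothetical: verify a signed measurement proving the enclave runs
    exactly the published code, so nothing can exfiltrate the plaintext."""
    raise NotImplementedError("hardware/vendor-specific attestation check")

def enclave_process(nonce: bytes, ciphertext: bytes, key: bytes) -> bytes:
    """Runs inside the trusted execution environment: data is only ever
    decrypted here; model weights would likewise be stored encrypted and
    decrypted only inside the enclave."""
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    return plaintext  # ...fine-tune or run inference on it here
```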
00:28:41
Speaker
Got it. If we move back to the intelligence curse for a bit here, you mentioned decreasing social mobility as an indicator of the intelligence curse happening.
00:28:58
Speaker
Perhaps you could sketch out what a bad scenario looks like here. What does it look like if we have a more static society with lower social mobility, where capital is the main driver of progress, but that progress is not made by a set of diverse actors, it's made by companies that are larger and larger? What does that kind of society look like?
00:29:27
Speaker
So I think there are a couple of examples here, but I'll just kind of tell the story through the perspective of one guy. Let's say I'm a college graduate in the year 2030. I've graduated from college.
00:29:41
Speaker
I'm struggling to get a job. I, for some reason, studied CS. I'm not sure why I did that, but you know, in 2026 it wasn't obvious what was going to happen. So I've woken up in 2030 and I cannot find an entry-level job. I also couldn't find internships, maybe one or two companies here and there, but on the whole, it's just way cheaper not to get me involved. Okay.
00:29:58
Speaker
So I can't get a job. I'm relying on unemployment, which is increasingly strained because I'm not the only undergraduate who can't get a job. A whole lot of undergraduates can't get a job. Meanwhile, Microsoft has published record earnings because they've been able to halve their expenditure on employees and double their output.
00:30:15
Speaker
This is exciting for a lot of reasons, but remember that in the US, corporate taxes are a very small part of the federal budget; roughly 50% of federal tax revenue comes from income tax.
00:30:28
Speaker
So we have a smaller and shrinking income tax base because fewer people are making that income, while companies are posting record profits. And of course, they have the kind of money it takes to evade those taxes as well.
00:30:38
Speaker
So our social safety nets are increasingly strained. Unrest is increasingly popular. People are very upset. They have a lot of time on their hands. The thing they do is protest, or they get very upset. And the result of this is our social safety nets just stop working.
00:30:52
Speaker
They're not able to keep up with the strain. We have to reduce payments. We have to make fiscal cuts, in the name of tightening our belts and pulling ourselves up by our bootstraps. And in 2040, a whole lot of people just aren't employed.
00:31:04
Speaker
And there was a battle. There was a political debate about what we would do, and we passed some sort of UBI for a while, but that UBI wasn't sufficient for the kind of standard of life that you would expect.
00:31:16
Speaker
And it's increasingly unstable. And of course, now we have a couple of companies who are really, really powerful. And those couple of companies are increasingly realizing that they'd be better off if governments weren't getting in the way all the time asking for things.
00:31:29
Speaker
And so if you look at like the Tom Davidson coup paper about how an AI or an individual armed with AIs could take power, you've got increasing social unrest, instability in institutions. This is a ripe environment for someone to come in and disrupt an existing order.
00:31:43
Speaker
Maybe that happens democratically. Maybe it happens non-democratically. But the result is that suddenly, not only are you less economically safe, but you're also in a situation where the rights you took for granted, which would let you regain your economic stability, are now out of grasp.
00:31:57
Speaker
They're harder for you to get. That doesn't sound so great. Isn't it the case that among companies, say Microsoft and Google and NVIDIA and perhaps OpenAI and so on, there will be fierce competition in providing products for consumers at the very top? So even if you have the main drivers of the economy being
00:32:22
Speaker
capital deployed by massive companies, you would see innovation from competition, and you would see better products and services.
00:32:34
Speaker
Yeah, potentially. One of the ways that you can break the intelligence curse, one of the necessary components, is commodifying the intelligence layer. If it is the case that one or two or three players have total access, a monopoly on intelligence, it's then the case that they can continue to raise the rents. I saw a tweet recently that said something like, if you are a wrapper around a commodity, you're a landlord.
00:32:59
Speaker
And if you are a wrapper around a monopoly, you are a renter, and you are totally at the mercy of the monopoly continuing to set your rates. And so a world in which there's prolific, cheap intelligence, and your job is to specialize it into the thing that you do, that's a better world to be in.
00:33:17
Speaker
But I think the goal of the labs is to get this recursive self-improvement and just take off here. And in that kind of scenario, that's a very different game. That's one player that's won, or a couple players that have won.
00:33:29
Speaker
Now, I don't think commoditizing intelligence fixes the problem entirely. But I do think it's a necessary precondition to breaking this intelligence curse. Yeah. You mentioned Microsoft posting record profits and so on.
00:33:41
Speaker
Perhaps a naive question here is to ask, who are they selling to in this world? If the college graduate doesn't have a job, who are they actually selling to? Which services and products are they providing? So I feel bad that I'm picking on poor Microsoft here.
00:33:53
Speaker
I don't know if they're the right people to pick on. But, you know, I don't mean it, Microsoft. It's not you specifically. I just picked the first tech company that came to mind. But let's go a bit broader.
00:34:04
Speaker
Who are the companies selling to? I think we talk about this in the piece, but the core answer here is probably: to each other. The B2B environment is quite large. And it is not necessarily true that there has to be what we now call the consumer level in a technology space. A whole lot of companies get by just fine selling to each other.
00:34:24
Speaker
I think you can expect that to continue to occur across a variety of areas, especially as the core fundamentals become more important. These are primarily land, compute, energy, and intelligence. And the more important those get, the more important the businesses that can provide them become. Of course, governments are other possible clients.
00:34:40
Speaker
But I think it is not the case that you have to have this vibrant consumer-style economy that we have today. I think this world has way fewer Starbucks. Sorry to pick on them.
00:34:51
Speaker
I think it's got way fewer cafes and way fewer phone cases, but it's probably got a whole lot more data centers. And you can see labs trading with each other, AIs trading with each other, providers trading with each other, in this increasingly closed loop.
00:35:04
Speaker
Yeah.

Learning from the Resource Curse

00:35:05
Speaker
The intelligence curse is kind of a riff on the resource curse. Are there any lessons we can take from how countries have dealt with the resource curse in trying to deal with the intelligence curse?
00:35:18
Speaker
Yeah. So the resource curse is not guaranteed doom. It's a curse, but it's breakable. And there are, of course, great examples of countries that did break it. The obvious one here is Norway. Norway is a state that has a sovereign wealth fund. It is fueled by oil revenues.
00:35:33
Speaker
It does have a real economy on top of that. And I think one of the things to be careful about in this comparison is that, of course, oil is not a one-to-one replacement for all human labor. It's a very tempting investment target if you already have a lot of it. You still need humans somewhere in the chain, and you can get a more diverse economy. More diverse economies tend to beat these oil states in direct comparisons, but it's a very tempting curse.
00:35:54
Speaker
But what happens in Norway? Norway is, by many, many metrics, one of the best countries in the world to live in. Excellent education, excellent social services, really stable government, really democratic government.
00:36:07
Speaker
How does this happen? Well, we use some of the quotes from officials at the time, and we looked at some of the case studies in the paper. But a core thing here is Norway had extremely resilient institutions before the resource curse was possible. Before they discovered oil, they had an excellent civil service that was really good at understanding what to do when this happened, and a very low corruption society.
00:36:30
Speaker
The question for me is, do we think we currently live in a world with excellent institutions and exceptionally low corruption? I don't think so. I think basically every American that has looked at our government has said something here is fundamentally broken.
00:36:43
Speaker
And it's been that way for decades. And it seems like every time we think we get a reformer in, what we get is increasing brokenness. I don't think we're in a situation right now where we have selfless members of Congress and extremely resilient institutions.
00:36:57
Speaker
And I think what it's going to take to withstand the pressures, if you actually get total automation, is stronger. It's more resilience than you would need to withstand the kind of oil pressures here. Of course, another thing going for Norway is that there is still room for a dynamic human economy on top of the oil. And so you can reinvest that money.
00:37:14
Speaker
Saudi Arabia is a great example of this. As Saudi Arabian officials have become increasingly concerned that we are near peak oil and that renewable energy is going to be increasingly the way of the future, they are trying to invest their petrodollars into creating a more sustainable economy, and not capital-S environmentally sustainable, just a more dynamic economy that attracts large businesses as well.
00:37:36
Speaker
Now, of course, there's an important question here. While the economics are now starting to move towards democratizing, you'll notice that these states I'm mentioning here that are sometimes cited for high quality of life, Saudi Arabia and the UAE, have high quality of life for certain kinds of people, for people that are economically important to the state.
00:37:56
Speaker
But of course, they also rely on an underclass. And in Saudi's case, I wouldn't say it's the beacon of gender equality in the world. For half the population, I wouldn't say those freedoms are well afforded. Now, as Saudi Arabia in particular, and I want to zone in on them, has moved towards this more diverse economy, they've also concurrently started liberalizing their gender relations. I mean, under MBS, there's been, I'm not going to call it heaven or anything, but there's been a real effort to somewhat liberalize this relationship in an otherwise pretty conservative society.
00:38:28
Speaker
It is not an accident that these things are happening concurrently. And I think one of the things you should be wary of is arguments that, well, we're going to centralize all power in the hands of a couple of actors. We're going to automate the entire economy, but the incentives are going to exist for the state to really care about you.
00:38:47
Speaker
The example that we have of a state where this is true is Norway. In other states, if you're not economically useful, it's a bit harder of a sell. It's not always true. There are exceptions. We talk about a case study in Oman, where there was a credible threat of revolution.
00:39:04
Speaker
And this helped force the state to dole out its rents. The argument is that the rentiers would like to have all of the rents, but they also really want to remain in power and continue to get some rent. And so if it's cheaper for them to capitulate than to lose, well, then that's an easy out for them.
00:39:23
Speaker
But of course, when we're talking about AI that can automate every job, we're also talking about the automation of repression and increasing surveillance.

AI's Influence on Governance

00:39:30
Speaker
As we make things more legible, it's easier for governments to trend towards this despotic realm where they can also put down dissent and prevent the kinds of forces that would otherwise force states to capitulate.
00:39:42
Speaker
So by increasing the state's ability to such a dramatic degree, you have this moment where states are very weak, and then once they're able to automate repression, they're suddenly very strong. In both outcomes, you risk losing the ability for democratic processes to work.
00:39:57
Speaker
Do you think we'll be able to shape the future economy using our culture, using our values? Or do you think that what matters most in the end is the underlying features of AI as a technology and the economic incentives that it causes?
00:40:17
Speaker
Yeah, incentives are a powerful thing, but one, they're not predetermined, and two, they're not ironclad. We have so many examples in history of great people defying incentives. I mean, I can just rattle them off. Washington deciding to step down, becoming the great Cincinnatus and not making himself king, is one obvious example here, where a leader looked at the incentives, looked at his ability to gain power, and said,
00:40:44
Speaker
no. And oftentimes, I think one of the ways to reconcile structural views of history and great man views of history is that these structural forces set up the incentives, but individuals can then defy or alter those incentives and make different choices.
00:40:59
Speaker
Incentives aren't law, but they are really powerful. And you want to align your incentives so that you're not hoping that every time a bad thing could happen, you are totally reliant on the character of the person in power such that they ignore every incentive in front of them.
00:41:13
Speaker
We talk about this in the paper. We said that economic forces are a predominant force here and a very powerful force, and that societies are extremely exposed to these incentives. But there are other things that shape their values as well.
00:41:25
Speaker
Cultural forces are very powerful, and oftentimes countries, or societies, make decisions in favor of their culture that are culturally good for them, even if they're economically bad. The existing power dynamics that we have also enable this. One example here: Brexit is an obvious example of a country's population choosing a thing that is probably against their economic interest for a different value set.
00:41:47
Speaker
And I'm not commenting on the merits of that debate. I'm simply saying that there was a strong economic argument on one side and an argument about sovereignty on the other. And that sovereignty argument won the public, even if it failed to persuade their elites.
00:41:59
Speaker
I'm not saying that every outcome should be like Brexit, but I'm saying that this is the kind of thing where you actually can make different trade-offs. But of course, there's that very famous quote, I think it's Charlie Munger, that says, show me the incentives and I'll show you the outcome.
00:42:13
Speaker
And I think you know if you have the opportunity to move those incentives in a positive direction for humanity, you really should.

Technological Solutions to AI Risks

00:42:22
Speaker
One way to do this is to think about which technologies we want to develop first and which technologies we want our most talented people to work on. We can talk about differential technological development.
00:42:34
Speaker
So if you look at the landscape as it is now, which technologies are currently undervalued? Where should we be pushing such that we can change the incentives that the technologies create?
00:42:47
Speaker
So I'm biased, but my company seems to be doing a pretty good thing here. And obviously, we're not in stealth. We've announced that we exist. We've got a one-pager of what we're doing, but no one's seen the thing we're working on yet. This fall, we're very excited to roll that out and really show people what we're working on here.
00:43:03
Speaker
But I think there are a couple of categories. We walk through three in the piece. One, and this is kind of counterintuitive, we talk a lot about these defensive acceleration technologies.
00:43:14
Speaker
The idea is that you actually have to mitigate AI's catastrophic risks in order to get over this barrier. And the reason for that is because AI's catastrophic risks provide a very good reason to centralize it in the hands of a couple of people.
00:43:27
Speaker
It is true that by default, AI could be extremely dangerous. It could be extremely powerful and extremely dangerous. It could make it easier for actors to develop bioweapons.
00:43:39
Speaker
It can make it easier for random people to do bad things. And governments and companies are going to use those as credible arguments, real arguments, to centralize this intelligence and decommoditize it, to have a couple of actors who have dominant control over it.
00:43:54
Speaker
And of course, the downside of that is we know that the more we centralize this into the hands of a couple of people, the more it looks like a monopoly instead of a commodity, the worse off regular people are likely to be in the long run.
00:44:08
Speaker
So what we want to do instead here is de-risk the technology fundamentally. If we're going to build it, and I'm not saying that we should, but if we're going to build it, you should make sure that it's safe. And I think there's been this long-running argument in the AI safety space that doing this is not possible or a waste of time.
00:44:23
Speaker
And we're increasingly seeing interesting results here that indicate maybe actually there's something to be done. Kyle O'Brien had a paper out a couple days ago talking about how if you just remove biological materials information from the training data when you do pre-training,
00:44:37
Speaker
you end up with models that are somewhat tamper-resistant even when you try to reintroduce that later in fine-tuning. That is the kind of research you want to be seeing a whole lot more of right now. You want to find the kind of research that means that if we develop it, it doesn't have to be in the hands of one actor forever, that one guy is not declared the total controller over intelligence.
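As a toy illustration of the general idea, not the method from the paper Luke mentions: pre-training data filtering can be as simple as dropping documents that match a blocked-topic list before training, so the model never learns the material rather than having it suppressed afterwards. Real pipelines would use trained classifiers; the terms below are placeholders.

```python
# Illustrative sketch only: filter topic-specific material out of a
# pretraining corpus before training.
from typing import Iterable, Iterator

BLOCKED_TERMS = {"pathogen synthesis", "culturing protocol"}  # toy placeholder list

def filter_pretraining_corpus(docs: Iterable[str]) -> Iterator[str]:
    """Yield only documents containing none of the blocked terms.
    Keyword matching stands in for a trained topic classifier."""
    for doc in docs:
        text = doc.lower()
        if not any(term in text for term in BLOCKED_TERMS):
            yield doc

# clean_docs = list(filter_pretraining_corpus(raw_docs))  # then pre-train on clean_docs
```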
00:44:57
Speaker
And then, of course, you really want to work on technology that helps democratize this tech with humans still in control. Again, part of what we're working on here is trying to find uses for these last-mile automation tasks, taking advantage of an individual's data, finding ways to make that even more competitive for them, even as there are larger models.
00:45:18
Speaker
This sometimes looks like modifying an existing model. It might look like doing something entirely different. But it means finding ways to put existing human data to use so that the tools that you control are the ones that are helping you do better and that don't disempower you.
00:45:32
Speaker
You also want to work on the kinds of tech that could help strengthen democracies. I think Audrey Tang's vision here is quite inspiring. And so I think those are the three buckets I talk about. Tech that actually makes it possible that if we build it, it's going to be diffuse as opposed to a monopoly.
00:45:49
Speaker
Tech that keeps humans firmly in charge. And technology that is able to help strengthen our democracies, such that if we can't prevent a monopoly, we have fallback options.
00:46:02
Speaker
One of the ways I think about this, to close this loop here, is social media. I think there are two problems in social media, or two approaches, and I think you should take them both concurrently.
00:46:14
Speaker
One approach, the kind of common one, is to say that social media is super addictive, and so the government should regulate it in some way. The government should restrict certain kinds of features that are in it, or age-gate it, or something like this.
00:46:28
Speaker
I think an approach that is oftentimes less appreciated, and is absolutely necessary because you can only regulate things so much, is to also introduce technological alternatives. There has been a massive rise of screen time apps, for example, Opal's one of them, where
00:46:42
Speaker
you download a thing and it helps you reclaim your focus, because a whole lot of algorithms are pointed at you, and now you need something pointed outwards. We're trying to build the thing that's pointed outwards, because so many people are trying to take your job or take you out of the economy. And we think we can build tools to keep you in it.
00:46:57
Speaker
And I think if we're right, that could be one of the largest markets in history, because if you are building the tools that help keep people involved, people are going to want to be involved. They're going to want to stay involved in the future.
00:47:09
Speaker
And I think that's a pretty powerful tool to be building, both from an impact perspective and from a market perspective. We're facing this tension between trying to control the downsides of AI by centralizing it, and spreading the upside by giving as many people as possible access to the models.
00:47:30
Speaker
So one answer to this tension is just to say that we need to open-source AI fully. What do you think about that vision, and how does it interface with what you're talking about?
00:47:42
Speaker
So I am probably more pro open source than the average guest on this podcast, and part of this is because of this real fear of monopolization.
00:47:53
Speaker
I think it is the case that if open-weights models are not a core part of the future, then providers can increasingly charge these wild rents. I think there are a couple of actors who have strong incentives to build open-weights models, so I don't think it's the case that they're going to fall behind in the near future.
00:48:11
Speaker
I also think there's this very pervasive argument, especially within the AI safety community, that open-weights models are always going to be behind. It is absolutely true that in a hard-takeoff scenario, where you just foom and go straight to superintelligence, that's going to be the case. Someone's going to win that race.
00:48:29
Speaker
That's game over. In basically every other scenario, what we have seen is the exact opposite. I remember hearing a couple of years ago that there was no way open-weights models could catch up, they were too far behind, and especially that there was no way China could catch up.
00:48:41
Speaker
It's just impossible. Well, Chinese open-weights models right now are about six months behind the frontier, and some of them, I think, are even ahead. Kimi K2, for example, is a really excellent English writing model; I would wager it's probably the state of the art at that.
00:48:57
Speaker
This does not look like we are ceding ground, or like open-weights models are slowing down. The gap continues to close, even for providers that have less access to high-quality compute.
00:49:08
Speaker
There's something going on, both in the way we train them and in the data we're using, that still provides advantages; compute isn't everything. So when the argument I often hear is that open weights can't catch up, that it's not a core part of the story, I just don't think that's true.
00:49:22
Speaker
I think if you're taking AI safety seriously, you're going to have to focus on making open-weights models safe, because open-weights models are going to be a reality and they're going to be quite powerful. How do we do that, though? I guess that's the main worry with open-weights models: if we put something out there that's open weights, we can't then take it back.
00:49:44
Speaker
We don't have the feedback loop of testing something, pulling it back, and then perhaps putting a more limited version of the model out there. So how do we deal with a technology where, if we release it, that capability suite is out there indefinitely?
00:50:03
Speaker
Yeah, this is where, again, Kyle O'Brien's work is quite important. It's the kind of work you want to do here: creating tamper-resistant open-weights models, such that reintroducing the information by tuning them in a certain way breaks them or doesn't work.
00:50:17
Speaker
I've talked with Kyle a bunch, so I know some of his work is forthcoming, and I don't want to jump the gun on anything here. But as a general note, the holy grail here is a model that, when you try to reintroduce this information, just stops working or breaks because of something they've done to it.
00:50:33
Speaker
I don't want to preempt any announcements; I know there are people working on this across a broad variety of sectors. But those are the kinds of safety innovations that I think are extremely important and that expand our option space. If you are someone who thinks doom is really likely, the best thing to do is not to keep evaluating the model to see if we're getting closer.
00:50:49
Speaker
Because if we're getting closer, we're going to actually have to do something about it. From a technical safety perspective, right now you're betting on this catastrophic warning shot that I'm not convinced actually slows anything down. We have a seven-paragraph footnote in The Intelligence Curse,
00:51:05
Speaker
which we couldn't fit in the main text, talking about how in a whole lot of scenarios a warning shot actually just increases the speed of AI progress, because somebody gets spooked by it and the response is: we need better defenses, faster.
00:51:16
Speaker
So if you're counting on "we're going to keep evaluating the thing, and then we're going to see that it's dangerous, and we're going to stop building it": best of luck. I don't think that is an extremely tractable approach. I think investment is better spent by a whole lot of extremely talented technical experts on actually building out the capabilities required to make even open-weights models tamper-resistant and safe.
00:51:39
Speaker
And I think this is genuinely achievable. I don't think this is an intractable agenda. We have seen more progress on it than I expected to see, and as people have chipped away at it, as papers have made it clear that this could be possible,
00:51:51
Speaker
more and more people are starting to get excited about it. That's the direction I want to go here. If we don't have the option of controlling AI through a central authority, it seems to me that we are somewhat at the mercy of how the technology just turns out to be.
00:52:09
Speaker
So if it is the case that we can limit what models can output, and perhaps have the models stop if you try to use them to create biological threats, say,
00:52:20
Speaker
well, that's great. But what about the next possible danger, and the one after that? If we don't have a way to control AI as at least a backup option, are we just at the mercy of how the technology turns out to work?
00:52:39
Speaker
Yeah, this is one of the concerns. We are at the mercy of how fast we can rush our defenses. But that means rushing our defenses is perhaps one of the most important things we could be doing. And with other risks, we recognize this. Pandemic preparedness, for example: we can't ban pandemics. It's not possible.
00:52:54
Speaker
Pandemics are always a background risk throughout the world, and yet our response can't be to do nothing. Our response has to be: we know this is a possibility, it's on our threat map.
00:53:04
Speaker
What's everything we can do to build a Swiss-cheese model of defense for pandemics? I think that approach is extremely relevant to AI dangers. One other thing I'd say here: the kinds of proposals I'm talking about, the ones I'm explicitly opposing here, are those that try to do a controlled superintelligence explosion.
00:53:21
Speaker
The kind where we say: all right, twelve people racing after AI is too many, one actor is going to do it, and we're going to monitor them every step of the way. What that policy results in is one person, one body, one entity having a unilateral advantage over everyone else forever, if they actually achieve this kind of hard takeoff.
00:53:41
Speaker
And then you are just at the mercy of the people who control the weights. Aligned superintelligence in the hands of one person makes that person a de facto dictator unless they choose not to be. And that is not a good outcome.
00:53:52
Speaker
Now, there's a separate category of policies, which I'm not necessarily supporting, this is not me endorsing them, but I don't think they unlock the kind of intelligence-curse-style risks. And that's if we just don't build it.
00:54:03
Speaker
So I think you can very consistently say: the intelligence curse is real, and therefore I'm going to advocate for never building systems that can replace humans.

Aligning AI with Societal Values

00:54:13
Speaker
I don't know how tractable that policy is.
00:54:15
Speaker
I'm not sure that's the right approach, but I don't think "no one gets it" unlocks the risk. The concern I have is that a whole lot of well-meaning people are going after "one guy gets it." And I think the much more likely set of outcomes is not between zero and one on extremely powerful AI,
00:54:29
Speaker
it's between one and many. And if those are my two options, man, I'm definitely for the latter over the former. And the latter is a world you can move towards. Spreading AI capabilities: when I read the founding essays of OpenAI, that seems to be the vision they had. They wanted to make sure Google didn't have a monopoly on AI technology, and they wanted to empower everyone with AI models. And that vision seems to have degraded over time.
00:55:01
Speaker
How do you make sure that doesn't happen to the vision you have for Workshop Labs? It is one of the things I think about the most, because the road to hell is paved with good intentions.
00:55:13
Speaker
It is paved with people who are working on things that ultimately end up working against their cause. Now, there are a couple of things here. There's the basic legal stuff: we're a public benefit corporation with a fiduciary mission not to automate people away.
00:55:27
Speaker
In lawyer-speak it's "enhancing economic opportunity," but that is explicitly our goal. This is instead of the generic thing, like "to make sure AI benefits people."
00:55:37
Speaker
Okay, but what does that mean? Does it mean we're going to put AI in charge because we think that will benefit people? Or does it mean we're going to try to do a specific thing? In our case, that's the economic empowerment argument.
00:55:48
Speaker
It is our mission to make sure that AI actually meaningfully increases your power in the economy rather than decreasing it. I'm also a believer that personnel is policy, so the kinds of people you bring onto the team will push you in certain directions.
00:56:02
Speaker
Our hiring process is laser-focused on mission alignment, and it helps that we have been incredibly public. We kind of stumbled into this company by accident: we had worked on a bunch of research in the area quite publicly, then realized we had to propose a technical agenda and wanted to go after parts of it ourselves.
00:56:17
Speaker
But of course, there's also the broader question of what you do technically. This is why we are so committed to launching on day one with extremely strong privacy guarantees. Because you shouldn't have to trust that if you hand all of your data to me, I'm going to be a good steward of it.
00:56:32
Speaker
What you should instead know is that there's literally nothing I can do to use it in a nefarious way. That's a much more powerful guarantee. It's not a trust-but-verify thing. It's: I can demonstrate to you that we have taken every measure humanly possible to prevent ourselves from training a larger model on your data.
00:56:48
Speaker
So every piece of data we get from you is used for your benefit, and we can't use it against you or sell it to your boss. That's different from a promise. We're trying to give an actual guarantee that we can't use the data in this way.
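As a concrete illustration of the difference between a promise and a guarantee, here is a minimal sketch of one way a "we can't read or train on it" property can be enforced technically rather than promised: the user's data is encrypted client-side with a key only the user holds, so the provider stores ciphertext it cannot read. This is a hedged sketch of the general principle, not Workshop Labs' actual architecture; the class and variable names are hypothetical, and it assumes the third-party cryptography package.

```python
# Minimal sketch: client-side encryption makes misuse impossible, not just forbidden.
from cryptography.fernet import Fernet

class UserVault:
    """Client-side store: plaintext never leaves the user's machine unencrypted."""

    def __init__(self) -> None:
        # Key generated and kept locally; the provider never sees it.
        self._key = Fernet.generate_key()
        self._fernet = Fernet(self._key)

    def seal(self, document: str) -> bytes:
        """Encrypt a document before uploading it to the provider."""
        return self._fernet.encrypt(document.encode("utf-8"))

    def open(self, ciphertext: bytes) -> str:
        """Decrypt data returned by the provider, locally."""
        return self._fernet.decrypt(ciphertext).decode("utf-8")

# What the provider stores is opaque: without the user's key, the ciphertext
# can't be read, let alone fed into a training run.
vault = UserVault()
blob = vault.seal("quarterly notes: my negotiation strategy with vendor X")
assert vault.open(blob) == "quarterly notes: my negotiation strategy with vendor X"
print(f"provider sees {len(blob)} opaque bytes")
```

In practice, letting a model actually use the data while preserving a guarantee like this requires heavier machinery, such as confidential computing, secure enclaves, or computation that runs only on the user's side, but the principle is the same: the operator's inability to misuse the data is enforced by the system, not by policy.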
00:57:02
Speaker
That presents lots of novel challenges for our team, but I think it also presents some novel opportunities, both as to how we position ourselves and the kinds of things that we can do to help make your experience better as opposed to worse.
00:57:14
Speaker
We want these models to genuinely be aligned to you and loyal to you alone, and we're going to keep that vision centered as we continue to work on this. It is really one of the big technical, perhaps even political, questions of our time.
00:57:29
Speaker
We have AI models that are aligned to certain interests. There's a whole separate question of whether we can even align them to certain interests at all, and that, in my opinion, is an unsolved problem. But they happen to have certain goals, certain preferences,
00:57:47
Speaker
and those preferences are a mix of what the companies are interested in, what governments are interested in, and what end users are interested in. The balance between which preferences should be strongest in the model, that is a very interesting question, and something that we...
00:58:09
Speaker
Yeah, I think there's a lot of work to be done there. For example, before long I expect us to have personal agents that can do our email and our calendar for us.
00:58:23
Speaker
Is that agent working on my behalf when I ask it to book a hotel for me? Or is there perhaps a corporate preference to book a certain hotel that OpenAI might have an agreement with, something like that? You could quite easily see the preferences of the model becoming muddled between what the end user wants and what the companies are interested in.
00:58:50
Speaker
Do you see a principled way to solve this, or is this just like any other product, where the company selling the product is interested in something and the consumer is interested in somewhat the same thing, but the preference sets do not perfectly overlap?
00:59:07
Speaker
I think if you talk to a model and ask it for something, it should do one of two things: either answer in your interest, or tell you when it's not. If we're going to go down the rabbit hole of LLM monetization via advertisements, it should be exceptionally clear what is an advertisement and what isn't.
00:59:25
Speaker
I think search started out this way, and it's less so now. But even still, if you search for something on Google, you can see which results are ads. This should be really obvious.
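For what "exceptionally clear" could mean in practice, here is a minimal sketch of a response format in which sponsored content travels in a separate, machine-readable field instead of being blended into the answer. The schema is entirely hypothetical, an illustration of the idea rather than any provider's actual API.

```python
# Minimal sketch: sponsored content structurally separated from the answer.
from dataclasses import dataclass, field

@dataclass
class Sponsored:
    sponsor: str                   # who paid for the placement
    content: str                   # the promotional text
    disclosure: str = "Sponsored"  # label the client must render alongside it

@dataclass
class AgentReply:
    answer: str                                      # the model's best answer
    sponsored: list[Sponsored] = field(default_factory=list)

    def render(self) -> str:
        """Render the reply with ads visibly separated, never interleaved."""
        parts = [self.answer]
        for ad in self.sponsored:
            parts.append(f"[{ad.disclosure} by {ad.sponsor}] {ad.content}")
        return "\n".join(parts)

reply = AgentReply(
    answer="The three best-reviewed hotels near the venue are A, B, and C.",
    sponsored=[Sponsored(sponsor="HotelCo", content="HotelCo has rooms from $120.")],
)
print(reply.render())
```

Because the disclosure lives in the data structure itself, a client can't render the ad without its label, the structural analogue of search results that visibly mark which links are paid.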
00:59:36
Speaker
Of course, I happen to believe, and it's what we're building, that these things should be loyal to your interests; that OpenAI or Anthropic or us shouldn't sign some sort of deal and then, disguised and undisclosed, nudge you: hey, by the way,
00:59:49
Speaker
here's a hotel you should be looking at. It's a really bad situation to be in if your model doesn't work for you. And this is just true as a consumer: you want to know that when you ask something for advice, you are getting the kind of advice, the kind of information, the kind of truth you would give to a friend you genuinely care about.
01:00:10
Speaker
What makes these tools useful is that they work on your behalf. There's a recent Black Mirror episode that really stuck with me, in the new season, where a woman has a brain transplant and they upload
01:00:24
Speaker
half of her brain to the cloud. And this is great, because she's still alive. But every couple of hours she turns off and gives an advertising pitch about something.
01:00:35
Speaker
She has no recollection of giving the advertising pitch; she wakes back up and doesn't even know what's happened. She only finds out because other people tell her: hey, why did you just bring up this travel site in the middle of your lecture? And it's great that the technology has enabled her to do this really cool thing.
01:00:53
Speaker
She's still alive, she's able to live her life, except of course when she suddenly needs to give a sponsored ad, or when she goes out of the coverage area. And they have this monopoly control over her, because you don't have competing vendors for your brain upload.
01:01:06
Speaker
You've got half your brain here, half of the processing power in the cloud, and only one company has that chip. So what ends up happening is they start her on a very cheap plan: it's only a few hundred bucks a month to be alive.
01:01:17
Speaker
And then they say: oh, we have this deluxe plan now, and you can go outside the coverage area if you buy it. Oh, you're on our freemium tier now, and if you just upgrade a little more, you can get rid of the advertisements.
01:01:27
Speaker
And suddenly, the thing that made your life so much better is now a massive hindrance to your quality of life because one guy has total control and gets to jack up the rents as they see fit.
01:01:39
Speaker
That is the kind of scenario we're trying to avoid. Part of this comes through democratizing this technology; part of it comes through ensuring these systems are actually loyal to you. My expectation is that if we get to the good future, everyone has an agent that's aligned to them, that advocates for their interests, and that they know is working for them.
01:01:56
Speaker
One thing I'll add to close the loop here: one of the places I really agree with Sam Altman is his concept of AI privilege. The idea that if you're giving this much information
01:02:08
Speaker
to a system, it probably shouldn't be used against you. And this is different from other technologies, so I'm probably someone who'd advocate for more privileged technologies rather than fewer, even among the status quo ones. But if you are constantly interacting with this thing and it's helping organize your life, that's a powerful tool in the hands of someone who wants to be nefarious toward you, who wants to understand your life, who wants to interrogate it instead of you.
01:02:29
Speaker
And because it's a chatbot, it's not going to know when it should, well, maybe it could, but it may not know when it should invoke something like a Fifth Amendment right. It's not clear. It doesn't have Fifth Amendment rights right now; it probably doesn't have a right against incriminating you.
01:02:40
Speaker
And if it has that much access to your life, it probably should. That's one of the more genuinely value-aligned things OpenAI has called for recently, some concept like that, and I endorse it wholeheartedly.
01:02:53
Speaker
Yep, both on clearly stating when advertising is happening in model outputs, and on privacy or AI privilege. I do fear that consumer preferences just don't favor these things. If we look at social media, if we look at
01:03:16
Speaker
digital services in general, it seems to me that consumers are interested in free, ad-supported products, and companies are interested in hiding to the maximal extent what is an ad and what is not, just because advertising is more effective if you can't tell the difference between an ad and generic information.
01:03:35
Speaker
It's more effective when an influencer personally endorses. Yeah, when an influencer endorses a product, but that endorsement is happening because they're getting paid, not because they actually like the product.

Adapting to Economic Changes

01:03:49
Speaker
Yeah, like sponsored content, things like this. Yeah, exactly. So you have those two forces now. Doesn't this point in the direction of the default AI future being ad-supported, a future in which it's difficult to tell what is advertising and what is not?
01:04:07
Speaker
Yeah, I think that is the default future. It's why we exist. If I thought the market was going to correct itself here on its own, that it didn't require an insurgent actor to work on this,
01:04:18
Speaker
we wouldn't exist. If we didn't think someone needed to build the technology to make the future better, we would do something else. But part of this is aligning your incentives with your customers. I cannot talk enough about Apple.
01:04:31
Speaker
I think this is a fantastic case study in aligning your incentives so that you're serving the right people. Where does Apple make its money? From the device it sells to you. And you as a consumer have a very strong preference for that device working.
01:04:43
Speaker
And one of the places where we haven't seen this trend of injected advertisements really work is actual personal devices, the one device that's your gateway to everything. Sure, lots of content on that device has this injected information, but you know your device works for you.
01:04:58
Speaker
And actors have tried this. Amazon's Kindle had, and I think might still have, ads on the front black-and-white e-ink page. I'm not sure that's ever worked on anyone; it's never worked on me, at the very least.
01:05:11
Speaker
But even with strong incentives, the vast majority of mobile devices don't serve you ads natively. The apps on top of them do. And I think this speaks to a very important point:
01:05:22
Speaker
sometimes you need the thing to work for you, and you need to know that it works for you. This, again, is a really massive market opportunity. And it's especially true when you're building things that hold a lot of data the user proactively hands over to help them do their job.
01:05:39
Speaker
With that kind of product, users, at least in our initial conversations, are more skeptical of handing over all this data unless they know it works for them. And being the provider of the thing that people know works for them, and that delivers value to them, is a really powerful position to be in. I think a lot about companies like Apple.
01:05:56
Speaker
Yeah. Perhaps as a final topic here, let's talk about a great essay you wrote on how to respond to the special time we're living in.
01:06:08
Speaker
It is a time in which AI progress is moving incredibly fast, and you call for moonshots, starting a startup, say. What is it that young people especially should be looking at in these times?
01:06:24
Speaker
The default paths are closing. And this is true no matter what. I wouldn't bet the house on any one intervention, right? My company could win everything; we could do everything we set out to do,
01:06:35
Speaker
and the consulting jobs are still going away. I have no interest in changing that part of the pattern; it's not our job. Our job is to ensure that the next iteration of the economy works for you,
01:06:49
Speaker
that when this change is said and done, you're in a better position than ever before to achieve, as opposed to a worse one. But the economy is still going to change. Even technologies that create new jobs, if that's the way we can move the pendulum, toward being a job creator instead of a job replacer, even those change the nature of the economy.
01:07:10
Speaker
And I think that's going to happen basically no matter what; you're already starting to see it. The Fortune 500 company your parents told you you had to join when you graduated from that prestigious college, because come on, man, we didn't pay for all that tutoring for you to do a startup, or join a think tank, or go to some small company no one's ever heard of?
01:07:29
Speaker
Those riskier paths are now the least risky options, because they are still opportunities for you to win. They require you to think on your feet, be bright, do well, and really understand the environment around you.
01:07:41
Speaker
The safer jobs are the first target for automation, because companies with 500,000 people on the payroll are going to want to cut some of that payroll. If you are an n-equals-one person at a company, if you do an important job that nobody else can do by virtue of being there, you are much safer than if you do a job that a thousand other people at your company also do, because in that role you are extremely automatable.
01:08:04
Speaker
And I think that's what we're going to see. The automation of rote tasks has the opportunity to do one of two things. It can be the start of total pyramid replacement, where we as a society decide that our value is to replace all work and hope the next thing works out.
01:08:18
Speaker
Or it can be an opportunity for us to build an economy that is more local, more individual, one that allows you as an outsider to have more opportunities than ever to move in and become somebody.
01:08:31
Speaker
But that's not going to happen if you don't change your path now. And I think this is especially true for the classic prestige paths: people who got straight A's, nailed their SATs, went to the right college, and have only ever done the right thing according to the status quo.
01:08:48
Speaker
No matter what happens in the next ten years, I think now is the time for these moonshots, because we know the window is still open. It's become easier than ever, and everything else looks more risky. So if you are someone who has hesitated to do the risky thing, and Jane Street has knocked on your door and McKinsey has come calling, saying, look, here's a massive paycheck,
01:09:05
Speaker
come do this for a year or two: know that you are going to be on the last chopper out of Saigon. If you manage to get yourself in through that, you are the last breed of consultants. That industry is dying. You are the last breed of entry-level whatever.
01:09:18
Speaker
If we can win, we're moving towards a more specialized economy. And no matter what happens, if that's the winning play, I think you should take it. So: a strong urge for people to take more risks during this time. I think it's more important now than ever.

Conclusion

01:09:33
Speaker
Luke, thanks for chatting with me. It's bit it's been really interesting. Yeah, Gus, this has been great.