
Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)

Future of Life Institute Podcast

Benjamin Todd joins the podcast to discuss how reasoning models changed AI, why agents may be next, where progress could stall, and what a self-improvement feedback loop in AI might mean for the economy and society. We explore concrete timelines (through 2030), compute and power bottlenecks, and the odds of an industrial explosion. We end by discussing how people can personally prepare for AGI: networks, skills, saving/investing, resilience, citizenship, and information hygiene.  

Follow Benjamin's work at: https://benjamintodd.substack.com  

Timestamps: 

00:00 What are reasoning models?  

04:04 Reinforcement learning supercharges reasoning 

05:06 Reasoning models vs. agents 

10:04 Economic impact of automated math/code 

12:14 Compute as a bottleneck 

15:20 Shift from giant pre-training to post-training/agents 

17:02 Three feedback loops: algorithms, chips, robots 

20:33 How fast could an algorithmic loop run? 

22:03 Chip design and production acceleration 

23:42 Industrial/robotics loop and growth dynamics 

29:52 Society’s slow reaction; “warning shots” 

33:03 Robotics: software and hardware bottlenecks 

35:05 Scaling robot production 

38:12 Robots at ~$0.20/hour?  

43:13 Regulation and humans-in-the-loop 

49:06 Personal prep: why it still matters 

52:04 Build an information network 

55:01 Save more money 

58:58 Land, real estate, and scarcity in an AI world 

01:02:15 Valuable skills: get close to AI, or far from it 

01:06:49 Fame, relationships, citizenship 

01:10:01 Redistribution, welfare, and politics under AI 

01:12:04 Try to become more resilient  

01:14:36 Information hygiene 

01:22:16 Seven-year horizon and scaling limits by ~2030

Transcript

Introduction to Benjamin Todd and 80,000 Hours

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Benjamin Todd. Benjamin, welcome to the podcast. Hi, thanks. Great to be here. Do you want to start by introducing yourself to our audience? Maybe talk a bit about what you're working on at the moment?
00:00:16
Speaker
Yeah, in the past I founded 80,000 Hours with Will MacAskill and was then the CEO for 10 years. In the last year or so, I've been focusing on writing about understanding AGI and how we can respond to it, both individually and as a society.

Guide for Careers Tackling AGI

00:00:35
Speaker
And the main thing I'm working on right now is a guide to careers that tackle AGI, for 80,000 Hours. So, one of your essays is about reasoning models.
00:00:46
Speaker
This is a reasonably new phenomenon where you can have an AI model think for longer on certain questions. Maybe you could tell us how that works?
00:00:56
Speaker
What are the advantages?

Chain of Thought in AI and Reinforcement Learning

00:00:58
Speaker
The basis is a very simple innovation called chain of thought. With a large language model, when you ask it a problem, instead of asking it to generate the solution in one shot, you ask it to generate a chain of reasoning towards that solution. So you say, okay, we're going to solve this math problem.
00:01:19
Speaker
How would you reason towards that? It then produces a token of reasoning, reviews that token, produces another one, and builds a long chain towards the solution. You already get a big boost just by using chain of thought.
00:01:35
Speaker
But where it really gets going is when you use reinforcement learning on top of that. If the solution is correct, you adjust the model, which is the reinforcement, to make it more likely to do things like that next time.
00:01:49
Speaker
And you can do that loads and loads of times, with loads of examples, until the model gets better and better at generating chains of reasoning that tend to lead to correct answers. Are there any deep technical reasons that this has only recently started working? Reinforcement learning is not a new technique. Maybe chain of thought wasn't possible to the extent that it is now?
00:02:10
Speaker
Yeah, so chain of thought started working a bit with earlier GPT models, definitely by GPT-4. But the reasoning model paradigm has really only gotten going in 2024.
00:02:22
Speaker
Maybe the wider world still has not quite recognized this, because these models are best at things like difficult mathematical and scientific reasoning, which most people aren't doing in their day-to-day life. They're just using it as a chatbot, and they haven't realized
00:02:36
Speaker
how much better it's gotten at these things. But in terms of why it only started working in 2024, I'm not actually sure anyone totally knows the answer. At a very high level, though, one way this can happen is: if each step of reasoning only has a 90% chance of being right, a 10% chance of being wrong, then by the time you've tried to reason through 20 steps, you only have about a 12% chance of being correct.
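To make that arithmetic concrete, here is a minimal sketch. The 90% per-step figure and the 20-step chain are the numbers from the conversation; the rest is illustrative:

```python
# Probability that a reasoning chain is correct after n steps,
# assuming each step is independently right with probability p.
def chain_success(p: float, n_steps: int) -> float:
    return p ** n_steps

print(f"{chain_success(0.90, 20):.1%}")  # ~12.2%, the ~12% quoted above

# Small per-step reliability gains compound over long chains, which is
# one story for why long-form reasoning "suddenly" started working:
for p in (0.90, 0.95, 0.99):
    print(f"p={p}: {chain_success(p, 20):.1%}")
```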
00:03:04
Speaker
So previously, language models couldn't keep it together for long enough to really get to any answers. What seems to have happened around early 2024 is that the models just about got to the point where they can reason for quite a while: at least minutes, and maybe the equivalent of a human thinking about something for an hour.
00:03:29
Speaker
And the next thing is: if you can't even get close to an answer, you can't do reinforcement learning, because there's no reinforcement signal. But once you start getting the right answer some reasonable fraction of the time, you can get the flywheel going and use reinforcement learning to make it even better.
00:03:48
Speaker
Yeah, the underlying model has to be of a certain quality; it has to produce the right answer a reasonably high percentage of the time for this reasoning to work.
00:04:00
Speaker
Yeah. And I actually think this phenomenon comes up a lot in different parts of AI. I think we might end up with quite a similar thing happening with agents, where right now they kind of don't really work. At each step, they just fall apart.
00:04:14
Speaker
You can't really do reinforcement learning, but we might suddenly get to a point where they start to work pretty well. And then you can use reinforcement learning to make them even better, and you get quite dramatic change. Yeah.
00:04:25
Speaker
And this is really, I think, a common experience when looking at how AI is developing. There seem to be these thresholds where AI is bad at something until it's suddenly pretty good at that thing.
00:04:39
Speaker
Just a couple of years ago, I was discussing with AI experts whether large language models could ever become good at math or programming. With reasoning models, it now seems that AIs are excellent,
00:04:51
Speaker
maybe the best, at exactly math and programming. So maybe we could see something similar with agents, you think.

AI's Divergence in Formal vs. Creative Tasks

00:05:00
Speaker
What actually is the connection between reasoning models and agents?
00:05:06
Speaker
One very simple connection is that if you have a really good reasoning model, you can use it as the brain of the agent, the planning module.
00:05:17
Speaker
The better the reasoning models we have that can do good planning and figure out what the right next step should be, the more likely agents are to work. Now, one advantage of reasoning models is that they might be able to generate data that can then be used to train the next generation of models, or even the same model.
00:05:38
Speaker
How can this possibly work? It seems like an idea that's too good to be true. Yeah, it works in this case just because the solutions can be easily verified.
00:05:55
Speaker
I can have a large language model solve a bunch of math problems, and it's often quite quick and cheap to check which solutions are actually correct. And then at the end of that process,
00:06:07
Speaker
you actually just have a bunch of new, correct solutions to these problems, and also a whole chain of reasoning that leads to each solution. And that's super good training data.
00:06:18
Speaker
And there's nothing circular about it; it rests on the solutions being easily verifiable. So you would expect reasoning models to be less useful in domains that are not easily verifiable.
00:06:34
Speaker
For example, I'm thinking of writing fiction, writing a novel. It's difficult to get feedback on whether the novel is good. Is there even something about the quality of a novel that can be formally verified?
00:06:53
Speaker
Will we see this divergence between domains that can be easily formalized, where we have strong progress, and domains that can't be formalized, where we perhaps don't?
00:07:04
Speaker
Yeah, and that's what we've seen in the last year: there's been a huge divergence, with way more progress in the hard scientific domains than in any others. And looking forward, you could almost see this as the key question of forecasting AI progress: how many domains will be amenable to reinforcement learning?
00:07:26
Speaker
Will we just be able to ride the current techniques to superhuman levels of performance across most tasks? Or will it be limited to math, science, and programming?
00:07:38
Speaker
There are a few things that go into that. One is that it seems true, at least to a small extent, that if a model gets really good at math and science, it does actually get a bit better at everything else.
00:07:51
Speaker
It is learning some type of general logical reasoning that is useful, but it remains to be seen how big that effect will be. And then the other thing is: how good can we make the reinforcement signals in these more nebulous domains?
00:08:07
Speaker
How that's going is getting a bit outside my expertise. But I understand that with something like writing, you might be able to use AI models to rate intermediate outputs.
00:08:21
Speaker
So you could have an evaluation model which checks the output, and use that as a reinforcement signal. You can also use human feedback, though obviously that type of data is much more expensive to gather.
00:08:32
Speaker
And then there's the final feedback that

Feedback Loops in AI and Accelerated Development

00:08:34
Speaker
comes from whether the novel sold a lot of copies, though that's a very long-horizon thing, so you can't get a fast iteration cycle with it. Yeah, that's an interesting point. Does this mean that when we're looking at a question like: does this piece of code compile correctly, does this model do what I want it to do, what's the accuracy of this model? Those are questions that can be answered rather quickly, in a kind of fast feedback loop.
00:09:02
Speaker
When you're interacting with the world at large, you're interacting with human systems that move slowly. So the question is whether there will be a wall, where reasoning models, and perhaps agents, won't be able to interact with the human world as well, because the feedback is simply too slow, the feedback cycle isn't fast enough. Yeah, it's more that you won't be able to rapidly train models with those feedback signals. But it's possible that you'll be able to break things down again into much smaller tasks
00:09:38
Speaker
where you can get quick feedback, and chain them together. So I think it really remains to be seen how well all this will work, and it's a central question about the next couple of years of AI progress.
00:09:49
Speaker
Yeah. There's another quite central question, which you mentioned before: how much progress do we get if reasoning models are only good at programming, mathematics, and the hard sciences?
00:10:04
Speaker
How much progress would you expect from models being good in only those domains? I think in terms of economic growth, it's possible it would be quite small, because not much of the economy is difficult scientific reasoning.
00:10:19
Speaker
But if I were going to make a bolder case, it's possible those systems could be very useful for accelerating certain parts of scientific research. And those scientific discoveries could then cause a lot of economic growth.
00:10:31
Speaker
The strongest case for acceleration would be: well, these models are still not very good at social skills, business strategy, or physical manipulation, lots of things you would need for a very general AGI. But if they're super good at programming and maths research,
00:10:49
Speaker
that could be really useful for doing AI research in particular, doing ML research. And that could then unlock the next paradigm or wave of progress after that.
00:11:00
Speaker
So I think that would be the strongest case for rapid progress based on this. And why is it that AIs are particularly well suited to doing AI research? Well, the biggest thing is what we've just been saying: ML and programming are domains where you can get this reinforcement signal.
00:11:19
Speaker
So the current models are becoming really good at exactly the types of tasks you need to do AI research well. But there are a few other factors.
00:11:30
Speaker
One is that it's fully virtual. You can do loads of experiments without having to wait for lab results or for something to happen in the real world, which is the other thing you were just saying. And then there's another big factor: it's also what the people doing AI research understand best, how to do AI research. So it's very natural for them to try to use the things they're developing to help with their own work.
00:11:53
Speaker
Yeah, isn't there a big barrier here in terms of training runs being incredibly expensive? There's probably some information about machine learning research, or results in that field, that you can only get by running experiments that are very expensive.
00:12:10
Speaker
Totally. The extent to which that's true is, in a way, the key to seeing whether there's going to be something like an algorithmic, software-only feedback loop, an intelligence explosion based on that, or not.
00:12:21
Speaker
So if we get virtual AI researchers, you can think of that as really expanding the labor pool of people doing AI research.
00:12:33
Speaker
But there are two main inputs into AI research. There's the labor, the researcher time. And then there's compute, which you need to run all the experiments, and compute will stay the same in the short term, because that's just determined by how many chips we have in the world.
00:12:48
Speaker
So even if you increase the labor pool a lot, because compute is staying the same, there might not be that big an acceleration of AI research. But the large training runs have historically only taken about three months. So in theory, you could do three whole generations in a year if you were maxing it out, which is still about 10 times faster than we've had in the past.
00:13:17
Speaker
And I think the biggest thing on that is that in this reinforcement learning paradigm, you don't necessarily need to run these massive training runs. They're using much less compute to do the reinforcement learning on top of the large pre-training run, so you can get much faster iteration cycles. And apparently a big trend in the AI labs recently is that they've been preferring to distill the models into smaller, cheaper models,
00:13:46
Speaker
which are a bit less powerful, but you can iterate with them way faster. So you could have 10 generations in the time you previously had one. And then you can actually end up ahead, even if your starting position is a bit worse.
00:14:01
Speaker
Explain that. Is it just because the model is cheaper to run? Yeah, you can just do way more experiments with the same amount of compute. All of the AI companies are still gunning for very expensive, very large training runs.
00:14:16
Speaker
Do you think anything fundamental has changed with reasoning models? And if so, why are we still scaling compute in this very ambitious way? I want to distinguish between the total amount of compute spent on all forms of training, including post-training, and a large pre-training run.
00:14:36
Speaker
And I think the large pre-training runs, like training GPT-5 and GPT-6, have been delayed compared to what we would have guessed a year or two ago. Instead, that compute is now being used for reinforcement learning, or for just increasing inference, so more test-time compute.
00:14:54
Speaker
And soon, I think it will also be used a lot on getting agent experiments going, and getting agents to generate data as well. So you think there's actually been a move away from large traditional foundation model training runs to spending that same compute at inference time and on experiments instead?
00:15:19
Speaker
Yeah, definitely in the last year. Previously the Metaculus forecast was for GPT-5 to be released around now, in March. But when I last checked, they think it's now going to be the summer, July or something like that. And instead, all the recent models that have been released have been reasoning models. So there's been a clear shift recently.
00:15:40
Speaker
Exactly what happens going forward is not clear, but my guess is the returns from improving the reasoning models, or working on agents, will be bigger going forward than just doing another 10x or 100x on the pre-training run.
00:15:57
Speaker
Oh, that's interesting. So this might mean we've crossed some level of quality for the foundation models where it's now more efficient, where there's more low-hanging fruit, in running a model as a reasoning model?
00:16:19
Speaker
Well, my thinking was just that the reasoning model paradigm is still right at the start, so you're still on a relatively sharp curve. Whereas most people think GPT-4 to GPT-4.5 was not a game-changing amount of change.
00:16:37
Speaker
Still, I think it's been slightly overstated how bad it was, because GPT-4.5 caught up with o1 on a bunch of reasoning things without having to do the reasoning part, which actually seems quite good.
00:16:53
Speaker
One useful thing we should touch upon is how likely we are to get a positive feedback loop in AI research. Could you lay out the different kinds of feedback loops we might experience?
00:17:07
Speaker
Yeah, there are different types of positive feedback loops. The one that is most concerning, and has also had the most attention, is a purely algorithmic feedback loop. If you get to the point where you have an AI that can substitute for people doing AI research, you can do some back-of-the-envelope estimates of how many of those we would be able to run in 2027, say, or by the end of the decade, if we used all of our compute to run them.
00:17:37
Speaker
Those estimates tend to be between, say, 1 million and 100 million human-equivalents, in terms of how many tokens of output they can produce compared to humans. So if the quality is also similarly good, it's like expanding the AI research workforce at least 100-fold.
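As an illustration of how such back-of-the-envelope estimates are built, here is a minimal sketch. Every number in it is a placeholder assumption for illustration, not a figure from the episode:

```python
# Hypothetical estimate of AI-researcher "human-equivalents" from inference capacity.
fleet_tokens_per_sec = 1e6     # assumed total inference throughput devoted to research
seconds_per_year = 3.15e7
words_per_day = 2_000          # assumed written reasoning output of one human researcher
tokens_per_word = 1.3
human_tokens_per_year = words_per_day * tokens_per_word * 365

equivalents = fleet_tokens_per_sec * seconds_per_year / human_tokens_per_year
print(f"{equivalents:.1e}")    # ~3e7 with these placeholders, inside the 1M-100M range
```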
00:17:56
Speaker
But there's the factor we just mentioned: the amount of compute wouldn't increase at the same time. So then you have this question: if there were 100 times more AI researchers, how much faster would algorithmic progress actually be?
00:18:11
Speaker
And that's quite a difficult question to model. You can try to estimate historically: as inputs into AI research have increased, how much has, say, algorithmic efficiency increased?
00:18:21
Speaker
One key factor is: each time you double inputs, do you get more than a doubling of algorithmic efficiency, or, more generally, of algorithmic quality overall? The past record is a little ambiguous about that.
00:18:36
Speaker
But Epoch has a paper where they look at some estimates, and they conclude it's around the threshold: it could be below, it could be above. So as a very high-level estimate, it seems like a fifty-fifty whether it would actually become a feedback loop or not.
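A toy version of why that threshold matters, a minimal sketch rather than Epoch's actual model: suppose each doubling of effective research input buys r doublings of algorithmic efficiency, and efficiency gains feed straight back into the effective workforce. The gain per round then follows g(n+1) = r * g(n):

```python
# Toy software-only feedback loop: gains per round grow if r > 1, shrink if r < 1.
def gains_per_round(r: float, rounds: int = 8, g0: float = 1.0) -> list[float]:
    gains, g = [], g0
    for _ in range(rounds):
        gains.append(round(g, 2))
        g *= r  # next round's gain scales with how much the workforce just grew
    return gains

print(gains_per_round(1.2))  # above threshold: accelerating loop
print(gains_per_round(0.8))  # below threshold: the loop fizzles out
```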
00:18:53
Speaker
And once the feedback loop starts, you could also have increasing diminishing returns, which can dampen the feedback loop quickly. Yeah, why would that happen? The idea is that as you make more discoveries, it becomes harder and harder to make further discoveries,
00:19:08
Speaker
because the easiest ones have been taken. To some degree, that's taken into account in the past estimates, because that's also been happening in the past: with each doubling, it's become harder to do the next doubling.
00:19:22
Speaker
But as you get closer to fundamental limits, you might expect the diminishing returns to increase even more than they have in the past. So weighing all of these different factors and figuring out what will happen is difficult.
00:19:37
Speaker
But Tom Davidson has a new paper where he works through the dynamics of all of these, and I think his bottom line is that we would see something like a 3 to 10x speed-up.
00:19:48
Speaker
We'd see 3 to maybe 10 years of AI progress in one year, so probably not more than 10 years in one year. He thinks that's relatively aggressive. A couple of years of progress in one year seems like a reasonable place to be at.
00:20:03
Speaker
Which is a wild thought, because AI progress in, say, 2024 is already pretty fast. Yeah, and you also have to picture when this is happening. This is at a point when AI can already basically do AI research, so it's already very good.
00:20:17
Speaker
And then it suddenly gets three more years of progress in one year. So yeah, that could be pretty crazy. Actually, tell us how it could be crazy. Maybe paint us a picture of the impact of a feedback loop like that.
00:20:32
Speaker
If you just look at the past, algorithmic efficiency has been going up about 3x per year. That means with the same number of chips, you can basically run three times as many copies of the same model.
00:20:43
Speaker
So if you get three years of progress in one year, that's a 27x increase in algorithmic efficiency in one year. It means if you have, say, 10 million automated AI researchers at the start of the year, by the end of the year you can run roughly 300 million, so about 30x more on the same chips, with nothing else changed.
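The arithmetic, with the rounding made explicit, using the figures as stated in the conversation:

```python
efficiency_per_year = 3        # historical ~3x per year algorithmic efficiency
years_compressed = 3           # three years of progress in one year
multiplier = efficiency_per_year ** years_compressed
print(multiplier)              # 27, rounded to "roughly 30x" in the conversation

researchers_at_start = 10_000_000
print(researchers_at_start * multiplier)  # 270 million -> "roughly 300 million"
```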
00:21:04
Speaker
And that's an underestimate, because it's just algorithmic efficiency. In reality, you'd also have three years of improvements in post-training techniques, so reinforcement learning type stuff, or whatever they're doing at that point.
00:21:15
Speaker
And you could also almost do a whole extra generation of pre-training, because, well, it would be half a generation, roughly, with that 30x.

AI's Impact on Chip Design and Industrial Growth

00:21:26
Speaker
So you'd also go from, say, GPT-6 to GPT-6.5 in one year. And all of those things would happen at the same time, yeah. What about chip design? There are the algorithmic improvements, the improvements to the AI researchers doing the AI research themselves, but there are also improvements to the hardware. How would chip design fit into this picture?
00:21:50
Speaker
This is a slightly underappreciated aspect of the situation: even if you don't get the algorithmic feedback loop, it seems much more likely that we do get a feedback loop in chip design.
00:22:03
Speaker
And there are two levels to that. One is that AIs could help with chip design itself. NVIDIA is already using AI a lot to help with its chip designs.
00:22:15
Speaker
So maybe you get a similar type of thing, where you get several generations of chip design progress in one year. You'd need to do the maths on exactly how fast it would be.
00:22:27
Speaker
But then there's the second level, which is simply producing more chips. Historically, the key parameter is: if you double all the inputs into the semiconductor industry, how much more compute do you get out?
00:22:40
Speaker
Historically, it's been much more than a doubling. So the empirical case for this feedback loop working out is much stronger than for the algorithmic one. On the other hand, it's a bit less risky, or a bit easier to deal with, because it will be slower: each generation, you have to produce all the chips and ship them, and that takes significant time. It's not that you can just have three generations in one year.
00:23:04
Speaker
It would probably take three years or something, but it would still be super fast compared to normal economic growth. Yeah, you describe the impact of a feedback loop in AI as an industrial explosion.
00:23:18
Speaker
If you think about AGI-level AI plus robotics, perhaps, what does that look like in your mind? Well, in a way that's the third level of feedback loop: you have the algorithmic feedback loop, then chip design and production, and then the third level is when you can automate industry in general, a complete loop of production,
00:23:43
Speaker
which would require robotics. And that one is almost the one with the strongest empirical support, because if you double the number of workers and factories, you'll roughly double the amount of output.
00:23:54
Speaker
And in fact, it's more than that, because as things scale up, they get more efficient. So you actually get more than a doubling, and that would mean you get faster-than-exponential growth for a while, until you hit some type of diminishing returns.
00:24:09
Speaker
Again, Epoch have just released a new economic model trying to look at this, and they see growth accelerating over a 10 or 20 year period. So it's not a one-off where we get a big leap and then it's flat; things could keep accelerating to maybe very high rates.
00:24:31
Speaker
The end question is: what's the complete doubling time you could achieve if everything were fully optimized, how quickly could things double? And it seems at least possible that could be more than 100% growth per year.
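A minimal sketch of why returns to scale above one give faster-than-exponential growth; the parameters are illustrative, not Epoch's:

```python
# Reinvested output with dK/dt = K**lam. At lam = 1, growth is exponential
# (constant doubling time); at lam > 1, each successive doubling is faster.
def time_to_double(k0: float, lam: float, dt: float = 1e-3) -> float:
    k, t = k0, 0.0
    while k < 2 * k0:
        k += (k ** lam) * dt
        t += dt
    return round(t, 2)

for lam in (1.0, 1.2):
    print(lam, [time_to_double(2 ** i, lam) for i in range(5)])
# lam=1.0 -> ~0.69 at every scale; lam=1.2 -> doubling times keep shrinking
```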
00:24:47
Speaker
What I fear isn't fully coming through when I have conversations like this is how crazy the world would become if something like this happened.

Mainstream Perception of AI's Future Impact

00:24:57
Speaker
Why isn't this front-page news, do you think?
00:25:02
Speaker
Even just the possibility of this, and we can discuss how likely it is, should receive a lot of attention, but perhaps
00:25:13
Speaker
it isn't receiving as much attention as I think is warranted. You know, if robots and AIs could, say, produce the solar panels and chip factories to make enough chips to double the number of AIs and robots within a year...
00:25:29
Speaker
On Earth, we're only using about one ten-thousandth of the solar energy that's coming in. If you get that to one percent, which is still maybe not that high, then 100x more energy use would be possible.
00:25:44
Speaker
And so this doubling could quite quickly get to, say, 100x the output of now. And that would just be getting started, because with the sun, there's, I forget the exact figure, but I think maybe four or five orders of magnitude more energy.
00:26:00
Speaker
So how many doublings do you need to get to 100? If you know your powers of two, it's about seven. So within seven or eight years of doubling, you're at 100x. And then after that, we're suddenly in space, constructing solar panels around the sun,
00:26:16
Speaker
which is, on the scale of things, not that technically hard. So we could literally go from current society to a Dyson sphere being created in the span of, say, 10 to 20 years, which I think is a radical change to the economy that people are not really taking seriously at all.
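Checking the powers of two mentioned here:

```python
import math

# Doublings needed to go from ~1/10,000th of incident solar energy
# (roughly today's usage, per the conversation) to 1% of it, i.e. 100x:
print(math.log2(100))  # ~6.64, so about seven doublings
# At one doubling per year, that's roughly seven or eight years to 100x.
```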
00:26:36
Speaker
Yeah, even people who are pretty into AI. Humans are really bad at extrapolating forward things that haven't happened before. COVID was a great example of this, I think, where you could pretty clearly see an exponential curve of cases in, say, January or February.
00:26:55
Speaker
And basically, very few people took any action about it until it was completely hitting them in the face, and hospitals were overwhelmed and everything just had to shut down. And this is, in a way, a much more abstract and weird thing to think about than people getting a disease.
00:27:12
Speaker
In some sense, the conversation around AI and AGI and superintelligence and all of these terms has become much more mainstream since the ChatGPT moment in 2022.
00:27:25
Speaker
But still, it seems like we as a society are not grappling with some very important questions around this. Is this fundamentally a social problem? Or is it just that people perceive this to be wild speculation, and, you know, "I'll believe it when I see it"?
00:27:44
Speaker
I'm looking out my window; I can't really see anything that's changed. And yet you're predicting all of these radical things. There have been a lot of people throughout history who have predicted radical changes.
00:27:56
Speaker
So do you think there's a concern about seeming weird if you actually believe, and crucially act on, beliefs like this?
00:28:07
Speaker
Well, just as a quick caveat, I'm not predicting this is definitely what will happen. With all of these feedback loops, there's a chance they don't work, or AI doesn't advance to that level in time, that type of thing.
00:28:19
Speaker
It's interesting, because even with myself, I believe some of these things intellectually, yet it still takes me a long time to actually internalize on a more gut level that this could really be happening.
00:28:34
Speaker
I feel the same way. Yeah. I still don't internalize a lot of it fully, for sure. I've internalized more over time. I think "feeling the AGI" is actually a big spectrum.
00:28:47
Speaker
I feel it more and more over time, but I still don't fully feel it. And I think a lot of it is just this: until something is completely hitting you in the face, it's pretty hard for humans to get motivated to do anything about it.
00:29:04
Speaker
There's also a question about whether we can actually fully internalize these beliefs and feel the AGI in our guts, so to speak. We're just not evolved to handle questions like this properly. We're not used to dealing with things that are moving this quickly, on
00:29:26
Speaker
timescales that are this short. So the question is whether we will learn to internalize beliefs that are accurate about our situation before we are severely overwhelmed by it.
00:29:42
Speaker
Yeah, that really remains to be seen. It could well be that most people wake up after it's already quite a bit too late. Though I do think there will be some, whether you want to call them warning shots or just very powerful demonstrations. As we've already seen, many more people are taking it seriously than in the past, as capabilities have improved. And I think that will keep happening, and there'll be more and more waves of people realizing this is a big deal.
00:30:10
Speaker
Which, just as an aside for someone thinking about career planning, means I actually think it's still quite early in many ways. It is a very weird situation, because it does feel like everyone is talking about AI a lot.
00:30:24
Speaker
But the number of people really working full time on tackling this, especially the risks, is still probably under 10,000. And if we're on a timeline where the techniques just keep working and AI keeps improving to a transformative level before the end of the decade, then between now and five years from now, AI is going to go from what it is today to being the number one economic, political, and social issue.
00:30:53
Speaker
The front page every day will have to do with AI. And that's a very long way from where we are now. When the o3 results were released, which showed that this new reasoning model paradigm was yielding really impressive results, that wasn't reported in any of the newspapers.
00:31:09
Speaker
In fact, the Wall Street Journal was running an article that day about how GPT-5 was behind and disappointing. Which is really missing the point, because even if GPT-5 is a bit disappointing and behind schedule, it doesn't matter: we've got this even better thing now that's completely taking off.
00:31:25
Speaker
That's reporting that totally misses the mark by focusing on the old paradigm. Yeah. And as you mentioned before, there's also a phenomenon where, if I ask one of the reasoning models an incredibly difficult problem in programming, mathematics, or physics, I'm not really in a position to accurately evaluate how well it's doing, simply because I don't know the domain well enough.
00:31:50
Speaker
And very few people are. I think it's true to say that very few people are good enough at physics, programming, and mathematics to accurately evaluate whether an output is genius-level. It's difficult to distinguish between outputs at a high level if you don't have a deep understanding of the domain.
00:32:13
Speaker
Totally, though I actually think the even bigger thing is that people are still only using the free version of ChatGPT, which doesn't even include o1. So they're actually still using a one or two year old model and saying, oh, it hasn't gotten better.
00:32:27
Speaker
Yeah, you always have to account for problems like that. That's true. Right. I think one of the things that could serve as a warning shot, or something that could make people much more interested in AI, is if they see robots moving around physically in their environment.
00:32:47
Speaker
Where are we with robotics? Do you think we are mostly limited on the hardware side or mostly limited on the software side?

Hardware vs. Algorithmic Challenges in Robotics

00:32:54
Speaker
Yeah, I haven't heard a clear answer to that. My super rough read is that algorithms of a bigger bottleneck. Making really good robotics is a much harder challenge in some ways than language models because for one thing, we don't have the data set and it's quite expensive to build the data and build a really large data set.
00:33:16
Speaker
That said, I have heard other people say there are still some hardware limits, around really precise motors, for example. If you think about how complex a hand is, it's not just the extremely subtle manipulation it can do, but also all the sensors we have in a hand. To hold an egg without crushing it, you need to be able to feel the exact pressure in your hand.
00:33:43
Speaker
So having all of these cheaply in a package is also a bit of a bottleneck, I think. But my main sense is that if we just had a big leap in algorithmic progress for robots, a lot more stuff would start working.
00:33:56
Speaker
How quickly do you think we can scale up our production of robots? One of the things about production that you mentioned is that as you mass-manufacture something, it decreases in price, sometimes quite radically.
00:34:11
Speaker
So there's a question of how quickly we can ramp up production to get those decreases in cost. It really depends on how good robot capabilities are at that time. In my post, I imagined that we had a sudden transition where humanoid robots start working.
00:34:29
Speaker
And then the question is: from there, how quickly could you scale it up? But that's not exactly what will happen in the real world. In reality, it'll be a more gradual thing, at least for a while, as things get gradually better.
00:34:42
Speaker
One thing I looked at to try to answer that question was imagining that car manufacturing capacity was converted to robotics. You can do a very rough back-of-the-envelope based on this, because
00:34:55
Speaker
a car is about a ton of industrial material put together, and a robot is about a tenth of that. Actually, a little bit less: you could say a humanoid robot is 80 kilograms.
00:35:08
Speaker
Maybe we'll actually have a bunch of smaller robots specialized for particular things, so they'll be more like 40 kilograms. Cars are about a ton and a half. You could say, well, robots will be more complex to make,
00:35:20
Speaker
so we shouldn't convert one to one. But even if you convert, say, half or a third, current car manufacturing capacity could produce something like a billion robots a year. That's a lot.
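A quick check of that estimate. The masses and the half-to-a-third conversion factor are from the conversation; the global car production volume is an added assumption:

```python
# Car-industry capacity converted to robot production (rough check).
cars_per_year = 90e6    # assumed global car production (assumption)
car_mass_kg = 1_500     # "about a ton and a half"
robot_mass_kg = 60      # between the 40 kg and 80 kg figures above
conversion = 1 / 3      # robots are more complex, so use a third of capacity

robots_per_year = cars_per_year * car_mass_kg / robot_mass_kg * conversion
print(f"{robots_per_year:.1e}")  # ~7.5e8, on the order of a billion robots a year
```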
00:35:38
Speaker
And we should also remind ourselves that modern cars are, in a sense, robots. They're much more complex than they were, say, 50 years ago. They contain a bunch of chips, a bunch of sensors. Think of a modern electric car that has all kinds of cameras on it and so on.
00:35:50
Speaker
So of course, robots are probably even more complex than that, but modern cars are complex. Yeah, though I also had someone point out that cars are also hard to manufacture because you're dealing with big, heavy parts, whereas with robots, you'd be dealing with much smaller and lighter parts. So that's one respect in which it's easier.
00:36:07
Speaker
I agree, the sheer complexity of, say, making a robot hand would probably be higher. So overall, I think it's not a crazy comparison. You have some estimates of how cheap robots could become in production costs, how cheap they could be to run, and so on, and those are quite interesting. Maybe you could tell us what world we might be in there.
00:36:31
Speaker
Yeah, the main estimate is just based on what you mentioned earlier: a typical industrial scaling curve is that roughly every time you double production, it becomes 20% cheaper.
00:36:43
Speaker
And that's what we saw with solar panels. It varies a bit by industry: it could be 40%, it could be 10%. But assume a similar cost curve for robotics, and say they cost roughly $100,000 now, though some of the most recent ones are actually a bit cheaper than that.
00:36:57
Speaker
If you then imagine a scale-up to a billion robots a year, they should cost at least 10x less. So that would be $10,000 per robot. And I think it could actually go beyond that.
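A sketch of that experience-curve arithmetic (often called Wright's law), using the 20% figure from the conversation; the current production volume is an added assumption:

```python
import math

# Wright's law: each doubling of cumulative production cuts unit cost
# by a fixed fraction (20% here).
def unit_cost(cost_now: float, units_now: float, units_later: float,
              learning_rate: float = 0.20) -> float:
    doublings = math.log2(units_later / units_now)
    return cost_now * (1 - learning_rate) ** doublings

# Assumed starting point: $100k robots at ~100k units/year (assumption).
print(round(unit_cost(100_000, 1e5, 1e9)))  # ~$5,000, consistent with "at least 10x less"
```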
00:37:10
Speaker
Another way of bounding it is to do a comparison with a car again. If a car costs about $10,000, but a robot is only a tenth as much material, you might think that in the long term a robot would be more like a tenth the cost of a car.
00:37:25
Speaker
Maybe a little more because of the complexity. So that would be a couple of thousand dollars per robot. And if you imagine those can work for a couple of years, 24/7, then it's only cents per hour for the hardware. The maintenance could be about the same again. Then there's electricity: electricity prices could go up a lot if we're making all these robots and all these AI chips, but at current electricity prices, that would be something like three cents an hour given current power consumption.
00:38:03
Speaker
So you end up with a total cost per hour of maybe 20 cents at full scale. Yeah. And again, this is a wild conclusion that is difficult to fully absorb.
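Putting the running-cost estimate together, using the episode's figures where given and labeled assumptions elsewhere:

```python
# Cost per robot-hour at full scale.
robot_cost = 2_000            # dollars: the "tenth the cost of a car" estimate
working_years = 2             # works 24/7 for a couple of years
hours = working_years * 365 * 24

hardware = robot_cost / hours     # ~$0.11/hour
maintenance = hardware            # "about the same again"
power_kw = 0.3                    # assumed ~300 W draw (assumption)
electricity = power_kw * 0.10     # ~$0.03/hour at an assumed $0.10/kWh

print(round(hardware + maintenance + electricity, 2))  # ~$0.26, the order of the 20 cents quoted
```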
00:38:14
Speaker
But if we imagine having a robot that's able to solve a bunch of tasks in the physical world, that's able to work 24/7 for, say, 20 cents an hour in running costs,
00:38:25
Speaker
or even, say, $1 an hour, that would be revolutionary, right? There would be an incredible amount of demand for that. I could use 10 of those robots just to do things around the house or to help me with things. So maybe this question is...
00:38:45
Speaker
It's dumb in some sense, but would there be demand for such robots, do you think? Do you think people would resist buying them out of nostalgia for human labor? Maybe they become illegal, maybe they're resisted by unions, and so on?
00:39:01
Speaker
In some sense, demand would be there, but do you think that, in practice, that demand would be allowed to be expressed in the market?

Security and Societal Acceptance of Robots

00:39:10
Speaker
It does seem like once we get to the point where there is a lot of automation and people are actually losing jobs on a big scale, both from AI and from robotics, there's going to be some type of huge backlash against that.
00:39:24
Speaker
And it seems hard to predict exactly what the result of that is. On the other side, there will be huge economic forces in favor. If it costs, say, $20 an hour to have a cleaner clean your house now, but a robot could do it for 50 cents an hour,
00:39:40
Speaker
people are going to really, really prefer the robot. Also, with the robot, you don't have to worry about privacy and everything. It can be available 24/7, and there are many other potential advantages.
00:39:51
Speaker
There are also some other disadvantages. It does seem like cyberattacks become much more dangerous when there are robots everywhere, because if someone can actually take over your robots, they could kidnap you in your own house while you sleep.
00:40:07
Speaker
That sounds absolutely horrifying. Of course they would be useful to have around the house, but a human factor like that, if it's a real worry that you might be murdered by your own household robot...
00:40:26
Speaker
This seems like a scene from Black Mirror or something. Do you think factors like that, worries like that, maybe legitimate worries from people, could hold back adoption of robots?
00:40:39
Speaker
Definitely, a lot of people probably would be creeped out or worried about that. But again, I think it's very hard to say how it will go, because it might just mean that people take cybersecurity way more seriously.
00:40:50
Speaker
And maybe if there are very few instances of this happening, people just get used to the robots being around and take them for granted. You also have to consider the other side, because humans are not perfectly safe either. There's a chance that a human cleaner steals from you, and all kinds of other things. So eventually people would just have to make an overall trade-off. It seems like with self-driving cars, people get used to them pretty fast.
00:41:19
Speaker
It's a bit weird at the start, and of course they can malfunction and accidentally kill you. But statistically, they're already about 10x safer than human drivers, and that would just improve over time. So it seems like at least in that case, it's a pretty clear win in favor of self-driving, I think.
00:41:37
Speaker
And that's perhaps a case study in adoption of technology, where people prefer to adopt it because it's just so convenient to ride in a self-driving cab.
00:41:49
Speaker
And if it's also safer, that's a win-win. I've become more interested in human factors limiting adoption of technology. Think of something like augmented reality glasses, or the kind of early-stage smart glasses you wear around.
00:42:04
Speaker
One thing that I think has prevented adoption of such glasses is just that it looks weird to wear them. It's not fashionable. Perhaps people are concerned about being recorded.
00:42:16
Speaker
Very down-to-earth human factors that are not predicted by a model of the pure economics of the thing. So yeah, I am
00:42:28
Speaker
becoming more interested in whether adoption of technology will be limited by human factors. But I think, as you mentioned, the economic incentive to adopt robots would be so enormous that these kinds of concerns would be swept aside, especially in manufacturing, right? Especially with robots for manufacturing goods.
00:42:51
Speaker
Yeah, or think about an industry like mining, or oil wells, something quite dangerous. But just to step back a bit, I do agree with the general point that in many jobs and industries, deployment of AI and robotics will be slowed down a lot by these types of concerns.
00:43:10
Speaker
And this is actually why I think we might be headed for, again, quite a weird world, where AI advances to very capable levels before most of the economy has actually changed at all.
00:43:23
Speaker
Especially if you can get this algorithmic feedback loop. You could have several years where most of the AI is being used to do AI research. So suddenly, in a couple of years, you've gone to superintelligent levels of AI, but most jobs are just continuing as they were before.
00:43:39
Speaker
And this is one reason why people might wake up quite late, to go back to our earlier point. Your daily life might seem exactly the same, but over at an AI lab, maybe a lot of the coding is automated, and that's generating enough revenue to pay for the training runs.
00:43:55
Speaker
But then suddenly OpenAI has basically superintelligent-level AI; it just hasn't been deployed yet. And then deployment could be very fast, because you now have extremely capable AIs that can help with the deployment.
00:44:07
Speaker
It seems like a scary world, right? It seems like we would want public information about the quality of the best available AI models, so that we have at least some time to react.
00:44:19
Speaker
But if everything is becoming more internal to the AI companies, maybe that's not happening. I wasn't even imagining that they're not being transparent. It's just that it's so hard to internalize until you see things hitting you in the face.
00:44:35
Speaker
And in this world, when I walk around the street, everyone's still doing their jobs, just like before. But superintelligence exists

Economic Pressures and Human Decision-Making

00:44:42
Speaker
somewhere. I might know that intellectually, but many people won't take it seriously until they actually see the real-world impacts.
00:44:50
Speaker
Yeah. There are also many jobs, I'm thinking of lawyers, doctors, and so on, where it might be the case that I can diagnose myself quite well using an AI model, but I still need a doctor to prescribe me medicine.
00:45:06
Speaker
Or I still need a lawyer to go through the formal steps of having a document delivered to court, and that can only be done by a human, and only judged by a human judge, and so on.
00:45:19
Speaker
How much do you think factors like that will play into adoption of AI? I think a lot. There will be some significant transition period where people are using AI advisors, but regulation, and people just not wanting to have AIs making decisions,
00:45:40
Speaker
will mean there are still humans in the loop for a lot of things. But over time, there's pressure on that. You could imagine a world where you say, well, every company still has to have a human board of directors who can officially veto things that the AIs do.
00:45:58
Speaker
But then that means those human decision-makers become the key bottleneck in the economy, because that's the one bit that can't be sped up by AI. So you end up with huge economic pressure to take them out of the loop on more and more things, so that you can unblock the whole cycle of production.
00:46:16
Speaker
Competitors will be thinking about replacing their board, so maybe you now need to think about whether you need to replace yours, and so on. It's standard competitive pressure: humans weigh in on less and less.
00:46:28
Speaker
Yeah. And the same with the lawyer case you mentioned: paying that human will not only slow things down, it's also an extra expense. The AI lawyer will be basically free, but without actual power in the legal system. And that might be the key issue.
00:46:49
Speaker
I'm not expecting this to be the case over the long term, and here long term might be 20 years, right? But as you talked about, I could imagine the AI economy, so to speak, moving at incredible speed, while the human economy is limited by the way we've been doing things, by law or by convention,
00:47:13
Speaker
for quite some time. In many countries, it is simply not legal to have an AI hand in documents to a court, and you certainly can't have an AI judge a case in court.
00:47:29
Speaker
And a change to something like that would have to go through parliament, and that takes years. This is just one example; in many industries, there are many examples like this.
00:47:44
Speaker
So if you agree with that picture, what does a world look like where we still have the legacy human systems, but AI is moving very fast?
00:47:54
Speaker
Well, I think it really remains to be seen how long that type of situation would actually persist. Because, as I was saying, there would be these huge economic incentives to take humans out of the loop on more and more things.
00:48:09
Speaker
If one country is able to do that better than another, that country could quickly get ahead economically. So I don't know whether that would actually be stable for a 20-year period.
00:48:20
Speaker
It might be more like a couple of years. Is that the way you expect things to go? That a system like that is unstable and collapses under competitive pressures rather quickly?
00:48:33
Speaker
I do think it's really hard to say, because the point on the other side would be that there does seem to be quite a homogeneous global elite culture in some ways. So the idea that pretty much all countries would just not want to go down this path of letting the AIs make all the decisions shouldn't be totally off the table. Even though it's not an equilibrium from a strictly game-theoretic point of view,
00:48:57
Speaker
it does seem like the world does sometimes manage to coordinate into situations like that. Agreed. Okay, I want to talk about how an ordinary person can prepare for AGI. You have an excellent essay about this on your Substack.

Preparing for AGI's Future Impact

00:49:12
Speaker
So first of all, let's get some issues with this question out of the way, because people in my audience will wonder whether it even makes sense to ask it. Preparing for a world of AGI is like preparing for the Industrial Revolution, but an Industrial Revolution that happens in three years instead of however long it took.
00:49:33
Speaker
The worry is that the world is going to be transformed to such an extent that your actions simply don't matter. Why isn't that the right frame for thinking about this question? The main thing I would say is that it might be correct. It might be that we're just completely powerless in the face of this.
00:49:50
Speaker
And just to clarify, I'm talking here from a personal perspective, not about what we should do socially to tackle this. There's a lot that society could do to better prepare. But from an individual point of view, one way this sometimes gets
00:50:03
Speaker
summed up is as death or abundance. Either there's an existential risk and we all die, and there's nothing I can do to not die in that scenario. Or it's a massive abundance utopia where everyone has more than everything they need, so nothing I do really makes any difference to that.
00:50:22
Speaker
My main pushback against that is: yes, there might be nothing we can do, but from a personal preparation point of view, what you should focus on is the scenarios where what you do now can make a difference.
00:50:36
Speaker
Unless you think they're 99% of the probability mass, you can ignore the scenarios where you just can't affect the outcomes. All your chips should be put into preparing for the scenarios where what you do now can have some effect.
00:50:53
Speaker
Also, by personally preparing, you might be able to put yourself in a better position to help the world. So this is not an exclusively egotistical idea. This is also about creating people who are able to adapt to the changes ahead and who might be able to help the world adapt too.
00:51:18
Speaker
So let's dig into your advice here. Your first piece of advice is to find the people who are in the know, to seek out the people who have some clue what's going on. How do you do that, and where are those people? The first problem, of course, is that there's disagreement about who's in the know here.
00:51:36
Speaker
How do you go about finding those people, setting aside exactly who they are? In some ways, there's a very deep question there about who you should trust.
00:51:47
Speaker
But I do think there are a lot of people who are tracking AI very closely. There are people who've been prescient in the past, and it makes sense to at least read what those people are saying.
00:51:59
Speaker
But it's even better if you have some people you know personally who are more in the loop about this type of thing. Many of the past guests on your podcast would be qualified here. As for the things I read: obviously I listen to Dwarkesh.
00:52:14
Speaker
I read Slate Star Codex, now Astral Codex Ten, Zvi's newsletter, the 80,000 Hours podcast. There are actually a lot of great Substacks now that are
00:52:27
Speaker
tracking AI. And then knowing some people in the industry, especially people who take the more transformative scenarios seriously, because I think that's still a big thing lacking from the broader AI discourse.
00:52:40
Speaker
Even if you can look at these trends and see big changes coming, it might be difficult to act on that information. I wrote to you in preparation for this conversation about the fact that I learned about COVID somewhat earlier than society at large, but felt like I couldn't really act on the information.
00:52:59
Speaker
Maybe that's just a failure on my part to act with conviction when I have some information. But it seems to me that there are a lot of people, at least I get emails from people like this, who think big changes are about to arrive but feel there's probably nothing to do about it, nothing to act on. Yeah, it could be that with COVID you just got unlucky: you had the information, but it didn't turn out to be useful.
00:53:30
Speaker
But I think we should have a very strong prior that, in general, more information is better, even if in a particular case it doesn't work out. And there was some stuff that people could do in COVID.
00:53:40
Speaker
I didn't manage to do this myself, but a lot of people managed to hedge their investments and save a lot of money before the downturn. I did manage to move to the countryside, which then made
00:53:52
Speaker
the next year much more pleasant for me than if I'd stayed in London, and I got that done in time. And I actually think if I had been able to act on COVID even a week earlier, it would have been valuable.
00:54:03
Speaker
I was running 80,000 Hours at that point, and we prepared a lot of material about what was going on with COVID and how you could personally help with it. But we didn't quite get it out early enough to get as much attention and be as useful as it could have been. If we could have done that a week earlier, I think it would have been much more useful to people.
00:54:22
Speaker
So I almost wish I'd acted a bit sooner in the COVID case. That's from a social impact perspective rather than a personal prep one. Yeah. Another piece of advice is to save as much money as you can.
00:54:36
Speaker
Why is that useful? Sorry to break in, but you often hear something like the opposite advice: if we get AGI, perhaps even superintelligence, money will become irrelevant, right? We'll live in such abundance that money isn't the problem anymore.
00:54:54
Speaker
Maybe talk about why that is not exactly the case. There are a few things to say about this. One is that if you put some more uncertainty into the equation, that pushes you back in favor of saving again. If you're certain it's death or abundance, then yes, obviously spend all your money now.
00:55:12
Speaker
But we're not 100% certain that AGI will arrive soon. And if you spend all your savings but AGI doesn't happen soon, you're significantly worse off, because now you don't have a pension and so on.

Economic Dynamics and Wealth Distribution with AI

00:55:27
Speaker
Whereas if I've just spent 20% more money per year over the next five years, that's not going to make much difference to my well-being; maybe I go on an extra holiday or something. If you consider that asymmetry, you can model this formally. There's this thing called the Merton share, from Merton's portfolio problem, which is about how much to save and consume given your discount rate and so on. And even if you put in quite a big discount rate because of the chance of money not being useful anymore, it doesn't say you should spend all your money. It just says you should spend a little bit more.
00:56:00
Speaker
Maybe 1% or 2% more compared to what you would have done normally.
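To make that concrete, here is a minimal sketch of the consumption side of Merton's problem (CRRA utility, infinite horizon). The parameter values, including the size of the extra "AGI" discounting, are illustrative assumptions for the example, not figures from the conversation.

```python
# A minimal sketch of the consumption rule from Merton's portfolio problem
# (CRRA utility, infinite horizon). All parameter values are illustrative
# assumptions, not figures from the conversation.

def merton_consumption_rate(rho, gamma, r, mu, sigma):
    """Optimal yearly consumption as a fraction of wealth.

    rho   -- subjective discount rate (raise it to model the chance
             that money soon stops being useful)
    gamma -- relative risk aversion
    r     -- risk-free rate
    mu    -- expected risky-asset return
    sigma -- risky-asset volatility
    """
    # Risk-adjusted return on the optimally invested portfolio.
    risk_adjusted = r + (mu - r) ** 2 / (2 * gamma * sigma ** 2)
    return rho / gamma + (1 - 1 / gamma) * risk_adjusted

baseline = merton_consumption_rate(rho=0.02, gamma=2.0, r=0.02, mu=0.06, sigma=0.18)
# Add 5 percentage points of discounting for the chance money stops mattering:
agi_adjusted = merton_consumption_rate(rho=0.07, gamma=2.0, r=0.02, mu=0.06, sigma=0.18)

print(f"baseline spending:    {baseline:.1%} of wealth per year")      # ~2.6%
print(f"with AGI discounting: {agi_adjusted:.1%} of wealth per year")  # ~5.1%
```

The extra discounting raises yearly spending only by the discount-rate increase divided by risk aversion, a couple of percentage points of wealth, which is the "spend a little more, not everything" conclusion.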
00:56:12
Speaker
But then the even bigger issue is that there could be a third scenario where money is still useful in the future. Firstly, AI will probably make returns on investment go up hugely, because there's going to be the mother of all investment booms as we build out all the infrastructure to run AI and robotics.
00:56:26
Speaker
So capital will be really scarce for a while, and that means the returns on capital will be really high. If you save money now, that could turn into 100 times more money post-intelligence explosion, maybe a lot more.
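As a rough illustration of how savings could compound if returns spike, here is a toy calculation; both return paths are assumptions chosen for the example, not forecasts from the conversation.

```python
# Toy compounding illustration: how $10k of savings grows under normal
# returns versus an assumed investment-boom scenario. Both return paths
# are illustrative assumptions, not forecasts.

savings = 10_000
normal = savings * (1 + 0.07) ** 10  # ~7%/yr for a decade -> about 2x
boom   = savings * (1 + 0.60) ** 10  # assumed 60%/yr boom -> about 110x

print(f"normal returns: ${normal:,.0f}")  # ~$19,700
print(f"boom returns:   ${boom:,.0f}")    # ~$1,100,000
```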
00:56:44
Speaker
So firstly you're getting way more money, and then on the other hand, you'll be able to buy things you just can't buy now. However much money I have, I can't buy life extension technology today, but maybe that will be possible in 10 or 20 years.
00:57:01
Speaker
A key way of seeing the situation: for all the goals you might have in life, do they completely flatten off at a certain amount of money, or can you keep buying more of what you value with additional resources?
00:57:19
Speaker
Yeah. Right now, there is quite a big difference between being a millionaire and being a billionaire in terms of your lifestyle, or your ability to achieve your values more broadly. It's not just about your comfort; you might have preferences for things like social goods, or life extension, which is maybe the best example: if you can buy more years of healthy life, many people would want to buy as many of those as they could.
00:57:49
Speaker
It's interesting that in today's world, there are some goods where I can basically have the same smartphone as the richest people in the world. I can read the same books.
00:58:02
Speaker
I can watch the same TV shows. Of course, they can have much more influence on the world than I can. But it's true that something like life extension is a technology that might require more money in the future, and there might be a separation between the rich and the not-rich in our ability to afford it, unless it becomes much cheaper over time as it's more widely available. The hope on the other side would be that things just get exponentially cheaper, so even if you're poor, you just wait a few extra years. But there are some things that are just scarce. Like land on the Earth: there's a fixed amount,
00:58:41
Speaker
and however much money you have determines how much land you would be able to have in the future. And land could become much more expensive as well, because land could be used for robot factories. Yeah, I've heard this line of reasoning before. It's a quite simple economic argument: land is scarce, and therefore it's limited in a way that
00:59:04
Speaker
many other things you can buy aren't, and so it'll become much more expensive. But is it the case that something like farmland would become much more valuable, exactly because you can use farmland to build robot factories?
00:59:18
Speaker
Whereas in today's world, we have certain cities where land is incredibly expensive because of various social factors: regulations, limits on how much you can build, and so on.
00:59:34
Speaker
Would you expect real estate in San Francisco to skyrocket? Or is it more something like land in the middle of Arizona, where you can get a lot of sun and build factories?
00:59:46
Speaker
I've been thinking about the buying versus renting question recently, in light of AI but also in general. I would treat these as two separate markets. The rural land one would ultimately be driven by how much solar energy is falling on that land,
01:00:02
Speaker
and also by whether that land could be converted to some much more productive use in the future. Land in the center of a city is essentially a luxury consumption good.
01:00:13
Speaker
So the question there is: will people with future AI wealth want to have a house in the center of the cities we have today?
01:00:24
Speaker
That seems quite likely to me. I do think there could be a move out to the country, to being more spaced out, when it becomes much cheaper to get around and people can build really fancy new houses very cheaply out in the countryside.
01:00:43
Speaker
There would be no economic reason to be in the city anymore; your work isn't there. But there would still be social reasons. If you think of land along the River Seine in Paris, you can see why people would still want to visit and spend time in that type of environment even if we were far wealthier. Maybe even more so than today, because as you get wealthier, you probably value leisure time and social time more than we do now.
01:01:10
Speaker
That makes sense. So land in very desirable cities would become like a luxury good: luxury clothing or handbags or expensive cars, something like that. And it's also finite.
01:01:23
Speaker
Yeah, exactly. It's finite in a way that those aren't. Okay, which skills would you expect to have the most value going forward? Of course, this is almost an impossible question, and it's bound to change radically over time. But do you have a guess as to which skills will be valuable?
01:01:45
Speaker
Like you say, it's a very big question. If we are actually heading towards this world of intelligence explosion, then eventually pretty much everything could get automated. From a personal planning point of view now, the question is more about how you stay one step ahead of the current wave of automation and earn a bunch of money while that's happening, which you then save and can live off even if all the skills get automated.
01:02:12
Speaker
So the question is: over the transition period, what will be most valuable? A nice way to sum it up is that you either want to get as close to AI as possible or as far away from AI as possible.
01:02:24
Speaker
Close to AI means you're working on improving AI or deploying it, and you can see those skills are already extremely well compensated.
01:02:36
Speaker
They're pretty difficult skills to have, but clearly they're going to be very valuable.

Valuable Skills and Human Roles in an AI Future

01:02:41
Speaker
That's basically because they're a complement to AI automation. On the other side, it's things that the AI is going to be worse at.
01:02:48
Speaker
Those things will become the bottleneck in production, so their value will also go up. It's something people often don't appreciate: all the stuff that AI is bad at will increase in value over time as AI gets better, because those will be the things still needed from humans. Figuring out what those are is harder, but we have touched on this already in the conversation.
01:03:12
Speaker
Any task that's amenable to reinforcement learning, we're going to see AI get a lot better at in the next couple of years. Also, any task where you can take a big data set of examples and use that to pre-train a model. Those will be the things that are best covered.
01:03:26
Speaker
The things that will be hardest are the cases least like that: much more vague, long-time-horizon, ill-defined tasks.
01:03:40
Speaker
And what would be examples there? A lot of entrepreneurship and management: high-level planning, coordinating lots of things, figuring out what to build in the first place, and then setting up lots of AI systems to actually do all the well-defined chunks.
01:03:56
Speaker
Basically, breaking things into the well-defined chunks in the first place. A lot of social and relationship work could also be like this. That's also an area where we might have strong preferences to do it with a person,
01:04:09
Speaker
at least for a while. So any jobs where relationships are really a key part of it. Examples I often hear about include jobs in the physical world, dealing with people, involving a lot of variety of tasks.
01:04:26
Speaker
Something like a physiotherapist or nurse or tour guide, where it's a mix of people skills and moving around physically, and you don't know which moves you're going to make when you come into work that day.
01:04:43
Speaker
Yeah, unpredictable environments. Partly you're also pointing out that robotics is lagging cognitive knowledge work, so anything involving physical manipulation would also be good. The trouble with that one, as we were discussing, is that it could change quite fast. But there could be a transition period. Carl Shulman talks about this, I think in his episode with Dwarkesh: there could be a transition period where loads of people are employed with an AI just telling them what to do.
01:05:13
Speaker
They're building a factory, say, and their physical manipulation skills become the most valuable thing they offer for a while. But that only lasts until really good robotics is developed.
01:05:27
Speaker
And if you had something like that, you would probably also be able to record their movements and use that as training data. So again, it doesn't seem like a situation that will hold for a long time.
01:05:41
Speaker
Yeah, I think that seems right. Whereas someone who does a luxury travel experience, where they take you to a private kitchen and you taste lots of food with them in a Moroccan tent in the desert, maybe people would really value that type of experience.
01:06:04
Speaker
They'd really want the human touch; it would be a big part of it. Yeah. And this might also be the case for, well, this is not really a career path available to many people, but being a famous person, right?
01:06:16
Speaker
Famous people can often get paid just for being themselves, and that's something that can't really be outsourced to AI. You are beginning to see automated influencers, where they lend their physical appearance and voice to be recreated, and then fans can interact with a model of them.
01:06:38
Speaker
But I still think there's probably tremendous value in being a person who's known, where people want to meet the actual famous person. Yeah, and that's a really interesting general phenomenon: if you think of AI as making
01:06:55
Speaker
labor less valuable, and eventually robotics as making physical manipulation less valuable, that then makes all the other resources you could have more valuable, because those remain important resources that aren't being cheapened by AI.
01:07:09
Speaker
We've talked about capital as one, because we'll still need capital to build all the robots and factories and chips. But others are resources like relationships or fame, which potentially become a bigger part of the economy over time.
01:07:26
Speaker
And remain valuable. Yeah. Another one you mentioned is citizenship: you recommend that people get citizenship of a country that has, or will have, a lot of AI wealth.
01:07:42
Speaker
My first question there: isn't the process of becoming a U.S. citizen, say, extremely slow? I often hear about people who have been living there for 15 years and contributed to the U.S. economy, but aren't actually U.S. citizens yet.
01:07:57
Speaker
So is this something that matters on the timescales we're talking about? I think it's quite a bit faster than that. I forget the exact time horizon, but if you enter now on a work visa, I thought it's more like a five-to-seven-year period, and then you can apply for citizenship.
01:08:13
Speaker
Yeah, you're probably right about that. The 15 years is an extreme example. I mean, immigration is terrible, so I'm sure there could be things that knock someone off that timeline.

National Economies and Welfare in an AI World

01:08:24
Speaker
That's if everything goes well.
01:08:25
Speaker
And then it comes down to your timelines. It would only be 2030 by the time you might be able to start applying, so I think there could still be time. I also think that if an intelligence explosion is happening and you already have work permission in the US, the explosion itself will take several years. You first have to get to AGI, or AI that can do AI research, and then you have the whole intelligence explosion.
01:08:52
Speaker
And would you be thrown out of the country? Hopefully not, after you've been there that long. So there could still be time. I'm not doing this personally, but that's partly because I'm too lazy; it's too much of a personal sacrifice to move to the US now. Maybe I will regret this.
01:09:14
Speaker
The question of citizenship is also interesting because your citizenship determines your piece of the cake in a national economy, and countries that do well in the run-up to AI or AGI
01:09:31
Speaker
will be able to redistribute more in absolute terms just because they'll be so much more wealthy. This is, of course, speculative, but do you expect welfare programs to hold during a transition to AGI?
01:09:47
Speaker
Or do you expect that these programs won't be able to honor the obligations they have to citizens? In this world, the economy is growing very fast, so I think it actually becomes easier to honor your obligations.
01:10:00
Speaker
My best guess is there would still be significant welfare. One factor is inertia. The US, I forget the exact figure, but I think it taxes something like 30% of GDP,
01:10:11
Speaker
and a lot of that essentially ends up in welfare programs; that's the biggest federal expense. So if that just carries on as it is, you actually end up with a lot of redistribution.
01:10:22
Speaker
But the even more important point is there would be enormous political pressure for this. If everyone is having their wages pushed down by AI while a small number of tech elites become trillionaires, people are really going to want to tax that AI wealth and not just let everyone starve. So you'd only really get the bad scenario where no one gets any redistribution if there were some very locked-in authoritarian government able to just ignore the will of its population.
01:10:55
Speaker
But in a country like the US currently, it would be politically untenable. I suppose if the change was fast enough, maybe it could happen. Yeah, or if power was concentrated enough, say in one or two companies, perhaps even one company reaching superintelligence first and
01:11:17
Speaker
becoming basically masters of the universe before the government is able to respond. Yeah, and then we have a lot of problems. Of course.
01:11:28
Speaker
You advise that we should make ourselves more resilient to crazy times. This is more easily said than done, I think. We've now lived through the COVID times, which were somewhat crazy, but not anywhere near as crazy as I would expect an intelligence explosion to be.
01:11:48
Speaker
What lessons have you taken from trying to be resilient during COVID? Someone once described the intelligence explosion to me like this: imagine knowing that in two years, something like COVID is going to start,
01:12:03
Speaker
like the first few weeks of COVID, and then it will just never stop. It's not a one-or-two-year thing; it gets faster and faster, maybe until everything is totally unrecognizable. As a frame for how to spend the next couple of years, that can be quite useful.
01:12:18
Speaker
I don't have anything super innovative to say about how to be more resilient. I'd just say: do the normal basic things. Have healthy routines that help you be less stressed.
01:12:31
Speaker
Make sure you get lots of time with friends and exercise. Finding a good therapist is helpful, or some type of coach you can talk to about things. Find things that help relax you, whatever they are.
01:12:45
Speaker
And having a nice environment. Personally, I like the idea of being based in the countryside through a lot of this, because I feel I would be less stressed: there would be nature, and I would be able to tell myself that
01:12:58
Speaker
if there was a bio threat or a nuclear threat, I'd be a bit safer. Those types of things would be the main ones on my mind. There's maybe a trade-off between how good you feel in your everyday life, how relaxed you're able to be, and your level of engagement with the world. One way of relaxing is to disengage, right?
01:13:21
Speaker
Now I want to walk around in my garden, talk to my friends in real life, take walks. It's too stressful to follow what's happening in AI, too stressful even to follow the news.

Managing Information and Personal Resilience

01:13:34
Speaker
Is there a strategy for strategically engaging with the world, getting all the actionable information you want, and then having periods of disengagement, so that you're not in this loop of scrolling social media with the feeling that you're productively getting new information when really you're just stressing yourself out?
01:14:03
Speaker
How to do that practically will differ from person to person, but thinking about exactly the things you're saying seems very useful. How do you get information efficiently? Rather than generally scrolling Twitter, can you find five sources that you think cover the basics and just read those once a week, or
01:14:23
Speaker
once at the end of each day? Batching is really big, though hard to do in practice because this stuff is so addictive. The more you can have periods of true rest where you're actually unplugged, and then periods where you engage, the better. How to do that will vary a lot by person. Do you want a Sabbath-type day where you take one day fully off the phone each week?
01:14:46
Speaker
Or do you prefer, say, meditation retreats, taking a whole week off totally unplugged, as I sometimes do? What type of routine works will vary a lot by person.
01:14:57
Speaker
One issue here is that, as I expect things to go, many things will feel like the one exception, the one emergency you absolutely need to follow.
01:15:09
Speaker
But there'll always be three of those things happening at the same time. So there's a question of how you stick to your systems and keep a sense of proportion about how big a deal each issue is. Maybe I should be more concrete here. What I'm imagining is something like:
01:15:30
Speaker
OpenAI announces a new breakthrough. You try the model; it's exceptional. Two weeks later, China decides to invade Taiwan. Then there's a new open-source model that's perhaps better than the model from OpenAI. You're not able to sit down and understand what's going on before the next thing happens.
01:15:50
Speaker
It seems we're just not equipped to productively process the amount of information we're getting at the speed we're getting it. So do you have to limit the information you get to an extreme degree in order to be productive?
01:16:07
Speaker
I think even you saying all this out loud is already helpful for people. Just imagine this is the world we're going into, then think about how you might respond at the time, and what you could do now to make yourself better prepared to navigate it.
01:16:22
Speaker
In the scenario you were just describing, having some type of good information network would be very helpful. Ideally, you want to be able to just ask someone, okay, how good is the OpenAI model really? And they can basically tell you. I think that's one big piece of navigating that type of thing.
01:16:40
Speaker
The other one: people do this now, where they follow random crises in the news that they can't do anything about, and then they feel bad. I guess this will just become a much bigger issue. It's about always asking yourself what you are personally able to do, both for your own goals and from a social impact point of view,
01:17:05
Speaker
and really trying to focus on figuring out those questions rather than just generally following things. I think that's very good advice. I had an experience like you just described with Russia's invasion of Ukraine, where I was following along,
01:17:19
Speaker
unable to do anything, but feeling like I needed to follow along. That's quite unpleasant and also just not productive for the world; you're not actually helping by following along. There's so much information out there about everything now that it's easy to follow along by the minute in these kinds of situations.
01:17:35
Speaker
One particular thing on that: I find Metaculus very useful for these types of things, because often there's some key parameter that matters. During Ukraine, I was trying to figure out the chance that London gets nuked. There were forecasts looking at this, and I could see that if that was spiking up, then maybe I should leave town. But otherwise I could avoid following the news, besides tracking that forecast.
01:17:59
Speaker
And there's this really cool group called Sentinel, run by Nuño, who track a bunch of different potential catastrophes and do a roughly weekly update on them.
01:18:10
Speaker
Yeah, that also seems useful to follow. You get everything you need to know without reading headlines. You just look at this number that, at least in theory, has condensed all the available information into one actionable figure for how big a deal something is.
01:18:28
Speaker
That one was actionable to me, but it would depend: if I was Trump, I would be tracking very different metrics because I'd have different goals. Yes, of course. You also advise us to prioritize things we want to have in place before we get to AGI.
01:18:47
Speaker
Actually, as we're speaking, I'm still having trouble understanding exactly what you mean there.

Delaying Actions in Light of AI's Advancements

01:18:53
Speaker
Is it that you want to have certain experiences that are only available before AGI? What types of things would you advise us to have in place before AGI?
01:19:04
Speaker
I'm trying to point at a very high-level heuristic, which is: if there's something that AI would be able to do much better than you in five years' time, then you should delay doing that thing for those five years.
01:19:17
Speaker
I think that's a big thing. An example from my own career planning: I was wondering, should I write more about AI or should I write more about effective altruism?
01:19:28
Speaker
And I thought, well, clearly I should write about AI now, because if we're on the brink of an intelligence explosion, that would be super valuable. And if we're not, then I can always write about effective altruism later, in the more normal timeline.
01:19:41
Speaker
So that was a case where I thought it was better to delay the effective altruism writing. Maybe another personal-life example, and this one's a little controversial: suppose that in a normal world you would be indifferent between starting a family now and starting a family in five years. Many people aren't in that situation; waiting five years would actually be a big cost.
01:20:02
Speaker
But supposing you're relatively neutral about it, then it does seem quite tempting to me to make that delay. Because if we're in the AI-soon world, there could be all these very urgent things you want to do to prepare, like earning more money, or maybe you want to work on AI safety and help.
01:20:21
Speaker
It could be the most impactful time in history, so it really makes sense to focus on social impact over the next five years. And you might also want to see what's going to happen before having a family, to get a better sense of whether it's a good or a bad scenario.
01:20:35
Speaker
So that was one where that type of thinking applies: what urgent stuff will put me in a better position before AI, versus things that could theoretically be done later?
01:20:49
Speaker
I think it's an interesting thing to reflect on. In that vein, there are also projects it might make sense to abandon. For example, I think you mentioned to me in preparation for this conversation the question of whether you should spend years writing a book. Maybe some of the same reasoning applies to whether it makes sense to start out right now trying to become a mathematician.
01:21:14
Speaker
I don't actually know whether the situation there is so extreme, but I could imagine a world in which AI in a couple of years is just fundamentally better than humans at mathematics. So this is also about abandoning projects, correct?
01:21:29
Speaker
Yes, exactly.

Scaling Challenges and AI's Growth Timeline

01:21:31
Speaker
There could also be a role for the thing you just said, the bucket-list thing: if you think there's a chance it does all go badly and these are the last five years, maybe there are also some things you want to do before that.
01:21:43
Speaker
There are a bunch of different framings here that are all useful to think about. Last question here. You write about how the intelligence explosion is likely to begin in the next seven years,
01:21:57
Speaker
and that if it doesn't, it will take much longer; also that we will have much more information about which world we're in within the next three years. Why can we make statements like that with such precision?
01:22:11
Speaker
Which curves or trends are you looking at? Most fundamentally, AI progress is being driven by more compute, because more compute means you can run more AIs and do bigger training runs.
01:22:25
Speaker
It also means you can do more experiments to improve the algorithms. And secondly, by more labor going into AI research: more AI researchers, human ones. Both of these things are increasing very fast now, and we're getting very fast AI progress.
01:22:40
Speaker
But if you project these trends forward, then around 2030, the exact time depends on the bottleneck, but let's say between 2028 and 2032, it just becomes very hard to maintain the current pace of increase in both of those things.
01:22:55
Speaker
So the amount of compute and algorithmic progress will start to flatten off around that point. It could be quite a gradual slowdown, in which case scaling could last well into the 2030s, but at a slower rate.
01:23:08
Speaker
Or it could be a relatively quick diminishing, say if profits on the AI models aren't large enough. People might say, well, we're not going to buy the next round of chips to scale, so we're stopping here. That could also happen.
01:23:19
Speaker
It's a bit weird that all of the bottlenecks seem to roughly line up around 2030. I think current rates can be sustained for the next four years relatively confidently.
01:23:33
Speaker
Then the four years after that, 2028 to 2032, are less clear, probably slowing. That's where the precision comes from. There is also the possibility that we just get another paradigmatic breakthrough, like deep learning itself.
01:23:49
Speaker
Maybe that's better thought of as something that could happen at any time. So if we got that in 2030, maybe everything carries on for a while in a new paradigm? So it's basically: either the current paradigm stagnates, and we can see that it's not sustainable to keep feeding the inputs to scaling at the current rate for many, many more years,
01:24:13
Speaker
or we get something like an intelligence explosion rather soon. The chance of finding a new paradigm depends on how many people are doing AI research.
01:24:26
Speaker
So to some extent that just fits into this model: if we have an exponentially increasing AI research workforce, then the chance of finding a new paradigm is roughly constant per year.
01:24:37
Speaker
But if the workforce stops increasing, then the chance of finding a new paradigm decreases a lot too.
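Here is a toy model of that claim. The key assumption, mine for illustration rather than anything stated in the episode, is that each doubling of the research workforce carries a fixed chance of a paradigm-level breakthrough, so the yearly hazard tracks the workforce's growth rate rather than its size.

```python
import math

# Toy model of paradigm discovery. The fixed chance-per-doubling assumption
# is mine, for illustration: if each doubling of the AI research workforce
# carries a fixed breakthrough probability, then exponential growth gives a
# constant per-year hazard, and a plateauing workforce gives a falling one.

def yearly_breakthrough_chance(growth_rate, p_per_doubling=0.10):
    """Per-year chance of a paradigm breakthrough.

    growth_rate    -- continuous yearly growth rate of the workforce
    p_per_doubling -- assumed breakthrough chance per workforce doubling
    """
    doublings_per_year = growth_rate / math.log(2)
    return 1 - (1 - p_per_doubling) ** doublings_per_year

print(f"{yearly_breakthrough_chance(0.50):.1%}")  # fast growth: ~7.3%/yr
print(f"{yearly_breakthrough_chance(0.05):.1%}")  # plateauing:  ~0.8%/yr
```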
01:24:53
Speaker
And just to make the point about compute more concrete: GPT-6 will probably cost maybe $10 to $30 billion to train, and it will cost that around 2028. We're pretty close to having chip clusters that could do that training run, just given what's already in the pipeline. But going to GPT-7 would then cost another 10x more. Then we're talking about over $100 billion, which is still affordable but getting much harder.
01:25:13
Speaker
That's like a whole year of Google's profits to fund that one training run. On the scale of the future of human civilization, it's not that much money.
01:25:24
Speaker
Interestingly, it would be bigger than the Apollo program. As a percentage of GDP, it's getting up to Apollo and Manhattan Project levels.
01:25:35
Speaker
There are a few other things that could stop you. By that point, pretty much all of TSMC's (Taiwan Semiconductor's) leading nodes will be used for AI chips.
01:25:49
Speaker
That means we can't create more AI chips unless they actually build new fabs, which isn't the case now. Right now, they're just swapping mobile phone chips for AI chips, so that can be done very easily.
01:26:00
Speaker
Also, something like 4% of US electricity would be used on data centers by, say, 2028. But if you want to go another 10x, you have to go to 40% of US electricity.
01:26:13
Speaker
So you have to build a lot of power stations, which is totally doable; you can just build gas power stations in two or three years.
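To see why the bottlenecks cluster around 2030, here is a back-of-the-envelope sketch extrapolating the cost and power figures just mentioned. The starting values and the assumption of a 10x scale-up every two years are rough, illustrative choices, not a forecast.

```python
# Back-of-the-envelope extrapolation of the scaling squeeze, using rough
# figures from the conversation (~$20B training run and ~4% of US
# electricity around 2028) and an assumed 10x scale-up every two years.

cost_2028_usd = 20e9     # assumed GPT-6-class training run (~$10-30B)
power_share_2028 = 0.04  # assumed data-center share of US electricity

for step, year in enumerate((2028, 2030, 2032)):
    cost = cost_2028_usd * 10 ** step
    power = power_share_2028 * 10 ** step
    print(f"{year}: ~${cost / 1e9:,.0f}B per run, ~{power:.0%} of US electricity")

# 2028: ~$20B,    ~4%   -> feasible with what's already in the pipeline
# 2030: ~$200B,   ~40%  -> needs new fabs and lots of new power stations
# 2032: ~$2,000B, ~400% -> straight-line scaling stops making sense
```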

Conclusion and Follow-up Resources

01:26:18
Speaker
Yeah. And there'll be huge economic incentives to do it if we're on this trajectory.
01:26:25
Speaker
But it's definitely becoming a lot harder than it is now with each order of magnitude of scaling. It's exciting and scary to see what's going to happen here. Do you want to refer listeners to your Substack? How can they find out more about what you're thinking about?
01:26:41
Speaker
Following my Substack or me on Twitter is the best place to stay up to date. The guide on what you can do about AI that I'm writing will be published, and I'll also be writing about a lot of the other topics we've talked about.
01:26:55
Speaker
Fantastic. Thanks for chatting with me. It's been a lot of fun. Great, thanks for having me.