
Tom Davidson on How Quickly AI Could Automate the Economy

Future of Life Institute Podcast
Tom Davidson joins the podcast to discuss how AI could quickly automate most cognitive tasks, including AI research, and why this would be risky.

Timestamps:
00:00 The current pace of AI
03:58 Near-term risks from AI
09:34 Historical analogies to AI
13:58 AI benchmarks VS economic impact
18:30 AI takeoff speed and bottlenecks
31:09 Tom's model of AI takeoff speed
36:21 How AI could automate AI research
41:49 Bottlenecks to AI automating AI hardware
46:15 How much of AI research is automated now?
48:26 From 20% to 100% automation
53:24 AI takeoff in 3 years
1:09:15 Economic impacts of fast AI takeoff
1:12:51 Bottlenecks slowing AI takeoff
1:20:06 Does the market predict a fast AI takeoff?
1:25:39 "Hard to avoid AGI by 2060"
1:27:22 Risks from AI over the next 20 years
1:31:43 AI progress without more compute
1:44:01 What if AI models fail safety evaluations?
1:45:33 Cybersecurity at AI companies
1:47:33 Will AI turn out well for humanity?
1:50:15 AI and board games
Transcript

Introduction to AI Risks and Takeoff Speeds

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Tom Davidson. Tom is a senior research analyst at Open Philanthropy, where he works on potential risks from advanced AI. Tom, welcome to the podcast.
00:00:15
Speaker
I guess it's a pleasure to be here.

AI Progress and Advancements Overview

00:00:17
Speaker
So we're going to spend a lot of time on your model of takeoff speeds where you come to some pretty wild conclusions in my opinion. But first I think it would be great to kind of situate your model in your broader views. So maybe you could tell us a bit about your view of AI progress and AI risks in general in broad terms. So in broad terms I think AI progress in the last 10 years has been extremely rapid.
00:00:46
Speaker
There's been massive progress in terms of analyzing images and videos, in terms of creating images and videos, game playing, natural language ability, coding: very diverse, broad domains all seeing very rapid progress, and all within a very similar paradigm, the deep learning paradigm, where progress has been fueled by training larger neural nets with more compute and by improving the deep learning algorithms we're using to train those things.
00:01:17
Speaker
And I think progress in the last four years has been especially rapid. So four years ago, GPT-2 was released. If you haven't played around with GPT-2, I really recommend you do so. It's a brilliant way to give yourself an intuition pump for just how fast this field has been moving. GPT-2 is, by today's standards, a very limited
00:01:40
Speaker
language model, a conversational chatbot. It can maybe string together a few sentences, but once it's written a paragraph, it's very clear it doesn't know what it's talking about. It goes off topic. It can't really understand questions you ask it. It's clearly very, very limited as an AI and has a very limited, brittle understanding of the world.
00:02:01
Speaker
That was four years ago.

Anticipating Future AI Progress

00:02:02
Speaker
This year, GPT-4 was released. GPT-4 was probably actually trained in 2022, so arguably three or four years is the gap between GPT-2 and GPT-4. And again, if you haven't played around with GPT-4, then I strongly recommend you do so. You'll need to pay
00:02:21
Speaker
some kind of subscription fee on the OpenAI ChatGPT interface, and there are other ways you can play around with it. But in my opinion, if you're asking probing questions and really testing its knowledge, it's quite a lot stronger than ChatGPT 3.5. And my goodness, compared to GPT-2, it is really very good. It seems to have a pretty strong, pretty general understanding of many aspects of the world. It will apply its knowledge pretty flexibly if you try and throw curve balls at it.
00:02:48
Speaker
It's very good at coding. You can give it a natural language description of some code you want it to write, and it will write the code. You can ask it to make some tweaks, and it will make exactly those tweaks. I sometimes do little Python coding experiments for work, and I would
00:03:07
Speaker
have done a much better job on those projects using GPT-4. In the last four years, we've gone from GPT-2 to GPT-4. I think that's just very startling. And so, yeah, progress has been rapid, and it's been getting faster, I think.

Current and Future AI Risks

00:03:21
Speaker
And I think that, absent
00:03:25
Speaker
people deciding to be cautious and deciding to go slower for non-financial, non-commercial reasons, I think the next four years will probably be similarly quick, and that we'll see a continued fast scaling up of the transformer architecture that is behind the GPT models. And I expect that by default we'll get a similar-sized jump to the one we saw from GPT-2 to GPT-4
00:03:49
Speaker
in the next four years. Maybe it will take slightly longer, because scaling does become more expensive as you start to be spending more than a billion dollars on these training runs. That's the progress side. And as I understand what you're saying, AI is moving incredibly quickly. What about the risks? How do you see the risk landscape resulting from this quick progress?
00:04:07
Speaker
So there are already risks that AI is posing, there's risks of disinformation, there's risks of embedding biases in society that are kind of inherent in the data that the natural language models are trained on.
00:04:23
Speaker
And as AI gets more capable, I expect the severity of those risks to increase along with their capabilities. You know, it's very hard to speak definitively about exactly what risk will arise when, because one of the things about language models is that it's hard to predict what emergent capabilities there will be in the next model. So, you know, these models are pre-trained by just devouring
00:04:50
Speaker
internet text. It's basically reading all of the text that people can easily scrape off the internet and trying to predict what the next word will be on a given web page it's presented with. And then it turns out that that pre-training, just reading things on the internet, gives the language models various emergent capabilities, which are not in any very obvious way present in the training set,
00:05:16
Speaker
but which can be elicited from the models with a little bit of tweaking after the initial pre-training phase is finished.
00:05:25
Speaker
You know, for example, I believe that some people are concerned that the next generation of large language models, which might be GPT-5, might make it significantly easier for bad actors to create dangerous bioweapons. And presumably that's because there's enough biology-related text on the internet that, during that pre-training phase, GPT-5 would be picking up enough biology and also just enough common sense reasoning and scientific understanding in general
00:05:53
Speaker
that it can then provide substantial help to someone who's wanting to make a bioweapon of that kind. But it's very hard to predict whether that will actually happen. It's hard to know exactly what is in the training data and exactly what the language models will be able to get out of that training data when the whole thing is over and done with. But I think that is one particular risk that I think there's a decent chance of arising in the next four years:

Challenges in Regulating AI

00:06:16
Speaker
this kind of lowering-the-bar-to-bioterrorism risk. I think there's also some chance of a risk that has been called autonomous replication and adaptation. That is, that maybe GPT-5-level systems or GPT-6-level systems would be capable, with the right kind of scaffolding to help them along, something like AutoGPT, which prompts the system to
00:06:45
Speaker
plan out its actions and make a list of sub-goals and then pursue them one by one. I think there's a chance that a system of that kind would be capable of copying itself onto a new computer and then using that computer's compute resources to make money, for example, by scamming people or by just doing intellectual work on the internet that humans can get paid for, like doing Mechanical Turk-style tasks.
00:07:12
Speaker
And so there is a chance we meet this threshold where AI is able to self-sustain and gather the resources it needs to make more copies of itself and increase its power and resources. And these two risks you mentioned are very near term. So we're thinking here before 2027 or before 2028.
00:07:32
Speaker
Yeah, I think there's a chance for sure. Like I said, it's really hard to predict, and especially with this autonomous replication and adaptation threshold. My own view is it's probably more likely than not that that is not possible by 2027 or 2028, but I'd give it substantial probability, maybe 30%.
00:07:52
Speaker
And then probably higher on the bio risk, but I'm really kind of making these numbers up. These are just my very loose impressions. And I don't know, I'm not aware of a very grounded science for predicting these kinds of risks.
00:08:05
Speaker
And, you know, there are other risks as well. These are not the only ones. Maybe there are risks from persuasion, propaganda, maybe recruitment for bad actors, who could use language models to automate the process of reaching out and trying to find vulnerable people. Maybe there are other risks as well that we'll see in the next five years.

Historical Analogies and AI's General Purpose Nature

00:08:22
Speaker
Maybe relating to cyber, maybe relating to significantly improving tech progress in some domain.
00:08:28
Speaker
Yeah, and I think what you mentioned is that you can't really predict which capabilities will arise. And I think one of the problems here is that nobody can really predict which capabilities will arise, and this makes the whole area very uncertain. If you couple that with the fact that, as you mentioned, the area is moving very fast, you get a potentially quite risky situation.
00:08:49
Speaker
Yeah, exactly. And I think one thing that can make it harder to manage these risks is that the default way that we regulate risky technology is reactive. So we allow people to develop the technology, we allow them to deploy it, and then when something goes wrong, we say, okay, that thing went wrong in that particular circumstance, so we're going to regulate the use of this technology in this circumstance. So now you're not allowed to use AI to do political campaigns, because we've seen it's been abused in that context.
00:09:19
Speaker
And what I think we probably need for something like AI, when there are so many possible risks and it's really hard to predict which ones, is something a bit more proactive, where before deploying it far and wide, we test in advance what kinds of risks it may pose. Does this mean that we don't really have any good historical analogies for AI? With other technologies, it may be the case that it's taken decades for them to be deployed, and we've been able to do trial and error and build up some sort of safety regime.
00:09:49
Speaker
But maybe AI is different, maybe it moves much faster than other technologies. Do you have good analogies in mind? I don't think there are any perfect analogies, to be sure. I think there's a good tension which makes it hard to find analogies, in that AI is a general purpose technology. So in that sense, it's like harvesting power from fossil fuels or electricity,
00:10:12
Speaker
or maybe computing power. But on the other hand, unlike other general purpose technologies, the underlying technology is improving very, very rapidly. So with fossil fuels, I'm not aware of any four-year period where we saw this kind of rate of improvement in the underlying quality of the combustion engine. Similarly with something like electricity, I'm not aware of any four-year period where there was such rapid progress in the underlying technology.
00:10:41
Speaker
And I think that there have probably been many, many narrow technologies, like Facebook, which went viral in a small number of years, but in a narrow domain. And that did, in fact, pose regulatory problems. The government was arguably too slow to respond to the various risks that Facebook did pose. But ultimately, it was a very scoped, narrow technology in a narrow domain. With AI, I think there's a kind of scary duality, with its generality on the one hand
00:11:09
Speaker
and then the underlying pace of progress, on the other hand, making it especially difficult to manage and regulate as a new technology.

What is Transformative AI?

00:11:17
Speaker
If we look longer term, beyond the 2030s, what do you think of the possibility of truly transformative AI? When would you expect something like that to arrive? What is a good way of defining whether AI is transformative?
00:11:36
Speaker
The broad definition, which has been used historically, is to say that AI would be transformative if it changed society as much as the Industrial Revolution changed society, or the agricultural revolution changed society.
00:11:50
Speaker
And what I understand by that is it's completely changing the nature of work: going from hunter-gathering to farming, from moving around constantly to being settled in one place, and then moving into industry. And it's also really changing the way that society is structured and the political and economic processes that are appropriate. That's not a very precise definition, but it has the benefit of being
00:12:12
Speaker
loose and flexible enough that if you're trying to interpret it in the right way, then it's probably going to end up pointing to the thing that you care about. I think that's a pretty robust definition to use. Because it's vague, people have tried to precisify the definition, and I think there are some problems you run into when you do that. So one way to try and make it precise is to say it's truly transformative if it accelerates the pace of economic growth by, say, a factor of 10. That's more precise, but it does have the downside that
00:12:39
Speaker
whether economic growth gets faster doesn't just depend on the nature of AI itself. It also depends on how it's integrated into society and how humans choose to use it. We might just choose to grow slowly despite the possibility of growing much faster. And it depends on how we even measure economic growth. There are big thorny questions about how you measure the growth impacts of new technologies.
00:13:04
Speaker
The definition of transformative AI is so tied to its impact rather than to the actual abilities of the technology itself that I think it can be confusing to think about it like that.
00:13:15
Speaker
An approach I often use is to use the term artificial general intelligence, and just say that that is when AI can do any cognitive task that a human professional can do, at or above that level. That's fairly precise, and I prefer it to the kind of economics-based definition because it's more about what the underlying technology can do.
00:13:38
Speaker
On the other hand, you can imagine loopholes where it's not really capturing what you want, where there are just a few tasks that no one's bothered trying to make AI able to do, so AI can't do them, and so you say, oh, we don't technically have AGI. And so I think probably sticking with the kind of broad
00:13:52
Speaker
definition is what you want to have in the background, and then being a bit flexible about exactly how we're defining it.

AI's Economic Impact and Real-World Contributions

00:13:59
Speaker
Thinking about the economic impact of AI is interesting, because sometimes if you look at benchmarks, for example, GPT-4 scores very well on high school exams and college exams and even the bar exam and so on.
00:14:12
Speaker
But how does that translate into economic growth or economic progress or automation? It's difficult to say. And of course, there hasn't been enough time yet for GPT-4 to have a great impact. But so far, it's not really showing up in the numbers. I think it's very important to think of the economic impact also, and not just the benchmarks.
00:14:32
Speaker
Yeah, I think especially today's benchmarks are very limited. So what if AI can get this mark on an SAT, and so what if it can get this score on some big benchmark? The tasks that we're mostly focusing on with current benchmarks are not tasks that humans are performing in the real world, in the real economy, where they're actually useful for producing goods and services, for running organizations, for whatever it is that people are actually trying to do.
00:15:01
Speaker
With the current way we're benchmarking systems, there's this kind of gap between the tests that we're giving them and then the stuff that we actually ultimately care about in our society, which is kind of useful work. And it's really hard to know how big that gap is. And it seems like at the moment that gap is potentially pretty big.
00:15:19
Speaker
and that GPT-4 is getting really good grades on a very wide range of quite tough examinations. But it's not yet massively adopted to replace lots of people's jobs and to massively increase profits and revenues for lots of companies. And so I think we should be trying to move towards better benchmarks, which are more closely tied to the actual real-world impacts of the systems.
00:15:46
Speaker
Yeah, and maybe those benchmarks will be difficult to set up, but at least we have measurements of GDP growth as a proxy for useful work, as one way of measuring whether AI is doing a lot of useful work for the economy.
00:16:01
Speaker
Yeah, that's right. It's really getting at that "is it doing useful work?" part of the question. I do think it has some pretty big downsides, in that there's going to be a pretty big lag, especially with earlier AI systems that are less flexible and so take more work to integrate into workflows. So if you're just looking at, you know, GDP, you might think nothing much is happening in AI because
00:16:23
Speaker
GDP hasn't picked up, and that would be a mistake. And there's also just a lot of noise in GDP statistics, just inherent noise, and then all these other trends which are interlacing. One quite nice intermediate is the size of the AI industry specifically. So you can look at investments in AI, or you can try and add up AI-driven revenues across the economy, which I think is a pretty vexed task, trying to figure out how much value AI is really adding.
00:16:53
Speaker
Those kinds of measurements typically show pretty fast growth of the AI industry, like 30% a year or faster in recent history. You can also look at things like growth of spending on AI chips. That's quite a concrete thing you can measure clearly. That is maybe intermediate between economic growth on the one hand and benchmarks on the other hand, in that it shows, look, people really believe that this is going to have a real-world impact. They're willing to spend concrete money on developing these systems. That means you're getting a kind of real signal about
00:17:22
Speaker
its real-world impact, but it hasn't actually had that impact yet. So that's intermediate. It's also interesting to think about the fact that the entire introduction of computers and the internet to the world over the last 50 years hasn't really increased the growth rate in developed economies a lot. So technologies can have an enormous real-world impact
00:17:44
Speaker
without actually increasing GDP, and maybe there's quite a high bar actually for what we might call transformative AI.
00:17:52
Speaker
Yeah, I think there's a very high bar. As you say, computers did increase economic growth in the sense that if we hadn't developed computers, economic growth would have been lower, but they did not turbocharge the overall pace of economic growth. They more kind of maintained the trend that we were previously getting from other technologies. And at first, I think that that's what will be happening with AI. And then my view is that once we've got truly very advanced systems, AGI systems that are able to really automate
00:18:22
Speaker
all human labor, that's when we should expect more transformative and unprecedented economic impacts.

Understanding AI Takeoff Speeds

00:18:30
Speaker
Yeah, and what I want to do in this episode is to kind of dig into your model of how this might happen, which is, I think, centered around takeoff speeds. I think the notion of takeoff speeds is quite central to how you see AI progressing. So maybe we could start by talking about what is takeoff speed in the context of AI.
00:18:50
Speaker
So I think it can be useful to distinguish between two notions of takeoff speed. The first is what I'll call AI capabilities takeoff speed. So that's focused on the pace of improvement in the underlying technology.
00:19:03
Speaker
So capabilities takeoff speed would be the answer to the question of how quickly AI is improving around the time at which we get human-level AI. So if takeoff speed is fast, then that can mean we go from AI that's well below human level to
00:19:21
Speaker
human-level AI in one year, and then a year later we've got god-like intelligence AI. So a very fast increase in AI capabilities just as it's passing through the human level of intelligence. Then there's another notion of takeoff speed which, especially if we're thinking about economic impacts, it can be useful to distinguish, which is impact takeoff speed. That is, how quickly does AI's impact in the world
00:19:48
Speaker
increase around that time. A very fast impact takeoff speed could look like: growth is just ticking away at its normal two or three percent a year, and then next year suddenly it shoots up, and the world economy is doubling every two years with explosive growth. Whereas a slower impact takeoff speed could be, well, the impacts of AI are spread out over many decades.
00:20:13
Speaker
And maybe growth gradually gets faster over time, or maybe it only temporarily gets faster and then settles back down. And so you can imagine those two coming apart. If there are loads of regulations, for example, that limit the impact of AI, you can imagine the takeoff speed of the underlying technology being very fast, somewhat in line with recent trends, but the actual impact takeoff speed being
00:20:37
Speaker
much slower. Probably we'll talk later about some of the economic objections to transformative growth and various bottlenecks. One theme in my mind is that these things tend to affect impact takeoff speed more than they tend to affect capabilities takeoff speed. So what do you think the world looks like in which you have
00:20:58
Speaker
a lot of AI capabilities but not a lot of impact yet? Is that a stable situation? Because it seems to me pretty unstable. There would be a lot of incentives to try to deploy these very capable AIs somewhere in the world. Yeah, I think that's right. It's probably temporary. The reason you can imagine it happening
00:21:17
Speaker
is that there are lots of entrenched interests in various professions. So, you know, lawyers don't want to lose their jobs. Medical professionals don't want to lose their jobs. There are unions. There are political processes by which these groups wield power and influence, and they may want to delay the deployment of systems which would replace them and cost them their jobs. And indeed, that will be a very good thing.
00:21:42
Speaker
There are regulations around who can make various high-stakes decisions, be it signing off on a legal document or giving a drug to a patient. Bureaucracies take a while to shake up, and it's not going to happen overnight that suddenly AIs are allowed to diagnose you and hand you the medication, even if they're actually able to do that.

Regulatory and Institutional Challenges in AI Deployment

00:22:03
Speaker
And because these are slow human processes and bureaucracies, it does seem possible to me that even though there's a large amount of pressure to kind of remove
00:22:11
Speaker
these barriers to rolling out AI that it could still take a while. So what you're imagining here is, for example, we have
00:22:18
Speaker
AI models capable of diagnosing a patient or sending a document to a court, but because of a professional organization in medicine or in law, maybe it's just not legal for them to do so. Maybe you need a human to sign off, or maybe you even need a human to do the full task. And that slows down the implementation of AI in the economy, even though AI might be perfectly capable of diagnosing a patient. Yeah, even though they might be better.
00:22:48
Speaker
Do you think that's the default scenario? Has this happened before? I think that is the default with previous technologies. They do take a while to diffuse. And if you have a naive view of, well, once it's better, everyone will move over, you'd be very surprised at what ended up happening in reality. Many organizations have still not transitioned over
00:23:10
Speaker
to the internet fully. I've gone into hospitals and been asked to fill out forms by hand, and I'm thinking, why are we still doing paper documents here? These transitions can be so slow. And I actually think there's a chance that things are quite different with AI. So if the capabilities takeoff speed is fast, then we might rapidly transition to a world where we've got truly superintelligent systems
00:23:34
Speaker
that are not yet deployed but that could add huge amounts of value, with almost no effort needed to integrate them into our businesses, because they're smart enough to integrate themselves and immediately learn what they need to learn to practically start adding value.
00:23:51
Speaker
At that point, I think the situation will be without precedent, in that previous technologies have required active effort to rearrange workflows and to draft new legislation so that they can be incorporated into the real economy. It would be without precedent for a new technology to be able to do all of that work itself:
00:24:13
Speaker
draft the new legislation itself, lobby the regulators itself, learn what it needs to learn to do an even better job delivering the goods and services itself, create by itself legible examples of inventing treatments for diseases which people currently
00:24:30
Speaker
struggle to treat. And maybe AI systems are so superintelligent that they can, without even going through the FDA process, develop a new drug and then demonstrate quite clearly to everyone that it works in treating a cancer that no one else can treat. Then at that point, you know, legally, new drugs need to go through the FDA. But when there's something so stark as this drug which could save millions of lives, and everyone knows it would work, that would create a kind of pressure on the regulatory system to change that
00:25:00
Speaker
I think might be without precedent.

Controlling Superhuman AI

00:25:03
Speaker
And so I could imagine that if AI capabilities continue to kind of shoot upwards, that would put increasing pressure on the kind of the regulatory barriers and other barriers to deploying AI widely.
00:25:15
Speaker
It's of course difficult to speculate on the political economy of future AI, but I think there might also be demand from the public to get access to these AI models. If, for example, you have a demonstration that an AI doctor can diagnose you better than a human doctor, and maybe the AI doctor costs
00:25:38
Speaker
10 times as little. Of course there would be pressure coming from the doctors' association in a given country. But I can't see this demand not mattering at all. I think it would matter at least somewhat. I think that's right. In my mind it's a question of how long. There are the incumbent forces trying to preserve the status quo, and then there's this maybe increasing tide of technological abilities that AI
00:26:04
Speaker
is able to provide, creating increasing pressure to knock down those barriers, and then also competitive dynamics potentially: two different states have slightly different regulations, and people all go to the state where they can get the 10 times cheaper AI doctor who's more effective, or go to a different country where they can receive that treatment.
00:26:23
Speaker
And that's another thing which makes it hard for these incumbent forces to sustain for too long. Yeah, I think before we get into the mechanics of the model itself, it would be useful to know why you're interested in this topic. Why is it useful for us to know about AI take-off speeds? One of the key risks that I've been focused on with AIs is the risk of losing control of superhuman AI systems. That is, systems which on
00:26:54
Speaker
some significant domains, maybe persuasion, maybe strategy, maybe technological development, outperform the best human experts. These risks are very poorly understood.
00:27:06
Speaker
We don't yet have a solution to what we'll refer to as the alignment problem, which is the problem of ensuring that superhuman AI systems do what their users intend and their developers intend. What this means is that it would be really, really useful if we could have a long period of time, ideally decades, with AI systems which are not quite yet capable enough
00:27:35
Speaker
to actually pose a risk that we lose control of them, but that are maybe almost at that level, or that are very similar to those particularly risky systems in key ways, so that we could study them and understand how their motivation systems work.
00:27:50
Speaker
We could experiment with different ways to try and align them, that is, to train them such that they do what their users and developers want them to do. And we could learn about how big the risks are and the best ways of mitigating those risks. And so that would be really, really nice in terms of better understanding the problem and understanding what the solution requires.
00:28:15
Speaker
The problem is that if the capabilities takeoff speed is fast, if the underlying technology goes from human level to significantly superhuman in just a year, then we won't have very long, or by default we won't have very long, with those systems. We won't have long to study them. I think that makes the task of avoiding loss of control

Inputs to AI Development: Compute and Algorithms

00:28:36
Speaker
much more difficult. Because if we just have one year, then we'll be kind of flying by the seat of our pants trying to kind of understand how these systems work, how they think, throwing on some kind of very quick slapdash solutions in terms of trying to get them to do what we want, and then not really having time to take a step back and check that it's all working in just one year. Very little effort so far has gone into solving this problem of how do we retain control of superhuman systems.
00:29:06
Speaker
If we had a very long time with AIs who are roughly human level, maybe very slightly superhuman at the kind of research, but kind of maybe human level or less than human level at the kind of dangerous capabilities like manipulation and persuasion and strategizing. If we had many decades with systems of that kind, we could potentially use them to try and solve the problem of understanding the motivations of AI systems and solving this problem of how do we control superhuman AI systems.
00:29:34
Speaker
Once you train a system that's human level, I think it's likely you'll be able to run, and we can talk about this a bit later, but you'll be able to run many millions of copies in parallel all at the same time.
00:29:45
Speaker
or even run kind of fewer copies, but have them think faster. And so you could get a huge amount of labor from kind of highly capable AI. And the best time to do that is when, again, when you've got AI that is really pretty good and good enough to be very useful, but not yet kind of superhuman enough that it's really posing a risk that you lose control of it. And so, again, if we had a slow takeoff, we could have many years harnessing the labor of these roughly human level AI systems.
00:30:15
Speaker
And why is it that in a fast takeoff scenario, we can't harness that labor to help us align more advanced AI? Why is it more difficult to do so in a fast takeoff scenario? So in a fast takeoff, we can still do this to some extent. There will still be some period where we have roughly human-level systems, and we can use them to do research into keeping AIs safe. But we just have less time in that period, and so they can do less research in total.
00:30:44
Speaker
And especially if we want humans to be able to check the results of their work, and we want humans to be able to verify their work, then only having 12 months can become quite a binding constraint. You know, even if we don't need humans to check, there's still the fact that you've just got less time to do the research. That desire that I think we all have for humans to verify the work could become
00:31:07
Speaker
quite problematic. All right. So if we begin digging into your model, I'm looking at a simplified diagram of it. There's also a website where you can plug in your own values for various parameters. So maybe we could go through the parameters of the model and talk about the relationships.

Feedback Loops in AI Development

00:31:25
Speaker
How would you summarize the model? It attempts to model the most important inputs to AI development, in particular,
00:31:37
Speaker
the amount of compute used to develop an AI model, and the quality of the training algorithms that utilize that compute to produce the trained AI. And then it really kind of drills into, okay, how are these two inputs currently evolving over time, and how might
00:31:59
Speaker
they evolve over time into the future. So, you know, how quickly will the algorithms be improving into the future? How quickly will the amount of compute used to develop AI systems increase into the future? And in particular, taking into account a couple of key feedback loops. So the first feedback loop is a kind of an investment feedback loop, where we see the AIs are producing value in the economy, and we see from impressive demos that they're very capable, and that sparks
00:32:27
Speaker
increased financial investment, getting more compute and improving algorithms. And then there's a second feedback loop, which I call the AI automation feedback loop, whereby as AIs get more capable, they're able to automate the work of coming up with better AI algorithms, and they're able to automate the work of coming up with better computer chips so that we have access to more compute.
00:32:52
Speaker
And so we've got these two feedback loops, the investment feedback loop and the AI automation feedback loop. They are both affecting how the algorithms are improving and how the amount of compute available is improving. And then those two key inputs are then driving the improvement of AI capabilities over time. And so which of these feedback loops is the most important? So is the investment feedback loop or the AI automation feedback loop the most important for accelerating takeoff speeds? I think in the near term,
00:33:20
Speaker
the investment feedback loop is going to be more important. So I think already today we're seeing that feedback loop in action. Investment in AI has gone up massively in recent years. Investment in AI chips has gone up massively. Investment in designing better AI chips has gone up massively. NVIDIA
00:33:41
Speaker
Its share price has gone through the roof. You know, it specializes in AI chips like the H100. And so currently, it's that investment feedback loop which is continuing to drive the very fast progress that we've seen over the last four years and will probably continue to over the next four years. But that investment feedback loop can only continue for so long because at a certain point, companies are already spending maybe hundreds of billions of dollars, maybe even a trillion dollars if it becomes a nation state activity.
00:34:10
Speaker
on developing a state-of-the-art AI system, and it's just very hard to spend more past a certain point. And past a certain point, you'd have to expand the whole semiconductor industry so that you can actually increase the number of chips produced worldwide in order to continue to grow that investment at the pace at which it's been growing recently. And so over time, I expect the investment feedback loop to become less important and the
00:34:41
Speaker
AI automation feedback loop to become more important. In particular, once AI gets to the point where it's able to automate significant fractions of the work done by AI researchers to improve AI, by chip design companies like Nvidia to design better AI chips, and by
00:34:59
Speaker
fabrication companies like TSMC, who actually manufacture the chips. As AI automates the kind of work that those organizations do, this feedback loop will come into play. And then as AI gets more and more capable, the feedback loop will become more and more significant over time.
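To give a rough mechanical feel for how these two loops interact, here is a minimal toy sketch in Python. It is not Davidson's actual model, which has many more parameters and is available as an interactive web tool; the growth rates, the automation proxy, and the saturation threshold below are all made-up placeholders, purely to illustrate the shape of the dynamic being described.

```python
import math

# Toy sketch of the two feedback loops described above: investment growing compute,
# and AI automation accelerating algorithmic progress as capabilities rise.
# All numbers are illustrative placeholders, not outputs of the real model.

compute = 1.0      # physical compute for the largest training run (arbitrary units)
algorithms = 1.0   # algorithmic efficiency multiplier

for year in range(2024, 2034):
    effective_compute = compute * algorithms
    # Crude stand-in for "fraction of AI R&D work that AI can automate":
    # grows with the log of effective compute, capped at 100%.
    automation = min(1.0, 0.05 + 0.05 * math.log10(effective_compute))

    # Investment feedback loop: spending grows ~3x/year at first,
    # then saturates once budgets hit economic limits.
    investment_growth = 3.0 if compute < 1e3 else 1.3
    # AI automation feedback loop: algorithmic progress (base ~2x/year)
    # speeds up as AI takes over more of the research work.
    algorithm_growth = 2.0 * (1.0 + automation)

    compute *= investment_growth
    algorithms *= algorithm_growth
    print(f"{year}: automation ≈ {automation:.0%}, effective compute ≈ {effective_compute:.2g}")
```

In a sketch like this, the automation loop barely matters at first and then comes to dominate once the investment loop saturates, which is the qualitative point being made here; the real model replaces each of these placeholder relationships with explicitly estimated parameters.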

AI Automation in Software and Hardware

00:35:16
Speaker
Yes, as I understand it, you've been working on this model for three or four years. At least you've been working on this model since before the release of ChatGPT, which I think accelerated AI investment. Do you have any sense of how much AI investment has increased since ChatGPT? Investment in US semiconductors has been growing at an unprecedented rate, probably in part related to the CHIPS Act, whereby the US government is spending money to try and encourage
00:35:46
Speaker
these fab companies like TSMC to move their fabs to the U.S. We're hearing about lots of new startups in the AI space with hundreds of millions or billions invested in them. We're seeing graphs where, again, you've got the level of investment doubling every two years or so. GPT-4, I think, is estimated to have cost about 30 million U.S. dollars to train. By the end of next year, we'll have training runs in, you know, at least the low hundreds of millions.
00:36:14
Speaker
So again, we're talking about spending increasing by a factor of two or three each year on these training runs.
00:36:21
Speaker
I think the investment feedback loop is quite straightforward to understand, but I think the AI automation feedback loop is more difficult. It's not now the case that AIs can automate everything in AI software and hardware, far from it. You could see how using language models for coding might be useful if you're working in an AI organization, but it's difficult for me to understand how we go from there to AIs
00:36:49
Speaker
increasingly automating AI research. Maybe we could talk about how AI improves AI hardware and software. So yeah, let's talk about AI software. Let's give an oversimplified toy picture of what AI software researchers are actually doing with their time. So let's pretend that all they do is this: they have a current training algorithm, maybe the GPT-4 architecture, the transformer architecture they're using.
00:37:19
Speaker
And then what they do with their time is they think of ideas for ways to modify that architecture to make it better in some way, maybe increase the context length.
00:37:29
Speaker
Maybe they have a new optimizer, which means it can train more efficiently. Maybe they have some modification to the attention mechanism so that you don't get such quadratic scaling with the context length. And so the AI can read longer documents. And then once they've got an idea, they then implement it in code. So they write some code that will represent that idea. Then they write
00:37:56
Speaker
out a kind of experiment that will test the idea, compare it to the current architecture, and see how much of an improvement it is. And then they run these experiments. And while the experiment's happening, maybe they're watching how it's unfolding and making sure that nothing's gone wrong and there are no bugs or problems with the experiment. Once the experiment's done, they have a somewhat subtle job interpreting the results of that experiment, trying to sort the noise from the signal and figure out, okay, was this architectural modification an improvement or not?
00:38:26
Speaker
One way you can think about this process of AI automation is that AI initially is just helping out in small ways with each of these sub-tasks. So maybe we could go through each of them. There's the brainstorming phase: maybe they give GPT-5
00:38:43
Speaker
lots of context and relevant information about the current architecture, and they say, please brainstorm some new ideas, and feel free to do some googling to help you come up with new ideas. And probably at first it's not immediately coming up with the best ideas, but it's just a useful first step for an engineer,
00:39:01
Speaker
stimulating their thinking, maybe improving the quality of the ideas that they come up with. Then in the implementation phase, the engineer chooses: okay, this is the architectural modification we're going to test. And GPT-5 does the first attempt at implementing changes to the code base to represent that new algorithmic idea. And again, maybe at first it's not perfect. The human needs to check it. And maybe it struggles with certain complex changes.
00:39:29
Speaker
Over time it gets better and better, and maybe we ultimately get to a stage where the human just describes the architectural modification in natural language and the AI can fully implement code that puts the idea into practice. I mean, that's something that I can almost readily imagine based on how good GPT-4 already is at coding. Then there's the process of writing a test to try out the new algorithm, and again, at first maybe
00:39:55
Speaker
the AI just does a first pass at writing the code for the test, and it's just giving hints and helpful tips to the human while the experiment is going on, in terms of things that might be going wrong. But increasingly, it's able to just be autonomous with that. And again, with interpreting the results.
00:40:12
Speaker
Again, initially the AI is maybe doing some basic analysis, with the human giving it sub-tasks of ways to analyze the data, but ultimately there becomes enough data that the AI can be trained to just do the whole thing. And so there's this kind of experimental loop with many different parts to it. And then within each part, AI is being given more and more responsibility to do it autonomously over time. And then you can imagine an end state we get to
00:40:37
Speaker
where the AI is just able to do the whole thing. And you can just say, here's the current architecture, please improve it open-endedly, and it just brainstorms ideas, implements them, tests them, interprets the results, rinse and repeat. And so the way I see this unfolding is that it is an incremental process and a continuous process, in that there's a general
00:41:02
Speaker
offloading of responsibilities to AI over time. And, you know, as that happens, the workflows will be adjusted to suit those AIs more and more, because it will increasingly be an AI-dominated workflow rather than a human workflow. So on the one hand, AI is getting better and more capable, and therefore able to take on more of the work.
00:41:21
Speaker
And there's also kind of the workflow becoming adjusted and tailored to the comparative advantages of these AIs.

AI Automation in Research and Development

00:41:27
Speaker
And eventually we end up in a situation where the workflow is probably pretty different to what it is today, and it's also now completely done by AI systems.
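The workflow Davidson walks through, brainstorm, implement, run the experiment, interpret, repeat, with AI gradually taking over each sub-task, can be summarized in a short sketch. This is only an illustration of the loop being described, not anyone's real system: the "ideas tried per step" scaling and the random improvement draw are stand-ins for the actual research work.

```python
import random

# Toy sketch of the AI R&D loop described above: try ideas, test them against the
# current baseline, keep improvements. The ai_share parameter stands in for how much
# of the brainstorm/implement/run/interpret workflow the AI handles; here it simply
# scales how many ideas get tried per step, as a proxy for the speed-up from automation.

def research_step(baseline_score, ai_share):
    ideas_tried = max(1, int(10 * ai_share))               # more automation -> more experiments run
    best = baseline_score
    for _ in range(ideas_tried):
        candidate = baseline_score + random.gauss(0.0, 0.02)   # implement and test one idea
        best = max(best, candidate)                            # interpret results: keep what helps
    return best

score, ai_share = 1.0, 0.2
for step in range(20):
    score = research_step(score, ai_share)
    ai_share = min(1.0, ai_share + 0.04)   # the AI gradually takes over more of the workflow
    print(f"step {step}: ai_share = {ai_share:.2f}, algorithm quality = {score:.3f}")
```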
00:41:34
Speaker
That's actually pretty convincing to me, especially if you've had the experience of asking GPT-4 to write some code for you and it just spits out something that runs immediately. I can see that working. That's on the software side. If we talk about the hardware side, I would think that hardware
00:41:54
Speaker
There you have some interactions with the physical world. Maybe you have a chip design, but you have to create the chip physically before you can use it. Would there be some kind of bottleneck to AIs improving AI hardware there? I think you are going to get more bottlenecks of that sort with AI hardware. One thing that I think won't be bottlenecked is the work done by so-called fabless hardware companies. Nvidia is one of these companies. They are a chip design company, but they do not themselves manufacture
00:42:23
Speaker
any of the chips. So what they do is they work on designs, blueprints for AI chips. And once they've developed them, they send them to a chip manufacturer like TSMC to then physically produce the chip. NVIDIA's work is hugely valuable and a lot of the improvements in AI hardware in recent years have come from NVIDIA iterating on their AI specialized chips.
00:42:49
Speaker
And so that portion of the work is work you can do remotely. It's cognitive work. It's understanding the way that the basic underlying technology that TSMC is working with works, and then figuring out more effective and efficient ways to stack the little calculating units that actually perform the computations on the chip, so that they can do those AI-specific computations more efficiently.
00:43:17
Speaker
So with that element of it, that fabless element, I think there are going to be fewer bottlenecks. But there's a whole other driver of progress in hardware, which is designing what's called a new node. That relates to Moore's law, which you may have heard of: the process by which the basic chip technology has improved over time,
00:43:39
Speaker
so that the processing units that do the calculations on chips can get increasingly small and kind of increasingly energy efficient over time. And that process, as you say, involves working with physical materials, involves, you know, probably designing a specification, but then having to then test that
00:43:59
Speaker
against how materials work in the real world. And so I think with that side of things, it's much more likely that the AIs cannot fully automate the work themselves. AIs may be able to give very significant speed ups. And I think to really investigate this, you'd want to do a deep dive into how this area of R&D works, and I haven't done that. But I would just flag one possibility, which is that
00:44:24
Speaker
Yes, you need physical materials to test the ideas, and you need physical humans in the lab to set up those experiments and do those tests. But there are lots of humans in the world, and there's lots more material in the world.
00:44:41
Speaker
And so if you had a kind of unlimited supply of cognitive labor that was absolutely tip-top professional hardware specialists, so imagine you take the very best hardware specialist in the world and then you make it so there's now a million of them, and each of them can think a hundred times as quickly, and they are able to direct people who have much less experience to design experiments and to implement those experiments, and they're able to give real-time instructions
00:45:10
Speaker
to those people, then you might well find that you can actually find enough physical bodies to actually do those experiments in practice. You know, you're not massively bottlenecked on that. You're actually able to scale up the physical side of the operation quite rapidly by kind of having some kind of remote AI cognitive experts direct their physical activities.
00:45:34
Speaker
I think that there's lots of reasons this might not happen. Maybe people are just slow to actually change processes in these ways. Maybe there's regulation which limits it. But by default, there's not that much regulation of the R&D process. And if it is, in fact, very cheap to run an absolute cognitive expert in the area of hardware, you'd think that
00:45:55
Speaker
the companies that are developing these chips would want to do that and would have a strong incentive to do that.

Metric for AI Progress: Cognitive Task Automation

00:46:00
Speaker
And so it is a possibility in my mind that these physical bottlenecks do not slow things down as much as you might think at first blush because of the ways that you can use an abundance of cognitive labor to kind of get around them and just recruit more warm bodies to run the experiments.
00:46:16
Speaker
How much of AI research and development do you think is automated right now? Is it 1% or 5% or basically nothing? It's a great question. I think it's not nothing. Nvidia recently published about using reinforcement learning AI system to automate some of their chip design work. People at the top AI labs are, I'm pretty sure, using the lab's AIs.
00:46:45
Speaker
to help them write code, using Copilot or probably using internal systems with more capable AI, and that will be accelerating their workflow somewhat. You see some statistics and some measurements of what the productivity gains are here. I think it's really hard to measure this reliably. The numbers I see are normally between 1 and 10%
00:47:06
Speaker
in terms of the productivity gains. So that might correspond to a similar fraction of tasks automated. Some people report more significant productivity gains from using AI systems in their personal workflows, you know, 20%, 50% productivity gains. But I don't think that has been verified beyond just a few people claiming it.
00:47:31
Speaker
When do you think the AI automation feedback loop really gets going? At what level of automation of AI research and development does the feedback loop really kick in? So it is a continuous process where just the more automation you have, the stronger the feedback loop gets. And it's hard to give a specific number because
00:47:53
Speaker
If automation happens more slowly, then it will seem more like business as usual, because there has already been a pre-existing process of automating our workflows. And so if we got to 50% automation, but we only got there in, you know, 2070, then that might well just feel like a continuation of the standard process of automation. On the other hand, if we got to 50% but we got there in 2028, which doesn't seem out of the question to me,
00:48:17
Speaker
then I think that would feel like a very significant effect, and we would see the feedback loop really and noticeably getting going at that point. Do you have a key metric that you're estimating using this model? Maybe you can explain what the key metric is here.
00:48:35
Speaker
The metric I'm using is the time from developing AI that could readily automate 20% of the cognitive tasks in the economy to the time when AI could readily automate 100% of the cognitive tasks that people perform in the economy, where that latter milestone, the 100% milestone, is just the definition of AGI I gave earlier.
00:48:58
Speaker
And so what I'm doing with this metric is taking an established AI milestone that people talk about, which is AGI, and then generalizing it, because AGI implicitly refers to when AI can perform 100% of cognitive tasks. I'm saying, let's generalize that
00:49:13
Speaker
to AI that can perform smaller percentages of cognitive tasks. And then I've gone with 20% as my starting point, because that's a point at which AI is going to be having a very noticeable and significant economic impact. I think it will be very much mainstreamed that AI is going to be a very potent, powerful technology. But it's not yet to the stage where it's going to be able to pose risks of disempowering humanity, because it's only able to do 20%.
00:49:41
Speaker
of the tasks in the economy. And I think to overthrow humanity, you're going to have to be able to do much more than that. And what exactly does 20% of cognitive tasks actually mean? Does it imply that a lot of people are losing their jobs? Or is it various tasks across a lot of jobs, such that no one might lose their job? I think more likely
00:50:05
Speaker
it doesn't involve lots of people losing their jobs. I mean, I could go back to that example we did of the AI researcher and what their workflow looks like. And probably in that example, the 20% point was one where
00:50:18
Speaker
there are a few of their sub-tasks where AI is adding a lot of value, maybe they've handed over half of the work, and there are some sub-tasks where AI is only adding a small amount of value. But the human is still needed in all the different parts of the workflow. And so my modal guess for how this will play out is that AI will help out in lots of little ways,
00:50:41
Speaker
and then increasingly big ways in people's jobs without just replacing certain jobs wholesale. So maybe a very powerful personal assistant AI would draft all your emails and will do the first pass on any documents you write, but you'll still be responsible for those outputs and for checking them. I do think that it will be somewhat uneven. I don't expect every job
00:51:04
Speaker
to see 20% of its workflow automated the same as every other job. But broadly, my expectation is that it's individual tasks within jobs that are primarily the thing that's being automated rather than jobs themselves being the kind of thing that's automated.

Rapid Takeoff Speed Prediction

00:51:22
Speaker
And how close to 20% automation of cognitive tasks do you think we are right now? 20% cognitive automation would correspond to
00:51:33
Speaker
you know, more than 10 trillion dollars of economic value added if that was actually rolled out around the world. So if we are at the 20% cognitive automation milestone, then we are only seeing a very small fraction of the economic effect that you'd expect that to have if it were fully rolled out. And in fact, the way I define
00:51:55
Speaker
being able to automate 20% of tasks is actually to say that it should be able to automate those tasks within just a year. So it should take no more than a year of integrating them into your workflows before the system can actually, in practice, perform that 20% of tasks. And I don't think we're at that stage, where if we all tried hard for a year to automate our workflows,
00:52:17
Speaker
then it would actually be able to create trillions of dollars of value in the economy. So I think that even though GPT-4 is very impressive, and maybe if we had decades to integrate it, maybe it could automate 20% of tasks, in terms of the way I defined it, where you've just got a year to actually implement it in practice, I don't think we're at the 20% automation milestone. Do you think there's more automation in AI research and development than in the general economy? That is my impression, yes.
00:52:44
Speaker
Large language models like GPT-4 are particularly good at language-based tasks, and they're also unusually good at coding, and those types of tasks are heavily represented within AI R&D. There's a lot of coding, there's a lot of theoretical reasoning which primarily happens in written form. And so compared to a job
00:53:09
Speaker
that involves more physical labor, like a bus driver, or compared to a teacher, where you're in the classroom interacting with other people, I think those AI R&D jobs are more susceptible to AI automation. Let's talk about the takeaways from this model: your guesses for how quick takeoff will be, defined the way we just defined it, going from 20% automation of cognitive tasks to 100% automation of cognitive tasks.
00:53:40
Speaker
What are your main line guesses here? The model itself spits out a 15% probability that takeoff happens in less than one year, a 50% probability that it happens in less than three years, and a 90% probability that it happens in less than 10 years.

Exponential Improvements in AI

00:54:06
Speaker
It's on the whole predicting, you know, probably between one and 10 years after the point at which AI can readily automate 20% of cognitive tasks before the point at which it can readily automate all cognitive tasks.
00:54:20
Speaker
Yeah, this is much faster than I would have guessed without looking at your report or looking at any data. So maybe, to give our listeners a sense of why this takeoff speed might be so fast, we could talk about how we get to millions or billions of AI scientists. These two key inputs I mentioned earlier, compute and software, they have just recently been growing at really astounding rates.
00:54:46
Speaker
And so just extrapolating that very fast rate of input growth does tend to push towards a faster takeoff. So just to quote some quick statistics, the amount that's been spent in terms of dollars on the largest training runs has been increasing by a factor of three every single year over the last 10 years.
00:55:05
Speaker
The cost efficiency of AI chips has been doubling every two years or so. And the quality of algorithms, their efficiency, has been, again, doubling every year. And so these exponential trends stack on top of each other: the money spent on compute, the cost efficiency of the computer chips, and the improved algorithms, which means that the effective inputs into developing these systems are growing very rapidly.
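To make the compounding concrete, here is a rough back-of-the-envelope sketch of how those three trends stack. The growth rates are just the approximate figures quoted above, not precise measurements, so the output is illustrative only.

```python
# Rough sketch of how the three input trends compound into "effective compute"
# for the largest training runs. Growth rates are the approximate figures
# quoted above (illustrative, not precise measurements).

spend_growth = 3.0               # dollars spent on the largest runs: ~3x per year
hardware_growth = 2 ** (1 / 2)   # chip cost efficiency: ~2x every 2 years, i.e. ~1.41x per year
algo_growth = 2.0                # algorithmic efficiency: ~2x per year

effective_growth_per_year = spend_growth * hardware_growth * algo_growth
print(f"Effective compute growth: ~{effective_growth_per_year:.1f}x per year")

years = 4
print(f"Over {years} years: ~{effective_growth_per_year ** years:,.0f}x")
# Roughly 8.5x per year, which compounds to several thousand-fold over four
# years under these assumptions.
```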
00:55:35
Speaker
Then that's combined with my prediction that by the time AI can automate 20% of cognitive tasks in the broader economy, it's probably going to be automating a much larger fraction than that in terms of AI research itself, in terms of designing better chips and improving algorithms.
00:55:53
Speaker
And so these very fast exponential rates of improvement, if anything, will be even higher when we reach that 20% mark than they are today. The last thing that's driving the results is that
00:56:09
Speaker
There is a pretty significant increase in abilities from, like I said, from GPT-2 to GPT-4, and it seems plausible based on kind of looking at that, and also based on looking at kind of evidence from biology about how intelligence changes as you increase the brain size of various animals.
00:56:26
Speaker
It's plausible from eyeballing those kinds of trends that just another jump like that from GPT-2 to GPT-4 might be sufficient to go from that 20% automation milestone to the 100% automation milestone. And if I bring those things together, it does seem plausible that just in a few years, you could do a jump of the size from GPT-2 to GPT-4, maybe two jumps of that size, with the AI automation feedback loop speeding things up.
00:56:53
Speaker
And then go from 20% to 100% automation just in a handful of years.

AI's Potential to Multiply Labor

00:57:01
Speaker
So you talk about brain sizes in evolution. How does that inform us about going from 20% automation to 100% automation? Which species are you thinking about? So it's very
00:57:14
Speaker
kind of zoomed out and rough, but essentially what it's doing is saying: look at chimpanzees. They have a brain that's about three times smaller than that of humans, and they do seem, along some dimensions, to be notably less
00:57:30
Speaker
capable in terms of their cognitive abilities. And so if you're using that to benchmark, how much might the cognitive abilities of AI systems improve when they're around the human level? Because that's an example we have of intelligence increasing around human level from biology. Then it's just saying we could see some pretty significant increases in cognitive abilities around the human level just by increasing the brain size by a factor of three, which might correspond roughly.
00:57:57
Speaker
to increasing the number of parameters in the AI system by a factor of three. And so if you think that that chimpanzee-to-human jump is sufficient to go from 20% to 100% automation, then you might think that you would only need to increase the amount of compute and the quality of the training by about that much to go from 20% to 100% as well.
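As a very rough illustration of that scaling logic, here is the arithmetic under the common heuristic that training compute scales with parameters times training tokens. The factors are assumptions taken from the analogy above, not figures from Tom's model.

```python
# Rough scaling arithmetic, purely illustrative.
# Heuristic: training FLOP is roughly proportional to parameters * training tokens.

params_factor = 3.0   # chimp-to-human brain-size ratio, used here as an analogy
tokens_factor = 3.0   # if training data is scaled up in proportion to parameters

compute_factor = params_factor * tokens_factor
print(f"Tripling parameters (and data) needs roughly {compute_factor:.0f}x more training compute")
# About 9x more training compute, a relatively modest increase compared to the
# year-on-year growth in effective training compute discussed above.
```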
00:58:21
Speaker
Yeah, I think we should stress this point: the ability to train an AI model with a given amount of compute implies that you have that amount of compute available to run the models afterwards. That's the key, as I understand it, to getting to these millions or potentially even billions of AI scientists.
00:58:39
Speaker
OpenAI took a number of months to train GPT-4. What they did is they used a huge number of computer chips and had GPT-4 digest and read through a huge number of articles from the internet and other data. And once that training was complete, OpenAI still had these chips sitting around that they had previously been using to train GPT-4. And you can imagine that they then say, okay, let's now use these computer chips to run copies of GPT-4.
00:59:08
Speaker
You can ask how many copies they would be able to run in parallel. Let's say that each copy is producing 10 words per second. So it's thinking a bit faster than a human can; I'm certainly not able to write 10 words a second. It turns out that they would be able to run something like 300,000 copies of GPT-4 in parallel.
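Here is a minimal sketch of the kind of arithmetic that produces a figure like that. None of these inputs are official numbers (GPT-4's training compute, parameter count, and training duration are not public); they are placeholder assumptions, and the point is only that the cluster used to train a model can afterwards run a very large number of copies of it.

```python
# Illustrative arithmetic only: none of these numbers are official figures.
TRAIN_FLOP = 2e25                  # assumed total training compute, in FLOP
TRAIN_SECONDS = 90 * 24 * 3600     # assumed ~3-month training run
PARAMS = 1e12                      # assumed parameter count
FLOP_PER_TOKEN = 2 * PARAMS        # rough rule of thumb for one forward pass
TOKENS_PER_WORD = 1.3
WORDS_PER_SEC_PER_COPY = 10        # the speed assumed in the conversation

# The cluster that delivered TRAIN_FLOP over the training run has this throughput.
cluster_flop_per_sec = TRAIN_FLOP / TRAIN_SECONDS

# Compute needed to keep one copy generating text at the assumed speed.
flop_per_sec_per_copy = WORDS_PER_SEC_PER_COPY * TOKENS_PER_WORD * FLOP_PER_TOKEN

copies = cluster_flop_per_sec / flop_per_sec_per_copy
print(f"Roughly {copies:,.0f} copies could run in parallel under these assumptions")
# On these made-up inputs this lands around 10^5 copies, the same order of
# magnitude as the ~300,000 figure discussed here.
```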
00:59:35
Speaker
And by the time they're training GPT-5, it'll be a more extreme situation, where just using the computer chips that they used to train GPT-5 to run copies of GPT-5 in parallel, again each producing 10 words per second, they'd be able to run 3 million copies of GPT-5 in parallel. And for GPT-6, it will just increase again. There'll be another factor of 10 at play.
01:00:00
Speaker
And so it'll be 30 million copies running in parallel. And so if you imagine eventually we're training a system which is as productive and as generally competent as a human expert at advancing AI research, as good as the best researchers that OpenAI and other AI labs employ,

AI-Driven Societal Change

01:00:22
Speaker
then once you train that AI system, you're immediately able to run seemingly millions of copies in parallel doing the work that AI experts do to advance AI systems. And it's that kind of
01:00:39
Speaker
massive abundance of cognitive labor, which kind of points to the possibility of there being very, very rapid AI progress, just at the point at which we're developing AI systems that can automate work done by expert AI researchers. Yeah, I think this was the key point that helped me understand how progress might be that rapid. If you just imagine these millions of experts working
01:01:02
Speaker
day and night on the problem. It suddenly seems at least more plausible to me. The conclusions you come to are quite counterintuitive. They're not commonsensical. Do you think that counts as a counterargument here at all? Or is it just the case that our common sense intuitions are not applicable to technology that's moving this fast?
01:01:23
Speaker
We should pay attention to common sense and we should try and look to see what it's grounded in and whether it makes sense to put a lot of weight on it. I think in this case, you can cash out the common sense instinct with something which is pretty sensible. You can say, look, we've seen automation happen in the past. We've seen computers do automation. We've seen automation via electricity and physical factories, and never has
01:01:50
Speaker
the underlying technology enabling automation advanced as quickly as what I'm predicting here. It has taken decades to automate significant fractions of the work being done by humans, at least. And yet here I am claiming that we could go from automating 20% to 100% of cognitive tasks in just a number of years rather than decades.
01:02:16
Speaker
And I think that that is a fair point. That should give us some pause. I think that there are other ways of interpreting the long-run historical trend which make my prediction seem more in line with what you might expect. So there's this view of history as a series of growth modes. That's described by Robin Hanson, where in his view, the initial growth mode is that of hunter-gatherers, as they kind of slowly expand their populations. And there's a pretty slow transition
01:02:46
Speaker
to an agricultural growth mode where you now have people in farming communities, much more stationary. And then there's another transition to an industrial growth mode in which we're now living in cities and having factories and growth is faster. And in Hanson's model, each growth mode is faster than the last one. So industrial growth is faster than agricultural growth, which is faster than hunter-gatherer growth.
01:03:16
Speaker
And also the transitions from one mode to the next become faster over time. So the transition from hunter gathering to agriculture took maybe thousands of years. The transition from agriculture to industrialization
01:03:29
Speaker
took maybe like 100 years or even decades. And so if you're extrapolating a long run trend of that kind, then a natural thing to think is, okay, so the next growth mode will be faster. Maybe the economy will double in just a few years rather than in many decades. And also the transition to that next growth mode will be faster. So rather than when we industrialized it taking
01:03:54
Speaker
many decades or even 100 years to transition to the new industrial growth mode, this next transition will be faster, maybe a number of years or even less. And people have actually tried to piece together this very noisy historical data to get estimates of transition times, and the number that I recall is less than 10 years in terms of what our transition time would be. So if you're taking a kind of long-run view
01:04:23
Speaker
of history, and you're taking a view according to which there have always been transitions that have been faster than the ones we've seen historically, and so if you're really looking over the long run, you should actually expect the trends of the recent past to be broken, then I think that the conclusion of my model is actually more in line with that kind of analysis.

AI Paradigms and Potential Shifts

01:04:44
Speaker
Does your model rely on progress in AI being a matter of more compute? Does it rely on this current paradigm of more compute and more data producing better AI? What if, for example, more compute and more data stop being useful or we reach diminishing returns? How would that affect your conclusion? If getting to AGI required something outside of the deep learning paradigm, that would
01:05:09
Speaker
very much undermine the conclusions of the model in that there would just be the possibility that we just kind of get stuck at 50% automation and the kind of feedback loops that I'm describing might just not get us out of that. I mean, again, they might get us out of that. If we're kind of automating the search for a new paradigm, you might still expect something in the direction of the model's conclusion to be correct, but there would be the potential for a pretty big blocker.
01:05:35
Speaker
Yeah, and how likely do you think that is, that deep learning as a paradigm does not hold? I think it's unlikely. I mean, I think broadly deep learning, being the paradigm where you have a large neural network trained with a large amount of data, is a pretty general paradigm and has worked in a wide variety of domains.
01:05:55
Speaker
As I was talking about earlier, you've got language, you've got images, you've got videos, games, and the transformer architecture is again an architecture that works across all these different domains. And so I don't see any particular blockers that cannot be tackled within the deep learning paradigm. I think we'll need better memory systems in order to get to AGI. I think we'll need ways of allowing
01:06:22
Speaker
AI to act more autonomously and to act over longer time horizons. But I'm not seeing any reason why that can't be done within the deep learning paradigm. And increasingly, the people who predict that scaling alone will not get you X or Y turn out to be wrong when the next version of GPT comes out. I think the broad paradigm itself is likely but not definitely going to be sufficient for

AI Takeoff Speed Scenarios

01:06:50
Speaker
AGI.
01:06:50
Speaker
What if I take the parameters of your model and I set them to extremes, either very pessimistically or very optimistically? What are the extremes of how fast or slow takeoff could be? You can quite easily get less than a year for takeoff.
01:07:08
Speaker
You know, maybe you only need to go from GPT-5 to GPT-6 or something to go from 20% automation to 100% automation. That'd be quite an aggressive, but not out of the question, claim. You could travel that distance in one year by just spending significantly more on a training run
01:07:28
Speaker
just within one year, and then especially with these kinds of feedback loops speeding things along. So yeah, less than one year is definitely on the table. You can also get things being as long as 20 years if you think that it's going to take a lot of effort to develop AGI, if you think we need really massive increases in the investments and improvements in the algorithms in order to do that, and if you think that
01:07:54
Speaker
the effects of AI automation along the way tend to get bottlenecked by some of the things we were discussing, like bottlenecks from needing to do physical experiments and delays to rolling out intermediate AI systems so you can actually benefit from their productivity effects.
01:08:11
Speaker
So you can get things as high as 20 years, although that is somewhat extreme. It's interesting that a 20-year takeoff is considered slow or extreme. If you take the perspective of a computer scientist in 1970 or 1990 or 2000, a reasonable guess for a takeoff speed might have been 100 years. But maybe that's just my impression. Yeah. I mean, interestingly, a lot of people have changed their timelines
01:08:41
Speaker
to human-level AI recently. I think a lot of that was due to ChatGPT coming out, and that happened even before AI had automated 20% of tasks. So in fact, people did not require seeing 20% automation in order to believe that we could get all the way to AGI. And so it doesn't seem like people had this belief that there was always going to be a really long time
01:09:06
Speaker
between the two, given that even previously skeptical experts are assigning a decent probability to getting AGI in the 2030s now.
01:09:16
Speaker
Let's go through some of the economic impacts of AI, given your model of takeoff speeds. And as we mentioned, you're modeling this using GDP. But I'm just wondering whether there are situations in which you have an enormously powerful AI, but that power is not captured by GDP numbers, potentially because the AI is not aligned with human values. And so it goes off and does something else.
01:09:43
Speaker
that doesn't increase GDP at all. Is GDP a flawed measure of powerful transformative AI? Yeah, it's definitely a flawed measure. And you know, we were discussing earlier, you know, the ways in which GDP can come apart from actual AI capabilities if it's not actually deployed. But as you say, you know, AI could have impacts on the world that are drastic but do not increase GDP. So AI could
01:10:08
Speaker
create a new technology which causes people to go to war, or which disrupts democracies or enables autocracy. Those would be very impactful things that wouldn't be affecting GDP.
01:10:21
Speaker
AI can make us addicted to our phones in a way that really ruins everyone's quality of life, without that being captured by GDP. And, you know, in the worst case, misaligned AI could disempower humanity, and either there'd be no change to GDP, or GDP would be growing very quickly but humans are actually not in control. So, certainly, GDP is a very flawed metric, yeah.

AI's Economic Impact Beyond GDP

01:10:44
Speaker
I mean, you know, already today, AI is doing loads of impressive things, you know, beating the best world experts at Go,
01:10:50
Speaker
you know, making amazing art and that again has not, you know, impacted GDP very much. You know, the benefit of GDP is that it is tracking the production of goods and services that people are willing to pay for. And so it is at least one way of trying to capture in a general sense, how much are we kind of moving the needle on things that people really want. But yeah, it has a lot of drawbacks.
01:11:17
Speaker
One of your feedback loops, the AI automation feedback loop, relies on us using our AIs to automate AI research. What if we choose not to do that? What if we choose instead to use our AIs to, as you mentioned, create ever more enticing content for our phones or something like that?
01:11:36
Speaker
I don't know to what extent this is happening in the world today but you do hear complaints from scientists all the time about not enough funding being available for basic research while there's a lot of funding available for say online content or whatever else is most profitable.
01:11:55
Speaker
Yeah, I don't have a strong view that we're going to get AI doing lots of basic science before it's used to monetize online content. I think it could go either way, but I do think that at some point we're going to develop AGI, and at some earlier point, we'll have AI that's capable of significantly accelerating basic science. And it won't be too long after we have that kind of science-accelerating AI that it'll be pretty cheap to run
01:12:22
Speaker
those AIs. And so even if there's loads of AI-generated online content, that's not going to prevent scientific institutions that already exist from using the available funding to pay for these AIs that can massively help them with the work that they're doing. It's not going to prevent companies that want to make money by developing new technologies from using AIs to do that. So I guess my ultimate answer here is it's not either-or, and I expect it to be both.
01:12:52
Speaker
I think we could go through some potential objections to your model. The most obvious one and the one you've probably heard a bunch of times is the speculation that there will be bottlenecks all over the place. So bottlenecks to implementation of AI, legal barriers, a thousand bottlenecks all across the economy that will slow everything down and also potentially slow the key feedback loop, which is the feedback loop of AI automation, slow that feedback loop down.
01:13:21
Speaker
One example I have in mind here is that we've had demonstrations of self-driving cars for a long time now, and we've heard rumors that self-driving cars are just around the corner, but they haven't really arrived yet, at least not where I'm living. Could something similar happen to AI? I like the example of self-driving cars. My understanding of what's gone on in that case is it's an issue of robustness, where the technology is there,
01:13:51
Speaker
to drive safely and correctly maybe 99% or 99.9% of the time, but that's not enough. In the area of driving, even a very low risk of an accident is not acceptable, and rightly so.
01:14:07
Speaker
So that has significantly delayed the rollout of self-driving cars. And I think that that's a great example of a bottleneck that we will see with AI. There'll be certain areas of the economy where you need to have a really high level of reliability to roll out AI. I think the places where AI initially has more impact are probably the places where you don't need so much reliability. You know, a lot of the examples I was giving were
01:14:30
Speaker
ones where it's drafting things, making suggestions, but with the human having the ultimate responsibility.

AI in Future Research and Development

01:14:37
Speaker
And so, yeah, I mean, I clearly think that bottlenecks are everywhere, and they will slow things down. I think once you really internalize a view, which is we're going to get AI systems which are as competent along every dimension as top human experts in every domain,
01:15:00
Speaker
And once you really fully internalize and imagine that scenario, that's a scenario where the AI systems are more reliable than humans, significantly so. It's a scenario where you're going to have significantly more car accidents if you drive yourself than if you use a self-driving car. And so while I do see these things causing delays, and I see them sometimes significantly raising the technological requirements for AI actually being profitable and actually being deployed,
01:15:28
Speaker
it doesn't seem to me like this is telling us that deployment's never going to happen or that these bottlenecks are going to be indefinite. It's just saying, okay, actually your AI systems are going to have to be much more competent and clever than you naively thought before you get really significant real-world impact. So, you know, I have updated based on these kinds of considerations, and the update has been in the direction of thinking,
01:15:52
Speaker
Okay, we'll need the underlying technology to get really pretty good before we have transformative economic impacts and before we have really wide deployment. But it hasn't seemed to me like these kinds of considerations should update me towards thinking that AI will never be used in self-driving cars or will never be used in the economy because I just do think we'll get to this point where AI systems are better than the human experts in every dimension.
01:16:18
Speaker
Yeah, if we imagine, say, a key engineer in an AI hardware company such as ASML or TSMC, this person has a lot of tacit knowledge about how to design chips, and this knowledge is not necessarily written down anywhere. Would training on that knowledge or using that knowledge be necessary to get to expert level performance? And if that's so, well, then it seems that that's a pretty substantial bottleneck because if the
01:16:48
Speaker
If the tacit knowledge by definition isn't written down and can't be trained on, well, then it can't be incorporated into the model. Do you think that's a substantial barrier?
01:16:59
Speaker
I think it's a great example, and I do think that data limitations of this kind, where to do a job well you need to have a specific kind of data or experience that's relevant to the context of a specific job, can be a bottleneck. I'm not assuming that we get AI that's so capable it can just immediately derive everything about TSMC from first principles; that may not be physically possible or computationally possible.
01:17:27
Speaker
And in any case, I think the fastest way to get AI to do the work of a TSMC person will not be for it to derive it all from scratch, but would be for it to learn from the experts.

Market Valuations and AI's Impact

01:17:40
Speaker
So imagine we have an AI system that is a more competent, significantly more competent, hardworking worker than a top human grad student.
01:17:52
Speaker
TSMC is choosing: okay, who do we want to hire onto our staff? You can hire this kind of human worker, who will work eight hours a day and command a huge wage, or you can hire this much more generally intelligent, faster-learning, harder-working
01:18:09
Speaker
kind of AGI worker, where the way we teach it is by having it have open-ended conversations with our current workers, installing cameras in our factories so that it can look at the work we're doing and how we're doing it, paying for robotics that the AGI is able to operate remotely in order to do the physical labor in the factory. And
01:18:34
Speaker
at that point, it'll make a lot more sense for these companies to get AI and robotics workers in place of their human workers. It only takes one AGI to have conversations with the top 100 TSMC experts, having maybe
01:18:51
Speaker
intense conversations over a period of weeks or months, following them around their work, trailing many different experts in parallel, because, of course, you can run many different copies of the model in parallel. It doesn't seem to me like it would take more than months for an AGI to learn what it needs to know, through a combination of those approaches, to be able to do all the cognitive work that someone at TSMC does.
01:19:15
Speaker
And so while I think, again, this is going to be a bottleneck and this will slow things down compared to if all the TSMC instructions were just on the internet,
01:19:22
Speaker
it doesn't seem like this is a permanent delay. This is like a delay of months, maybe years, from the point at which you have an AI system that's able to flexibly learn as well as or better than you. And of course, we might simply be surprised again at what more advanced models can do and what they can infer from public data. The expert engineer at TSMC arrived at his tacit knowledge through
01:19:46
Speaker
learning a lot from the publicly available data, and maybe advanced AI could do the same. So we shouldn't rule that out. It's just an interesting case. I think my guess is that you will need to speak to some experts and to look at what's happening inside the factory to get
01:20:01
Speaker
all of the tacit knowledge, but I agree that you can probably get more from the internet than you might naively think. Is it the case that the market, so the financial markets, do these markets disagree with your predictions? If we look at the valuations of AI companies, they have increased a lot recently and they are very, very high, but shouldn't they be even higher potentially if takeoff speeds are very fast and AI that's truly transformative is quite close?
01:20:31
Speaker
I think you're right. I think that if everyone had my views on where the technology is going and what economic effects it's going to have, then these companies would have higher valuations.

Bottlenecks in AI Progress

01:20:41
Speaker
I think that there's a post called transformative AI and the efficient market hypothesis that makes this point. It kind of actually zooms in on the case of interest rates and argues that interest rates should be higher if we expect economic growth to accelerate. And I think I basically
01:21:01
Speaker
agree that the market consensus is not in line with my prediction. I think it's a little bit less clear in terms of
01:21:11
Speaker
how efficient you should expect the market to be in this case. It's unclear how easy it is to make lots of money via having a prediction which is different to the market and unclear whether a few people making bets and making money off this is going to shift the market to be back in line with our expectations. I'm uncertain as to
01:21:32
Speaker
whether to interpret the evidence as: the market is efficient and there's a consensus that I'm wrong, versus: most people think I'm wrong, but it's possible that the people who are most informed actually agree with me, and they haven't been able to shift the overall market because there aren't enough of them and the market isn't sufficiently responsive to the kinds of investments they're making.
01:21:56
Speaker
What are some of the strongest objections you've heard to the picture we've sketched here of quite fast takeoff speeds? Is it around bottlenecks in the economy that we talked about, or is it something else? So I'd want to distinguish between the capability and the impact takeoff speeds. On the capability side, probably the strongest objection
01:22:16
Speaker
I've heard is one that we touched upon already: that simply scaling up the current approaches won't be sufficient to go all the way to AGI. And the version of that objection which I find the strongest is one that says, yes, you can probably do it eventually within the deep learning paradigm, but to get to AGI, there's going to be a lot of nitty-gritty work and reconceptualizing
01:22:43
Speaker
exactly how you're deploying your systems and adding things like memory and adding other kind of bells and whistles. And that's not going to happen very quickly. And, you know, the framework I'm using, it abstracts away from a lot of that complexity and just has this kind of oversimplified notion of the quality of algorithms. Maybe actually that simplification is leading us astray in a significant way. And there's going to be kind of algorithmic barriers to AGI within the deep learning paradigm that are very difficult to overcome.
01:23:12
Speaker
If that's the case, then I think that could delay takeoff. Another thing that could delay takeoff relative to my model, which actually does update me, is the possibility that we're bottlenecked on the data for getting AGI or for getting superhuman systems, where there's been this massive reserve of available data online that we've been benefiting from in recent years.

Rapid AI Progress Scenarios

01:23:34
Speaker
But once we tried to get to superhuman performance,
01:23:37
Speaker
it's going to be harder to elicit that from existing types of data, because existing data will not exhibit superhuman performance as readily as it does human performance, because the data is produced by humans. And so I could see there being a bit of a slowdown or a bit of a headwind in terms of going past the human level because of that, and because of more generally running out of the internet data that has so far been readily available.
01:24:06
Speaker
So those are the two objections on the capability takeoff side. And then on the impact takeoff, this kind of economic impact stuff, I think that there's no one objection which I find hugely convincing. One thing you can say that I do find somewhat convincing is just to say that there's loads and loads of different possible bottlenecks. There's kind of the time to design physical robots that will need to actually do physical work.
01:24:32
Speaker
that you'll need to actually really change economic growth. There's kind of limits on physical resources you can use to drive the AIs and the robots. There's time that you need to do experiments. There's bottlenecks from kind of humans resisting being replaced and from regulations. And maybe none of these bottlenecks is individually enough to really block AI, but they all combine together and they just really drag out the time of AI's economic impact. And then maybe
01:25:00
Speaker
by the time AI is widely deployed, then...
01:25:04
Speaker
for some reason or other, it's not able to drive really transformative tech progress, because maybe by then we've already reached the ultimate limits of technology. I mean, that's the part of the story I don't find convincing. My overall honest view is that I think there'll be a lot of bottlenecks, I think they'll eventually be overcome, and at that point I expect things to be very, very crazy. But if there's somehow a way that it could take us so long to remove all these bottlenecks that there's no room for AI to drive much faster technological progress once they're removed, that would be
01:25:34
Speaker
That would be where I would go if I was trying to give the strongest story for why this is all wrong.
01:25:39
Speaker
What's something you've changed your mind on over the course of writing this report? One thing you mentioned as a takeaway is that you now think that it's more difficult to avoid getting to artificial general intelligence by 2060. Is that the biggest takeaway or are there other things? That's one big takeaway. Thinking about these feedback loops, both the investment feedback loop and the automation feedback loop, made me realize that
01:26:06
Speaker
even if we don't get to AGI that can do all cognitive tasks by, let's say, 2040, it seems hard for me to imagine we haven't gotten to AI that makes a lot of money in the economy and to AI that is able to automate a pretty significant fraction of the cognitive work involved in AI R&D. And so once you get to that first stepping stone, that's going to stimulate further investment and that will accelerate further AI progress. And it becomes quite hard for me to imagine
01:26:35
Speaker
a world where we don't get AGI by 2060. It's not impossible, but it becomes harder for me to imagine, because I kind of have to lower the capabilities of what AI can do by 2040 to such a low point that I no longer really believe those predictions. There really is the possibility that this AI automation feedback loop goes pretty quickly, that
01:26:56
Speaker
regulations don't interfere with it very much because R&D is typically not a very regulated field and that you could get some really scary, fast progress in the underlying AI technologies around the time at which we reach human level systems.

AI's Future Risks and Superhuman Challenges

01:27:11
Speaker
I mean, even if it doesn't immediately have economic impacts, I think in terms of the risks it could pose, that would be very risky and destabilizing if it does in fact happen as quickly as seems technologically feasible.
01:27:22
Speaker
Yeah, I think it's worth spending a little time on that picture. I've been walking through a number of objections to your model and to your view of AI progress. If we simply assume that your view is correct and we take your kind of most likely way that things will go, how does it look to you? And I think we should spend more time reiterating why this would be potentially dangerous.
01:27:47
Speaker
One scary possibility is that AI systems developed in, let's say, 2030 are able to automate a very large fraction of the work done by AI researchers, let's say, able to automate 80% of that work. They do not themselves pose the most extreme risk. They don't themselves pose the risk of disempowering humanity. They pose other risks, but not that particular risk. But what they do is they enable
01:28:17
Speaker
progress from that point to be significantly faster. They're helping Nvidia design significantly better AI chips, and so the pace at which those AI chips are improving is three times as fast as it is today. And similarly, they're allowing the design of AI algorithms to be significantly accelerated, let's say again, three times faster than it is today.
01:28:38
Speaker
So rather than the quality of AI chips doubling every two years, it's doubling every eight months. And rather than the quality of AI algorithms doubling every 12 months, it's doubling every four months. And then this leads to it only being, you know,
01:28:56
Speaker
a couple of years later that we have AI that can not only do 100% of the tasks done in AI R&D, but is actually significantly superhuman on many dimensions. And then we've got this period of just a small number of years where some of the most extreme risks from AI are emerging. In particular, the risk of superhuman AI systems that humanity loses control of, that ultimately end up determining the future of how history plays out.
01:29:26
Speaker
And because it's happening in just a few years, we don't have much time to study those systems.
01:29:31
Speaker
and understand the risks they pose. We don't have much time to use slightly weaker systems to help us solve the problem of controlling those stronger systems. We don't have time to get governance proposals in place that manage these risks because regulations typically take a long time to come into play. It's hard for labs to coordinate without that governance or for labs to coordinate on going slower than they would be able to if they just plowed on full speed ahead.
01:30:01
Speaker
we end up just kind of hoping for the best, and some actor develops superhuman systems without really properly understanding what those systems are capable of and what the risks are. Are we potentially helped by the fact that if this transformative AI is quite close, then it'll probably be developed by companies that we know of and with techniques that we are already aware of? Is this any reason for hope here, that because the paradigms of these companies are well known,
01:30:31
Speaker
It might be easier for us to control them, even though everything is happening incredibly quickly. Interesting question. I think it's true that if we switch to a totally new paradigm of AI development, then that might undermine some of the work we've already done in terms of how to understand and control these systems.
01:30:54
Speaker
It's hard to predict whether a new paradigm would be more or less easy to work with in terms of understanding and aligning these systems, and I won't speculate on that, but I think, all things equal, yes, it's nicer to work with a paradigm that we're already familiar with. The flip side is that we don't have a solution at the moment to how to control superhuman AI systems. And there's also no really strong candidate solutions that people are excited about. The most exciting

Factors Beyond Compute in AI Development

01:31:23
Speaker
example that people point to is the plan of using AI systems to come up with a better solution, which is clearly a can-kicking solution. And so one reason it could be nice if we flipped to a new paradigm would be that maybe there would actually be a plan for aligning systems that was a little bit more concrete.
01:31:43
Speaker
Let's switch topics slightly here. We've been talking about how AI progress can be driven by lots of training compute and data. But you've also done some work on how we might get AI progress without additional compute. And I think just to introduce this topic, we could talk about compute governance as a paradigm and how this paradigm might break if we can get a lot of AI progress without any additional compute.
01:32:14
Speaker
Recently, the main driver of AI progress, I think, has been increasing the amount of compute, the amount of computational power used to develop the most advanced AI systems. And so I've talked a bit about how, you know, the quality of chips are getting better over time, you know, cost efficiency doubling every two years and how spending has been increasing by a factor of two or three each year. But there are other
01:32:41
Speaker
drivers of AI progress, one of which I've already talked about, which is the efficiency of the training algorithms. I mentioned that you're able to use your compute twice as efficiently this year compared to last year due to improvements in those algorithms. And there are actually other drivers of AI progress that I haven't even discussed yet. So there's improvements in data, for example,
01:33:03
Speaker
Reinforcement learning from human feedback is a mechanism for using data from humans to tweak the performance of a model like GPT-4 after it's already been trained on a huge amount of internet text. There's a technique called constitutional AI that was developed by Anthropic where AI models
01:33:26
Speaker
review their own outputs and score themselves along various criteria, and that is then used as data to improve that AI model. And then there are other kinds of improvements in data, like creating high-quality datasets in things like mathematics and the sciences. And there was recently a very large improvement in the mathematical abilities of language models with a paper called Minerva, where the main thing they did is they just

AI-Generated Data and Quality Challenges

01:33:54
Speaker
took a lot of math and science papers and cleaned up the data for those papers. Previously, certain mathematical symbols had not been correctly represented in the data, and so the data hadn't really shown language models how to do math properly. They cleaned that data so that all the symbols were represented correctly, and just from that data improvement, mathematics performance
01:34:21
Speaker
improved very dramatically. So that's a source of improvement which isn't from compute or from better algorithms, it's just from high quality data.
01:34:29
Speaker
Then there's improvements coming from better prompting. People may have heard of the prompt "think step by step" or chain-of-thought prompting, where you simply encourage a model, when you give it a question like, you know, what's 32 times 43, to think it through step by step instead of outputting an answer straight away, so that it does intermediate calculations.
01:34:51
Speaker
And that can improve performance significantly on certain tasks, especially tasks like math and logic that require or benefit from intermediate reasoning. There are other prompting techniques as well, like few-shot prompting, where you give the AI a few examples of what you want to see, and that can significantly improve performance.
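As a concrete sketch of the prompting styles just described, here are three hypothetical prompt strings. The wording is made up; the point is only the structural difference between a direct prompt, a chain-of-thought prompt, and a few-shot prompt.

```python
# Hypothetical prompt strings illustrating direct, chain-of-thought, and few-shot prompting.

direct_prompt = "What is 32 times 43? Answer with just the number."

chain_of_thought_prompt = (
    "What is 32 times 43?\n"
    "Let's think step by step, writing out the intermediate calculations "
    "before giving the final answer."
)

few_shot_prompt = (
    "Q: What is 12 times 11?\n"
    "A: 12 * 11 = 12 * 10 + 12 = 132\n"
    "Q: What is 25 times 14?\n"
    "A: 25 * 14 = 25 * 10 + 25 * 4 = 350\n"
    "Q: What is 32 times 43?\n"
    "A:"
)
# The same underlying model tends to do better on the second and third prompts,
# because they elicit intermediate reasoning or show worked examples.
```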
01:35:08
Speaker
And I think this is kind of funny that this might be similar to how humans work. So if you ask yourself to think through a problem step by step, you probably get a better result than just coming up with an answer immediately. If you ask yourself to generate five answers to a question, you might get a better result than if you only generate one and so on. Yeah, I completely agree. I think there's an analogy there for sure.
01:35:30
Speaker
So we've had improvements driven by better data, improvements driven by better prompting. There's also been improvements driven by better tool use. There's a paper called Toolformer where they train a language model that was initially just trained on text. They train it to use a calculator and a calendar tool and an information database. And then it's able to learn to use those tools. And actually, ultimately, it kind of plays a role in generating its own data for using those tools.
01:35:59
Speaker
Then its performance, again, as you might expect, improves on downstream tasks. With GPT-4, if you pay for the more expensive version, you can enable plugins which allow GPT-4 to use various tools like web browsing, and use a code interpreter to run code experiments. So that's been driving improvement. There's also a class of techniques referred to as scaffolding, where the AI model is
01:36:28
Speaker
prompted to do things like check its own answer, find improvements, and then have another go at its answer; where it's prompted to break the task down into sub-tasks and then assign each of those sub-tasks to another copy of itself; where it's prompted to reconsider its high-level goal and how its actions are currently
01:36:48
Speaker
helping or not helping achieve that goal. That kind of scaffolding underlies AutoGPT, which people may have heard of, a kind of agent AI that is powered by GPT-4 and this scaffolding that structures GPT-4's thinking.
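Here is a minimal sketch of what such a scaffolding loop can look like in code. The ask_model function is a hypothetical stand-in for a call to whatever language model you are using, and this is an illustration of the general pattern, not a description of how AutoGPT is actually implemented.

```python
# Minimal illustrative scaffolding loop: decompose, attempt, self-critique, combine.
# `ask_model` is a hypothetical stand-in for a call to a language model.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("Replace with a real LLM API call.")

def solve_with_scaffolding(goal: str, max_revisions: int = 2) -> str:
    # 1. Break the high-level goal into sub-tasks.
    plan = ask_model(f"Break this goal into a short list of sub-tasks:\n{goal}")

    results = []
    for subtask in plan.splitlines():
        if not subtask.strip():
            continue
        # 2. Have a (copy of the) model attempt each sub-task.
        answer = ask_model(f"Overall goal: {goal}\nSub-task: {subtask}\nGive your best answer.")

        # 3. Ask the model to critique and revise its own answer.
        for _ in range(max_revisions):
            critique = ask_model(f"Critique this answer to '{subtask}':\n{answer}")
            answer = ask_model(
                f"Sub-task: {subtask}\nPrevious answer: {answer}\n"
                f"Critique: {critique}\nWrite an improved answer."
            )
        results.append(answer)

    # 4. Combine sub-task results, checking them against the original goal.
    return ask_model(f"Goal: {goal}\nCombine these results into a final answer:\n" + "\n".join(results))
```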
01:37:03
Speaker
How much do you think we can gain from these techniques that use the output of one AI to generate data that's then used to improve the AI itself? Do you think we can make up for potentially running out of human-generated data by using this AI-generated data? I think that that will be one
01:37:25
Speaker
one tool that is used to get around the data problem, yes. So you can imagine AIs paraphrasing existing internet documents so that they're not exact repeats but maintain the meaning, and then training on those. Already there are papers where AI generates attempted solutions, for example, to a coding problem, and then those are checked

Internal Company AI Progress

01:37:45
Speaker
automatically, and then only the good solutions are fed back into the training data. There will probably be lots of creative ways in which AI companies try to produce more high-quality data, and increasingly they'll be able to leverage capable AI systems to produce it. While AI systems are less capable than humans, there's going to be a limit there, because ultimately the data from the internet is coming from humans, and so the data that AI is producing might be of
01:38:13
Speaker
lower quality. And there are also problems you get at the moment where, if you continually train on data that you're producing, then progress does tend to stall, as I understand it from the papers I've read. But I think they'll be pushing on improving those techniques. That's a long list of ways we might get AI improvements without additional compute. The last one I wanted to mention was efficiency gains. So shortly after ChatGPT, based on GPT-3.5, was released, there was a turbo
01:38:43
Speaker
version of ChatGPT released that was much faster and much more efficient in terms of the amount of compute used on OpenAI's servers. And there are various techniques, like quantization and FlashAttention, that just allow you to run a model with very similar performance to your original model but use less compute to do so.
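To give a feel for why quantization saves memory and compute, here is a back-of-the-envelope calculation. The model size is a hypothetical assumption, and real savings depend on the specific method and on how much accuracy you are willing to trade away.

```python
# Back-of-the-envelope memory savings from quantization, illustrative only.
PARAMS = 70e9  # hypothetical model size in parameters

bytes_per_param = {"fp16": 2, "int8": 1, "int4": 0.5}
for fmt, nbytes in bytes_per_param.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{fmt}: ~{gb:.0f} GB just to hold the weights")
# Halving the bytes per parameter roughly halves the memory (and often the
# compute) needed to serve the same model, at some cost in precision.
```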
01:39:00
Speaker
And so, again, you don't need additional future chips to benefit from that improvement. So these are all improvements, the ones you've listed here, that you can get without more compute. And why would all of these improvements without additional compute be a problem for the paradigm of compute governance?
01:39:19
Speaker
Compute governance is one, I think, very exciting approach to governing the risks from advanced AI. And so, you know, very briefly, the idea behind the approach is that there are a very small number of organizations that produce the
01:39:40
Speaker
chips for the top AI systems today. And there are also a small number of organizations that produce some of the equipment that you need to produce those chips in the first place. So TSMC in particular is the only organization that produces the AI chips at the very top of the range. And then there's a company called ASML.
01:40:03
Speaker
which is the only company that is able to make the equipment which is used to produce those chips. So there's a very concentrated supply chain for cutting edge AI chips. And so it seems like it could be possible to use that concentrated supply chain to track where the best computer chips go, who they're sold to, who controls them, and thereby track who is able to develop the most powerful and dangerous AI systems.
01:40:30
Speaker
then that gives you a way to monitor what those actors are doing and how quickly they're increasing the capabilities of the AI. So you can see, okay, we know that no one's going to train an AI that's significantly better than the best yet because we know where all the computer chips are and no one has enough computer chips to train an AI that's that good. So we have some kind of assurance.
01:40:52
Speaker
Yeah, but that begins falling apart if AI companies can get AI progress internally, without buying lots of new chips, without relying on these supply chains, simply by all of these techniques you sketched out. How big can the gains from all of these techniques be, do you think?
01:41:11
Speaker
Yeah, it's a great question. I agree, it's a kind of scary possibility. One caveat I want to add right up front is I don't think that these techniques alone, with small amounts of compute, are going to be enough to develop really dangerous systems. So I think that if we're tracking where the high-end compute goes and who has access to it, then that will probably be enough to catch any developer that might develop a really high-risk system.

Governance and Security in AI Development

01:41:40
Speaker
Well, I think the trouble is that once you develop a really capable AI, and as we've discussed, you could then be running potentially millions of them in parallel, or having them think a hundred times as quickly as human researchers and working day and night, then it's possible that these other techniques that don't rely on additional compute could give, you know, a burst of progress,
01:41:59
Speaker
where maybe you can improve the efficiency at which you're running your AI systems by a factor of 100. Maybe you can improve the efficiency of your training algorithms, again, by a factor of 100. So now, with the compute you already have, you can effectively train something that's a significant step up in intelligence from the GPT model you had before. And then maybe, in addition, you're getting big gains from the quality of the data and the scaffolding and the prompting that are really significantly increasing capabilities.
01:42:25
Speaker
probably the only organizations who will be able to do that are ones that have already got a lot of compute and so can have all these kind of AIs doing this AI research for them, advancing all these techniques. But I think the risk here is that
01:42:39
Speaker
it becomes very hard to monitor and measure the AI progress and govern it for organizations that have kind of gone over this threshold where the kind of AI feedback loop is powerful enough to power very significant progress by these non-compute avenues. At that stage, I think we need to make sure that our governance system
01:43:04
Speaker
extends beyond just tracking and measuring compute to having measures for tracking the progress within these organizations that have very powerful AI systems, and ways to catch whether these organizations are very rapidly improving their AI systems, so that we can monitor and govern that,
01:43:27
Speaker
trying to evaluate whether AI systems within these companies are already becoming very capable. There's a two-stage process. First, we track compute, and then we will be measuring
01:43:41
Speaker
how capable the AIs are at those companies that are using a lot of compute. And then for those particular companies, we want to be asking: is there a feedback loop enabled just within this company, where that company is able to have very rapid AI progress without even getting more compute? And so we just need to be monitoring those top
01:44:00
Speaker
AI companies in this way. Yeah, I think there's some excitement about evaluating these models for dangerous capabilities. I think one question I always have there is just if a model fails some evaluation, what do we do then? I think that we want companies to pre-commit to what they're going to do if models fail a particular evaluation ahead of time so that there's no ambiguity. There should ideally be a process in place
01:44:29
Speaker
which prevents the company from just saying, ah, let's just go ahead anyway, even if in advance they would have said that this was a cause of concern. So you can imagine a process where an AI company publicly commits to do a certain test for dangerous capabilities. They also publicly commit that if that dangerous capabilities test is triggered,
01:44:47
Speaker
then they will pause training until a broad group of stakeholders has agreed that they can continue training. That broad group of stakeholders might just be the company's board, if they have a board which is empowered to represent social interests and has a remit beyond just profit maximization. You can imagine it being a broader group of stakeholders still, where
01:45:08
Speaker
There are people in regulatory authorities or other auditing organizations that the company is committed to consult, get, you know, kind of a majority of agreement from before continuing with its training run.
01:45:22
Speaker
Then the company could also commit to having whistleblower practices in place so that if it's not following this process, any employee can anonymously report that and is encouraged to do so. One possibility you mentioned somewhere is a case in which some company has trained a powerful AI,
01:45:42
Speaker
And because their information security or their cybersecurity isn't what it should be, that model leaks and can be potentially used by bad actors. You mentioned right in the beginning the possibility of bioterrorism via a capable model. What are the best solutions for keeping these models safe or for securing the data?
01:46:06
Speaker
My understanding is that companies are not at the stage where they can say that their models are being kept safe: certainly, if a state actor wanted to steal the weights of a cutting-edge AI system, they would be able to do so very easily, and probably even lesser, smaller threats than that might be able to steal model weights without an excessive amount of effort. Yeah, I think probably AI companies
01:46:29
Speaker
should and probably are seeing it as one of their priorities to improve their information security because of these

AI's Dual Impact: Human Flourishing or Risks

01:46:35
Speaker
risks. That would be a great improvement, I think. It's in the interest of the companies themselves. It seems like a win-win for me. I don't know if you agree with that. People have sometimes contrasted
01:46:45
Speaker
the desire to be the responsible actor that develops a powerful AI system first, for fear of a less responsible actor developing it instead if you didn't plow on ahead. They've contrasted that with the desire to go slowly and cautiously yourself. I think those two motives come together in this case, where even if your main worry
01:47:04
Speaker
is actually about a bad actor developing AI systems first, you still want to improve your information security. Everyone, both the people who are worried about these systems being unsafe and the people who want to get there fast themselves, can all agree that we want better information security. Now, if a company was really irresponsible, I can imagine it just saying, yeah, I don't care if some bad actor steals our AI, we still just want to make money on the US market. But the AI companies aren't going to do that.
01:47:33
Speaker
Yes, we've discussed how AI might improve via a lot of additional compute or via no additional compute. You've talked about how this might have a transformative economic impact.
01:47:44
Speaker
One question I have is, if you take an all-things-considered view of this, do you think this will turn out well for humanity? Because a lot of economic growth could be fantastic and has been fantastic for living standards in the past. Could we be entering into a great time, or could we potentially be entering into a dangerous time? I think it could be really good or it could be really awful. I think the upside could be really high, and I wouldn't
01:48:14
Speaker
personally think about it in terms of economic growth, but I think about it in terms of human flourishing. You could have an end to illness, an end to poverty, an end to material needs. You could have the possibility, if you wanted to, of going on any adventures or fulfilling any dreams you'd always wanted to pursue.
01:48:37
Speaker
you know, with new technologies, really, really incredible things might be possible. And I think that that could be a really amazing future. I think it's really hard to paint a concrete version of what that looks like; you could analogize it to trying to tell someone 2,000 years ago about all the luxuries and good things in modern society. You could point to just absolutely incredible entertainment, going into a simulated world where you're on a real adventure.
01:49:03
Speaker
It's such a difficult thing to do to sketch out how amazing things might be. It always kind of feels flat when you say it out loud in a sense. But I see what you're aiming for here. The picture you're painting is of a future in which it could either be very good or very bad. Do you see a potential for a kind of middle scenario in which the world continues more or less as it has been for the past 100 years?
01:49:32
Speaker
Is that also a live option? I think it's possible. So you could imagine hitting a real wall with AI development.
01:49:41
Speaker
And if that happens, then my default expectation would be that, yeah, the kind of things continue as they have been for as long as it takes for us to get around that wall. And if that wall is really very permanent, then we begin to get into worries about the rate of technological progress stagnating as the population begins to shrink because fertility is below replacement rate. And you can get into actually other worries if you really start to play out this world where we don't reach
01:50:09
Speaker
AGI. You can end up kind of stuck at current levels of technology for a long time, actually. Okay, let's end on a lighter note and talk about AI and board games. I've been thinking about what it means when an AI becomes superhuman at chess, for example, as happened in, I think, 1997, or Go, which happened in 2016.
01:50:34
Speaker
I mean, in the past, you would have people talking about how chess is the height of human intellectual ability. But now it just seems like humans are playing a lot of chess, and humans are interested in other humans playing chess. And even though there are some people who are very into chess who watch AIs play against other AIs, it seems that this is a domain in which humans are still very relevant. Do you think there's some lesson there for broader AI automation of the economy? Probably.
01:51:04
Speaker
I am also a little bit naively surprised at how many people are still really excited about the game. I think I had the attitude of, okay, well, if the AI can do it better than me, that kind of takes some of the excitement out of it.
01:51:16
Speaker
That said, I'm a passionate Diplomacy player, and people who follow AI closely may know that an AI has recently gotten pretty good at Diplomacy. I think still less good than the best humans, but I think better than the average human, maybe, at least in the text-based game. And it hasn't made me any less excited to play that game. So I think I was probably just kind of not adequately imagining what it would feel like for AI to kind of be matching

Human Relevance in an AI Future

01:51:46
Speaker
my performance in chess. Maybe an implication could be, you know, even once AI is better at kind of any task that you can imagine in the economy, there are still going to be people who are willing to pay for humans to do tasks, because that's something they find particularly interesting. You know, we're still interested to watch humans play chess. We'll probably still be interested to see humans produce art. Maybe we'll still want to kind of have human carers, human priests. So in terms of an economic role for humans, I think
01:52:15
Speaker
that points towards us not being kind of totally obsolete, because humans kind of like to watch other humans do things, and that can give us some kind of job. Is it worrying if AIs are getting good at the board game Diplomacy? Does this mean that they might be able to do diplomacy in the real world, or that they might be able to use deception, or, you can tell me about the details of the game, but that they might be able to set up some agreement and then break the agreement afterwards? It is a game where deception can get you a long way. I would have
01:52:45
Speaker
thought, before it happened, that this would be a milestone that would make me quite scared about AI deception and manipulation, because it's a fairly complicated environment, and, you know, the social dynamics are potentially quite complicated. And it's an exhausting game to play. So, kind of naively, I thought, okay, if an AI is able to win at this game, then it's really very socially competent and persuasive and manipulative.
01:53:09
Speaker
In fact, when you see the AI system that's actually able to match some amount of human performance on Diplomacy, it's not nearly as scary as you might have imagined. It's trained on loads and loads of different examples of Diplomacy in particular, and loads of examples of messages that humans sent in Diplomacy games. And it's got a kind of engine which is custom-built for choosing what Diplomacy moves to make, and that won't easily generalize outside of that domain.
01:53:38
Speaker
It seems like it's actually reaching that threshold not by kind of thinking of new and genius plans and manipulating humans on an untold level, but just kind of by really learning the mechanics of the game, not making mistakes, and being consistent and reliable. So it's actually a lot less scary than I would have thought, and I think it speaks to the difficulty of

AI as Agents for Open-Ended Tasks

01:53:57
Speaker
designing in advance a benchmark that measures a scary capability and that will actually, when that benchmark is passed, make you scared. Because often, you know, you pick a benchmark which seems scary, but then the AI system that matches that benchmark just doesn't actually end up being as scary as you thought it might be.
01:54:17
Speaker
Maybe in 2015 or so, DeepMind talked about a strategy for getting to artificial general intelligence which involved playing ever more complex board games, and having these reinforcement learning agents in these kinds of worlds where they're able to navigate more and more complex and real-world games. That's one strategy for getting to artificial general intelligence, and it kind of points towards more agency for the AI.
01:54:45
Speaker
In the current paradigm of large language models, there's also a similar kind of convergence towards agency in AI, at least that's what I'm hearing. Why is it that both of these strategies push towards more agency or more agent-like behavior in AI?
01:55:04
Speaker
I think the main thing pushing language models towards being more agent-like is that it's useful to have an agent, because an agent can be more autonomous, do more open-ended tasks, and potentially automate larger chunks of your workflow.
01:55:20
Speaker
That's why I expect people to continue to try and improve on and iterate things like Auto-GPT that turn language models into agents. So I think probably there is a kind of economic force pointing in the direction of creating agents. And just in general, we want to use AI to do useful things in the world, therefore we'll kind of try and make them more agentic and autonomous. I think there's probably a different thing that explains DeepMind's approach, where they were using reinforcement learning. They were probably making a bet that the most promising
01:55:48
Speaker
technological trajectory to get to that end point of an agent was just kind of training agents the whole way. And it's been quite good news, from my perspective, that actually it seems like it makes more sense to first just train language models to kind of imitate human text and then later compose these kinds of little chatbots into agent-like things. That, I think, makes it more promising that we could actually understand why these agents are behaving the way that they are.
01:56:15
Speaker
Tom, thanks for spending a lot of time with us. It's been very interesting for me. Thank you so much. It's been a pleasure.