Introduction of Guest and Topic
00:00:01
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker. On this episode, I talk with Robin Hanson, who's an economist at George Mason University. Robin has also spent decades researching artificial intelligence, and that is what we talk about on this episode.
AI Development Metrics and Safety Concerns
00:00:20
Speaker
We talk about which metrics might be useful for measuring AI development,
00:00:26
Speaker
what we can know based on the current data about how AI development is going and at what speed. And we talk about whether AI development might be dangerous in a way that would make it rational for us to research AI safety. Here's Robin Hanson. Robin, welcome to the podcast.
Robin Hanson's AI Journey
00:00:46
Speaker
Hello, Gus. Fantastic. Tell us a bit about your history working on AI and debating AI.
00:00:56
Speaker
So as a graduate student at the University of Chicago in 1982 or '83, I hung out in the physics library sometimes and would browse various journals. And I came across some articles in an artificial intelligence journal by Douglas Lenat.
00:01:23
Speaker
where he was describing his famous Eurisko system, an improvement of his previous AM math system, and I was very impressed. I thought that that was really cool.
00:01:39
Speaker
And at a similar time, there were many news media articles about a revolution in artificial intelligence happening, and gushy sorts of predictions about how AI was about to just change the economy radically and change jobs and all sorts of things.
00:02:00
Speaker
That was impressive. I was impressionable. I was young, naive even, and impressionable. And I heard that basically out in Silicon Valley, the future was being created in AI research. At the time, I was a graduate student in philosophy of science who had then switched into physics, because I had to get a physics master's as part of my PhD in philosophy of science. And then I decided to just get the master's degrees and set aside the
00:02:31
Speaker
questions that I had about philosophy of science. And so, in the physics library, reading about AI, I saw this bright future and I decided to go join it. So I took the two master's and went out to Silicon Valley to join the future of artificial intelligence, roughly believing the gushy forecasts in the media at the time. And
00:02:58
Speaker
I also happened to read some stuff about hypertext publishing, stuff Ted Nelson had written. I had been in contact with some people, I guess, who put me in touch with that. And I also thought that was very exciting. This was the precursor to the World Wide Web. But this was in the early 1980s.
00:03:24
Speaker
And so I left the University of Chicago in 1984 with two master's, in philosophy of science and in physics, and went off to Silicon Valley in order to pursue these two futures that I had seen. One was the World Wide Web, which famously only got started in 1994, but this was 1984, so that was seeing the future coming ten years ahead of time; the other was the artificial intelligence boom that existed at the time, which I wanted to join.
Historical Patterns and Future Predictions in AI
00:03:51
Speaker
So this AI boom, is this the boom where people were trying to implement expert systems? Surely expert systems were a big part of that, yes. Yeah, so this is before the current paradigms of machine learning. Long before. So basically, if we look back now, which I didn't at the time, we could say that roughly every 30 years
00:04:15
Speaker
There's another big boom of interest and concern about AI, and that previously in the early 1960s, there had been a similar boom, and in the 1930s, a similar boom, and in the 1910s.
00:04:32
Speaker
a similar boom 30 years apart. And we are now sort of more toward the tail end of that like 2010s boom, or after the peak at least. But every 30 years for a while, we've had one of these bursts of interest and concern about AI. Interest in the sense there's investment in firms, there's people moving into computer science as students, trying to get jobs.
00:04:58
Speaker
There are gushy articles about progress, concerned articles about what could happen and go wrong. Those have happened again and again, in the 1930s, the '60s, or,
00:05:09
Speaker
you know, the late '60s, the '80s and '90s, the 2010s, roughly at that rate. And we even had a burst of concerns in the early 1800s, with the introduction of the Industrial Revolution. There was also a burst of concern then about where that automation revolution was taking us. Many people were concerned right then, in the 1820s, that machines would in fact take all the jobs away from humans, even back then.
00:05:35
Speaker
So this is kind of like a cyclical view of artificial intelligence, booms and busts. And so this view predicts that we might be headed for another bust period, in which there isn't that much interest, isn't that much funding, not that many papers published. Perhaps not as many, yeah. But of course, even during the low points in this history, there was a lot of activity going on.
00:06:04
Speaker
Is it fair to characterize you as a contrarian among contrarians?
Debating AI Advocates vs. Skeptics
00:06:10
Speaker
I imagine many people in your social circles believe that AI will progress quite rapidly in the coming 30 years, but you don't believe that. Are you pushing against contrarians? Are you a meta-contrarian?
00:06:29
Speaker
So there's always the trouble of judging attitudes and opinions on something where there's an asymmetry in how loud and active people are. So consider, say, 9-11 truthers, right? Now, most people are not 9-11 truthers, but some people are. And those that are, it's much closer to the center of their identity. And they spend a lot more time on it. And they will give you a lot more detailed arguments about it.
00:06:59
Speaker
And if you see a 9-11 truther talking to a skeptic and having a debate, the truther will sound like they know a lot more, because they're studying this stuff and it's the center of their world. And the skeptic will just not be up on all the details the truther is, but the skeptic will still say, this just doesn't all sound right, and I can't be bothered to get into it. Sorry, I've got other things to do.
00:07:23
Speaker
So we see this pattern across a wide range of areas, where there's a small group of passionate people on one side and a lot of uninterested skeptics on the other side. And that makes it hard for outsiders to judge who to believe in a situation like that, because, you know, in any one case it does look like the advocates know more: they've got the details, they've got the numbers, they can quote people, and they've got all the specifics.
00:07:50
Speaker
The skeptics are distracted and not that interested; they have some arguments, but they aren't that careful or thoughtful about them. But there are just a lot more of them, and those other people are, like, getting other stuff done in the world, right? And so, who to believe?
00:08:07
Speaker
That's just a hard situation in general, when we've got a small fraction of passionate advocates in a large world of somewhat indifferent skeptics. I think we have more leverage if we see that situation appearing over and over again, several times: this general situation of a small number of passionate, well-informed advocates and a large number of indifferent people doing other things.
00:08:32
Speaker
But sometimes we see that pattern repeat a lot. That is, we see a small number of people passionately advocating the same sort of thing over and over again across time. And so I think we're in a better position to draw some conclusions there. We can say, well, look, all the previous guys tended to be forecasting stuff happening pretty soon, and that didn't happen. So maybe, you know, if this happened five times,
00:08:58
Speaker
the chance this time is right is maybe no more than one in five, looking at all the previous times. One in five could still be big enough to be worth paying attention to, but you've got a pattern, right? But this depends on the quality of the data that comes before us. So maybe in the past people didn't have access to the same amount of data, and maybe they didn't think as clearly as we can think now. They didn't devote enough resources to it.
00:09:24
Speaker
That's just the general situation for contrarians, right? So the overall stats on contrarians is contrarians are usually wrong, right? Just do the overall data set, right? And so each set of contrarians knows that fact and they're each going to be trying to tell you how they're different.
00:09:39
Speaker
because they think their odds of being right are higher than this baseline rate. And so then that's what you have to judge is, OK, which of them are right? So I think they do vary in some
Evaluating AI Progress and Metrics
00:09:50
Speaker
ways. For example, we talked about UFOs. I might say, well, look, we look at, say, ghosts and fairies. The best evidence for ghosts and fairies just is not as good as the best evidence for UFOs. So I'm going to at least say UFOs look a little better than ghosts and fairies. It doesn't mean they're right. It means I can make some comparisons.
00:10:09
Speaker
Yeah, and hopefully you believe that very fast AI progress is even more plausible than UFOs, for example, or at least more plausible than ghosts and fairies. Sure, sure. I mean, I've tried to do a UFO analysis, that is, what's my prior estimate that UFOs would be aliens, and I give it roughly one in a thousand. So I'll give you more than one in a thousand for a fast view of AI progress, although I'm not sure how much more than one in a thousand.
00:10:38
Speaker
But we'll get into that. Yeah, exactly. So maybe before we get into that, we should think about the metrics we're using to evaluate how fast AI is progressing. Well, let's ask, why are we asking how fast AI is progressing? Why is that the question? That's the question because, well, we were just
00:10:59
Speaker
debating what will happen in the future. And if we have some good measure of how AI has been progressing in the past, maybe we can extrapolate from that into what will happen in the future. I think that's the basic idea. So let me try to suggest a different framing. I'd say there are a lot of technologies we can envision happening in the future. And a lot of current technologies could have been envisioned in the past. And most technologies have some problems associated with things that can go wrong.
00:11:30
Speaker
And at some point, somebody should be thinking about each of those things that can go wrong. So I've got no problem with people worrying about things going wrong and trying to specialize in that and work on it. The major issues are the relative effort people put into thinking about each particular thing going wrong, and also that effort relative to trying to make things go right.
00:11:54
Speaker
There's an upside and a downside, and we shouldn't all be looking at the downside. Some of us should be looking for upsides and trying to work those out too. And then there's the timing question of when is the right time to look at each kind of problem. So if we look at the past, we can certainly say: if you could have, in the year 1000, envisioned the Industrial Revolution happening, you could have imagined tanks and airplanes and maybe nuclear bombs or something; you could have imagined some of the issues that might appear in the industrial era.
00:12:23
Speaker
car accidents, for example. And then there'd be the question of when is the right time to start working on those. And I think it's pretty obvious that for most of the things you could have roughly a vision far in the future, it was just too early to work on them until you could have relatively concrete versions of the problems in front of you to work with.
00:12:47
Speaker
You work on car accidents when you had a rough idea of how big a car was, how fast it would be going, how curvy would roads be, how much weather would matter. You'd want some rough idea of the basic context so that you could start to work on those problems.
00:13:07
Speaker
We should say something about the assumptions we're both working with here. So what we are interested in, or what I'm interested in at least, is the question of whether we could
00:13:18
Speaker
within the next, say, 30 or 50 years or 100 years, get to a point where AI is very, very powerful. Maybe it is general, so maybe we have artificial general intelligence, and maybe this type of intelligence is misaligned with human goals. This is why we are discussing progress. In your framing,
00:13:42
Speaker
It is simply too early to know what's interesting to work on in order to try to make sure that AI development will be safe for humanity. So alignment would be cashed out as there's
00:13:57
Speaker
people who have AI systems, and the AI systems are assisting them in some ways, and they are supporting the AI systems with resources and development. And the AI systems do things that on average usually sort of help people; that's why they're funding them and making them. And then the AI systems get out of control, that is, the purposes that they are achieving and the effects they're having
00:14:21
Speaker
deviate more and more from what the people had in mind and what they intended them to do, and that out-of-control behavior becomes the problem. So I think we can agree that's the scenario you have in mind: a system you have gets out of control. Of course, that's
00:14:37
Speaker
what a car accident is in a literal sense: the car you're driving gets out of control and doesn't go where you intend; something happens and you lose control of a situation you usually control, like where your car is, where it's going, the obstacles it's going to get around, and the things it's not going to hit, right? And a lot of
00:14:58
Speaker
system problems can be seen as failures to control. So like a coup, a coup is a failure to control a government. One of the things that can go wrong with the government is it can have a coup.
00:15:09
Speaker
And then the military takes over, the current government loses control, and the citizens lose the control they had through the government. And a coup is a problem. And so when you design governments, you'd like to prevent coups. So a coup and a car accident are both illustrations of particular kinds of ways systems can get out of control. And now the question is, if you're worried about a future system getting out of control,
00:15:34
Speaker
what do you need to know about it to be at all useful in helping to prevent those future control problems? And my argument would be, there's certainly a time when you're just pretty much too early, when you really just don't have much idea what sort of concrete forms these problems will take, and therefore you really can't help much with that problem. And then later on, of course, there'll be a time when you can help a lot, and maybe you waited too long.
00:16:04
Speaker
There might be a time when you were too cautious in doing anything, when you did know plenty enough to do something and you just hadn't done it yet, right? So this is the issue: when is a useful time for thinking about a problem? One of the parameters I would suggest is just the concreteness of the versions of the systems you have available to work with, relative to the problems you anticipate happening, right?
00:16:27
Speaker
If you're thinking about car accidents, you should see actual things like cars and roads and see the actual kinds of smashes they can have and damage that will cause and the kind of paths that would lead to that. If you're looking at coups, of course you have to have an idea of like what form of government you're even in.
00:16:42
Speaker
What kind of military technologies you would have? What would it take to form a coup? That is, what would you have to take over? Is it the television stations? Is it the roads? You need some rough idea what it would be to cause a coup so that you could try to prevent coups. But this is where we can return to the question of how we measure progress in AI because
00:17:03
Speaker
If we find a good measure, a good metric or a good proxy for progress in AI, and we find out that we have been making very fast progress, well, then that might mean that the systems we're dealing with today, the machine learning systems we have today, are closer to what could become an artificial general intelligence in the future. And therefore that we today have more
00:17:32
Speaker
of a concrete grasp on the systems that could become dangerous in the future. Do you agree with that framing? Well, there are closer metrics and there are farther metrics. So anytime we're collecting metrics, we should arrange them on a scale of sort of how close they are to the issue in mind versus how indirect they are. The indirect metrics are useful. They can give earlier warnings, but they are, you know,
00:17:57
Speaker
usually a warning that a more direct metric will later show itself, and then that will be a clearer measure of what you're interested in. So I might say, well, the clearer metric is the actual lack of control. That is, the clearer problem is you have an AI system doing something, and the things it is doing are things that, if it did them badly or out of control, that would be a problem.
00:18:22
Speaker
Right. If the thing you're having a system do isn't the sort of thing that really could cause problems if done badly, then you're not as close to the problem itself as you would be if it's, sort of, like a self-driving car in an
00:18:37
Speaker
auto-accident world, where a self-driving car could cause an auto accident, and therefore you're more worried about it than about a self-toasting toaster, right? I see. So you're saying that an indication that there could be a problem in the future would be to see examples of misaligned AI systems actually happening. So it's not enough for us to- Well, not just misaligned, but misaligned
00:19:05
Speaker
on applications that matter a lot, applications where there's a lot at stake and there's a lot that could go wrong. All right, but haven't we seen small-scale examples of misaligned AI systems? Here I'm thinking if you give an AI in a game some goal to achieve, then it might achieve it in a way that you did not intend. For example,
AI System Behaviors and Resource Allocation
00:19:30
Speaker
If you want a digital agent in a game to run very far, then this agent might cheat by running in circles because that's a fast way to kind of game the rules you've set up. So we have small scale examples of misaligned AI systems. Now we might be able to extrapolate and think about how these misaligned AI systems could work in much larger settings.
00:19:58
Speaker
So let's take the analogy of computer security, which I think is an unusually close analogy compared to other analogies we could make. We have a great many computer systems in the world and a great many of them have vulnerable computer security.
00:20:13
Speaker
But the degree of effort we put into security in any one system is a reflection of our concerns about that application and what could go wrong there. So we put a lot more effort into computer security in the military and in banking, and in say medical instruments, because there's a lot more that can go wrong there. We put a lot less computer security into say social media,
00:20:40
Speaker
or, you know, video games, as you're describing, exactly because we realize that if something goes wrong, there's not that much to lose there. So because of that, I think it would be unfair to take an example of things going wrong in video games and therefore claiming we've got a big banking problem or a big military problem.
00:20:59
Speaker
because we have been paying attention to those differences in our security. Similarly, you might say, you're walking around and you don't have very good security on the backpack you're carrying.
00:21:12
Speaker
And does that mean, like, somebody's going to break into your house and murder you? And you might go, well, I have a certain amount of security on my backpack because of the kinds of environments it's in, and this tends to be good enough. I don't have a lock on my backpack. I could; they sell backpack locks. I just never bothered to put a lock on my backpack because it doesn't look like that's a problem, right? I do put a lock on my door. My lock on my door isn't, well, certainly isn't enough to prevent a military-grade professional from getting in,
00:21:40
Speaker
but I don't expect military-grade professionals to try to break into my house and murder me. That's why I have the sort of lock I have on my door, right? So from this perspective, this is why we'd want to be seeing, like, problems
00:21:54
Speaker
close to the actual problem. Like, if people were breaking into houses near me and killing people, that would call attention to the fact that I might need a better lock on my door, right? You would want to see misaligned AI systems in applications we really care about, like finance or policing and so on. Maybe one problem with thinking this way is that if we need that kind of evidence before beginning to invest in AI safety research,
00:22:23
Speaker
Don't we risk being too complacent? Maybe by the time that kind of evidence surfaces, it's already very late in the kind of progression towards very dangerous systems. Well, so think about the house lock thing, right? If nobody had invented locks yet, then I might want somebody working on the very idea of locks so that if I ever wanted a lock on my door, I could put a lock on my door. Once locks are available, now it's a matter of which locks do I put on, right?
00:22:54
Speaker
So I think that's more the analogy people might make, is that there's a whole new kind of technology for dealing with problems that you might need to invent, as opposed to how you're going to particularly deal with a particular problem.
00:23:11
Speaker
But the problem with inventing whole new technologies for dealing with problems is that they're for dealing with whole new systems you haven't seen yet. So we could say, you know, how do we invent brakes on cars before you've even seen cars? How do we know brakes are the right thing to do, as opposed to, like, big bubbles sticking out
00:23:27
Speaker
the front? Inside cars we have airbags, and on the outside we have brakes. How do we know what the right technology is until we get close enough to actually having such systems? Could we work on airbags and brakes just in general, before we have any idea of what kind of systems we're dealing with? Of course, basically at the early Industrial Revolution, you could just have a list of all the different ways machines are controlled.
00:23:48
Speaker
There's just a list: you can have an off switch, you can have a regulator, you can have a shield. You could list the kinds of things you can do on machines to protect against machines breaking. But could you do much more than that? Is there an underlying assumption or set of assumptions here about how humans generally solve problems? So are you thinking that, for example, humans are better at adapting to a situation that has already arisen
00:24:17
Speaker
than we are at preparing for a situation we haven't seen before, something like that.
00:24:25
Speaker
As we discussed in another podcast, there's this general question of whether people plan too much or too little. And that seems relevant here. I would say, I think we see a rough bias toward too much planning, at least in a lot of socially shared contexts where people get together. So for example, consider the inside view versus outside view discussion. The observation is that in organizations, at least often,
00:24:55
Speaker
people will take an inside view with respect to something. A famous paper was about a curriculum design. Using what they think they know, the inside view, they make some plans about how to do something, and then that just won't work very well compared to taking an outside view, looking at a track record of previous cases, and taking a more adaptive approach of, you know,
00:25:19
Speaker
trying things and changing them and trying things and changing them. So trial and error. So I think that's not just true in the inside view, outside view literature. It's also true in, say, the billion dollar projects in the world. If you look at all projects that cost more than a billion dollars in the world, they typically go very badly. They are over budget. They are over time. They don't work very well.
00:25:41
Speaker
And basically, when people think they can plan a billion-dollar project, they tend to be wrong. That doesn't work. And this is also a more common observation about many kinds of government programs; people complain that they have too much planning. And even government programs to produce innovation. I learned this from Infinite in All Directions by Freeman Dyson, the book from long ago where he complained that most attempts to cause innovation that were based on a lot of planning just didn't work very well.
00:26:10
Speaker
Innovation plans that worked well were trial and error. Try something, change it, try it again. Whereas when people tried to design, say, a huge fusion reactor from scratch and make a project, it took decades and didn't work.
00:26:24
Speaker
But of course, there's also the question of how many resources we are allocating to trying to make AI systems safe. Currently, the world is allocating a tiny, tiny amount of resources to this problem. And do you think that, on the margin, we should allocate less, because trying to plan now is simply too early? But then I'm just thinking, if we're allocating this
00:26:49
Speaker
this tiny amount of resources, this seems to be a good way to hedge our bets, even if it's, in your view, very unlikely that dangerous AI systems will arrive soon. So let's talk about the kinds of resources you might allocate and how you might allocate them.
00:27:06
Speaker
So one issue in the allocation is, do you spend the money now or later? So you could allocate resources and put them in a fund that then grew over time, and then you'd have guaranteed a big pile of money to spend later, when the time was right, when you might not be able to convince people to spend the money then, and there'd be all this money. So allocating resources doesn't mean spending it now.
00:27:29
Speaker
It might mean just committing to have the resources ready when the time is right. And then if we think about allocating resources, you know, one thing we could do is just cause people to spend time writing white papers or something. But that's not the only resource we allocate. We also allocate the resource of sort of concern and fear.
00:27:49
Speaker
And regulation. That is, these sorts of concerns will often retard and prevent various kinds of developments, make them not only financially less attractive, but burdened with more regulation that's in the way. We will distract people away, because they'd be ashamed to work on that sort of thing. We can sort of sap away the enthusiasm that might be there behind things.
00:28:10
Speaker
These are other sorts of resources. So the best way to reassure me is to say, well, we're just going to spend a small amount of money and we're not going to get in the way of those other things. Because that might be the bigger cost: that by talking so much about AI safety and making so many people think it's just the biggest issue ever, you sort of displace other images of the future, other concerns about the future. You take people who could otherwise work on other interesting things,
00:28:42
Speaker
sap away enthusiasm, you create more support for regulation, you discourage organizations from pursuing developments. Those are some of the other major resources we allocate. So we should think of it as: maybe among the group of people who are likely to work on big-scale problems for the future,
00:29:05
Speaker
the problem of AI safety is overweighted, so we're allocating too many resources to it within that group. That's a plausible concern. All right. Let's shift gears a little bit here and talk about, in general, what you believe about AI progress.
Concept of AGI and AI Development Trends
00:29:23
Speaker
So progress in AI capability.
00:29:28
Speaker
Do you think that the notion of artificial general intelligence is useful? And if you find it useful, do you have any forecasts about when such a system or collection of systems might arrive? First, let's agree on what history we see. So computers started being substantially active in the 1950s.
00:29:56
Speaker
There were precursors, but they were pretty small compared to that; computers really started getting going in the 1950s. And so it's been 70 years since then. We've seen 70 years of computer history. In that 70 years, we have seen relatively steady hardware progress.
00:30:18
Speaker
We've seen changes in which hardware technologies dominate, like parallel hardware is more important now than it used to be, and a relatively steady rate at which the hardware has improved. And we understand that improved hardware is a causal thing that can unleash improved software.
00:30:39
Speaker
And in fact, when we look at particular algorithms, we see that particular algorithms, say, inverting a matrix or things like that, they have improved over time. And the factor of improvement in terms of efficiency is actually similar to the hardware one.
00:30:55
Speaker
Which is surprising, because there's not this big industry you have to create to do the software improvements. And so a plausible explanation, one that I find plausible and other people do too, is that there are many kinds of software approaches that you can't really try until you have cheap enough hardware. And so the software improvement over time is that people,
00:31:16
Speaker
with cheaper hardware, have been able to try a wider range of approaches. So every algorithm has a fixed cost constant plus some polynomial cost as a function of scaling, and we're able to explore larger fixed-cost constants as we have larger, cheaper computers. And so that has produced improvement in algorithms. But we also see relatively steady improvement in algorithms, actually, over the 70 years, in a wide range of areas.
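To make that cost picture explicit, here is one minimal way to write down the model being gestured at (the notation is mine, not Hanson's):

$$ \text{cost}(n) \;\approx\; c_0 + a\,n^k $$

where $c_0$ is a fixed overhead and $a\,n^k$ is the polynomial scaling term. As hardware gets cheaper, algorithms with larger fixed overheads $c_0$ become affordable to even try, so part of the measured software progress is hardware progress unlocking a wider search over candidate algorithms.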
00:31:47
Speaker
I think we can also, you know, look at computer innovations over the last 70 years in terms of their specificity or generality. And we could also categorize them in terms of their lumpiness or size. So, you know, there are areas, especially in hardware, where there are just lots of little tiny improvements slowly over time that make it better and better. Thousands, really, of little changes that add up to making chips cheaper, say, and even thousands of little changes that add up to making certain kinds of software more effective.
00:32:18
Speaker
But once in a while, there are bigger lumps. So it's not an exact smooth exponential curve; it bounces around a bit. But the bouncing is mild compared to the overall trend, I think we'd agree. If, say, hardware or even algorithms improve by a factor of 2 every two years, well, it's not exactly a factor of 2 every two years. Sometimes it's a factor of 3, sometimes it's a factor of 1.5. But it's roughly in that range.
00:32:45
Speaker
Those are roughly the fluctuations in the rates of progress over time. And we also see that most improvements are relatively specific to applications and areas. That is, there are some relatively general improvements, like inverting a matrix or sorting a list, right? But most progress has just not been in those general things. Most progress has been in relatively
00:33:14
Speaker
you know, narrower things that have a more limited range of application. So that's the history I would paint so far. And that history is roughly consistent with the rest of technological innovation. That's not just the history of computer science; that's what railroads look like and, you know, nuclear reactors and everything else. It mostly has this character: there are hardware and software advances, and they are somewhat similar, and the rate is
00:33:45
Speaker
somewhat lumpy, but not that lumpy, and most gains are specific to particular industries, and there are relatively few gains that just, you know, work across all possible industries. And that's the history so far. And that's the history even though many people were repeatedly forecasting a deviation from that trend. That's this history of the every-30-years bursts of concern:
00:34:09
Speaker
For a while, many people have thought, yeah, we've seen a steady trend so far, but we're going to see a breakout soon. We're going to see a way in which the steady trend is going to deviate soon, and that's going to be a really big change. We could also talk even about if such a big deviation were to occur, what sort of signatures might we expect to see?
00:34:27
Speaker
But if we take just what you've been talking about, the trends, the historical trends over the last 70 years, what happens if we extrapolate those trends? So now we're not talking about a prediction of a deviation in which suddenly hardware and software progress is much faster, just the extrapolation of those trends. Where does that get us?
00:34:51
Speaker
So the straightforward projection of past trends, I think, would be to expect similar rates of progress continuing on for a long time. And then we need to project not so much rates of progress as distances. So in this case, the question is, well, at the moment most income goes to pay human workers,
00:35:16
Speaker
and mostly to pay their minds. That's what most income in the world economy is paid for. It's human minds doing stuff, right? Because human brains are pretty good. And when we look over time, we see that as automation and machines and computers have improved, we've seen a slow change in that mix, but a pretty slow change.
00:35:37
Speaker
So we might say even today most income is going for human labor, and over the last, say, 50 or 70 years, the fraction of income paying for computers and computer software has increased, definitely. But it's still a pretty small fraction.
00:35:53
Speaker
And if you project those trends out, it's going to be a pretty long time until it becomes a large fraction of the economy, say, compared to human labor. So one milestone we might be looking to is at what point will the income going to computers and software be comparable or even larger than the income going to human workers? That would be a plausible marking point of an important transition. Certainly that's an important transition for when humans should get insurance about losing their wages.
00:36:22
Speaker
And then other sorts of developments might be of concern at that point too, right? So we might be trying to ask, well, when would we reach that point if past trends were to continue? And so for that, I gotta say, what you wanna basically ask people is, okay, if you look at how far machines would have to get in order to be able to do most everything humans do,
00:36:47
Speaker
If you look at where they are now, and say how far they've come in 20 years, what's the ratio of how far they still have to go to how far they've come in those 20 years? And so if, say, that ratio is 5, well, you'd want to multiply 20 by 5 to get a century as your prediction of how long it would take to get to that point. So that would be one way of trying to talk about how far we have to go if we continue at previous rates of progress.
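As a worked version of that back-of-the-envelope estimate (the factor of five is purely illustrative here, not a measurement):

$$ T_{\text{remaining}} \;\approx\; T_{\text{elapsed}} \times \frac{\text{distance still to go}}{\text{distance covered}} \;=\; 20 \text{ years} \times 5 \;=\; 100 \text{ years}. $$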
00:37:12
Speaker
So about one century from now, we would expect to have machines receiving half of the wages in the economy? If it were a factor of five. Now, I didn't say it was a factor of five, but that would be a way to try to do that estimate. So in the past, when I tried to do that estimate, I got predictions of several centuries, basically; we were just a long way off and we have a long way to go. But still, several centuries is still short on a cosmological timescale, certainly, and even on the timescale of
00:37:39
Speaker
recorded history; it is certainly something to be looking forward to, but it's not around the
Future Scenarios and Impact of AI
00:37:44
Speaker
corner. Is that your best guess that it will take several centuries to get to a point where we have something like artificial general intelligence or at least something that
00:37:53
Speaker
receives half of the wages in the economy. So this story isn't about AGI in the sense that we haven't even invoked the possibility of AGI yet. We've just been projecting previous trends, which are mostly about relatively specialized systems. And you're asking, when will those specialized systems take away, you know, take most wages? And there'd be some degree of generality, but that's ambiguous here so far, right? This prediction has to be weighed against other things that can happen between now and then, right?
00:38:22
Speaker
So this is the prediction if nothing else happens. And then we ask what else could happen, right? So one other thing that could happen is the topic of my book, The Age of Em: we could achieve brain emulations. And if we achieve brain emulations well before we had computers taking more than half of wages, then we'd have this transition where brain emulations would take over the jobs that humans were doing.
00:38:45
Speaker
And then brain emulation would continue the task of automation, of improving automation. They would be writing the software and the machine learning systems, et cetera, instead of the humans, but they'd still be going down that path. But the rate would change enormously. I estimate the brain emulation economy, instead of doubling every 15 years, as ours does, might double every month or even faster. So it would be a radical change, but still they would be going down that same path just faster.
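To put rough numbers on that contrast, treating those doubling times purely as illustrative figures:

$$ 2^{1/15} \approx 1.047 \;\text{per year (about 5\% annual growth)} \quad \text{versus} \quad 2^{12} = 4096\times \;\text{per year}. $$

Since 15 years is 180 months, that scenario shortens the economy's doubling time by a factor of roughly 180.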
00:39:14
Speaker
Another scenario, of course, could be, you know, civilization collapse. The power that's producing computer innovation would be undercut and we would stop going down that path. That's another thing we could worry about.
00:39:29
Speaker
But these trends have been quite persistent. Depending on how far you go back, these trends in hardware and software progress have survived at least one world war and the Cold War and many highly disruptive events. But of course, we can imagine complete collapse that would end these trends. And then a third sort of scenario is some sort of a deviation from the computer trends we've talked about.
00:39:59
Speaker
a deviation in rate or generality, or some combination of them. And this is the kind of scenario I've heard most about over the last decades; that's really the kind of thing people have repeatedly been concerned about every 30 years for a while: some innovation which is much more lumpy than the lumps we've seen so far, some way in which it would be more general or would open a floodgate of new innovation that accumulates a lot faster,
00:40:29
Speaker
to produce much more rapid progress. So, you know, we could ask about how likely does that look? We could ask about, well, what signs would we expect to see? We could ask, have we seen such signs lately? Those would be the kinds of questions we could ask about that scenario, because then we would have an acceleration relative to what we see so far.
00:40:54
Speaker
And what would those signs consist of? What would we have to see in order to believe that there's something going on with the rate of change in AI progress? So we can go through a number of different plausible signs we might see. So one is that if investors expected this change, that is, they got signs that this change was coming,
00:41:16
Speaker
They would change the relative prices of some kinds of investments, right? If say the question is, well, there's a new kind of computer technology, which has a new potential for economic impact of, you know, getting economic value in the economy.
00:41:34
Speaker
Where do they think that investment will come from? What will it be embodied in? Who would be profiting from those investments? So you might, for example, think, well, this is computer mediated. So, hey, the price of computers would go up. All of a sudden, there'd be this huge possibility for using computers more than you had in the past. And whatever kind of computer was most suitable to be used in this new technology, well, that price would go way up.
00:42:00
Speaker
And you might think, well, whatever software companies are likely to own the new software that enables this new technology, their price would go way up, at least if they could own some fraction of it. And, you know, whatever workers would be especially valuable to, and employed by, those companies, their wages would be expected to go up, and people would be expected to be trying to get trained in their techniques. Whatever nations and cities are home to this, they would expect to have more tax revenue, they would have more growth,
00:42:30
Speaker
And to what extent do you think we're seeing these trends? For everything you mentioned here, we could go through that list and say, well, we are seeing lots of investments in artificial intelligence. Machine learning researchers are very highly paid. There's a lot of economic activity in Silicon Valley, which is kind of the heart of AI innovation. And Google is very valuable, and so on. So,
00:42:59
Speaker
to what extent are these signs already kind of fulfilled? So even a small chance of such a thing could have a substantial impact on the price of that technology, right? A 5% chance, say, in the next 20 years that Google would be the pivotal company could substantially affect the price of Google. So I think, again, since we've had this history of these bursts of concern and attention,
00:43:25
Speaker
that's sort of the relevant pattern to be comparing against. So I think you might want to ask, well, is what we see now different from what we saw then? Yeah, that's a good framing. Is the level of investment and concern higher than it was in past AI cycles? Right. So I think that
00:43:50
Speaker
we did see, say, for the 1930s boom, that prior to it there was this burst, of course, of investment based on technology more generally, up until the '29 crash. We saw a burst of stock market investment in the '60s centered around technology companies.
00:44:12
Speaker
There was the big dot-com boom. So the dot-com boom in the late 1990s was explicitly said to be based on an expectation that new technologies were appearing that would enable a lot more economic growth. AI wasn't the center of that, but it was within the mix. And, for example, you know, the famous IBM
00:44:39
Speaker
chess win was in 1997, you know, two years before the peak of the dot-com boom, so well within that. And we can look at how big a change the dot-com boom made to the relative value of, say, tech versus other sectors. And it was a substantial change. During the dot-com boom, it looked like there was a substantially large expectation that there would be a change in the economy based on technology, centered in technology,
00:45:08
Speaker
that would make a big difference to productivity and the value of different kinds of firms. And of course, they changed their mind in '99 and the crash. And I would say, if we look at prices of tech firms in the last 10 years, it has not been as dramatic a change as it was during the dot-com boom. We have not seen that level of expectation
00:45:34
Speaker
of change. It's not no change, but it's not that big. That's surprising to me. In the previous AI cycles, we saw higher levels of investment, for example, into AI technology, and higher
00:45:52
Speaker
expectations for the valuation of technology firms. We saw higher activity in the past than we are seeing now. So let's be clear about our nested circles of focus of attention here, and let's think about at which scale the circle should matter. So at the highest scale, there's just the economy as a whole. How big is the economy? How fast is it growing? And how much investment does it have?
00:46:18
Speaker
Then there would be, say, technology more narrowly, which could be, say, a quarter of the economy. How much do people expect gains to come from technology? And then within technology, we could talk about computers, hardware and software. And then within computers, we could talk about AI more specifically. And within AI, we could talk about machine learning.
00:46:45
Speaker
And maybe even with machine learning, we could talk about deep learning. And now we've got these nested circles. And now we might ask, where in this set of nested circles do we expect to see the signals? And in what order, right? So when you get down to deep learning, you're looking at a pretty tiny fraction of the world economy.
00:47:09
Speaker
But if you look at tech, you're looking at a much bigger fraction, and even computers are a substantial fraction. So, you know, where do you expect to see the change? So I think if you're expecting to see an economy-wide impact, that should be reflected in the economy-wide parameters, or even tech-wide parameters, or even computer-wide parameters. But lately, the AI boom hasn't really changed the computer industry valuations that much,
00:47:39
Speaker
or even tech valuations so much. It's been part of a tech movement, but it's less clear that it's AI that's the main driver of the price of Google or Apple or something. That doesn't seem plausible.
00:47:51
Speaker
So maybe an assumption we're working with here is that the market is good at pricing. It's good at predicting which technologies will be valuable in the future. I agree that this kind of betting against the market definitely puts pressure on people who are predicting that we will see rapid AI progress in the next, say, 30 years or so. But we have had instances where the market has
00:48:22
Speaker
bet on the wrong things or not foreseen some change. So here's another metric, right? Actually, I'll give two more metrics here we could talk about.
AI Investments and Job Automation Trends
00:48:31
Speaker
One metric would be, because I actually did this statistical analysis: I looked at the United States economy from 1999 to 2019, at 900 different kinds of jobs. And for each kind of job in each year, how automated was that job?
00:48:48
Speaker
And so I could look at trends in automation in the US economy over 20 years, including a period when many people said that we were about to have an enormous amount of change in automation due to AI. And so in this data set, I could look at: when jobs got more or less automated, did that change the number of workers doing that job or the wages for that job? And I could ask, what predicts which jobs are how automated? That is, what are the usual determinants of whether a job is automated, which kinds of job features?
00:49:16
Speaker
And for example, we could ask in particular whether the things that predicted a job being automated had changed in those 20 years. That is, if there was a new kind of automation showing up that was having an important impact on the economy, we might expect not just maybe an acceleration of the rate of automation; we might expect a change in which kinds of jobs are being automated, and maybe even a change in whether the automation is causing a change in the number of workers or the wages.
00:49:42
Speaker
And the answer in this analysis was: there was no change over these 20 years in the things that predicted which jobs are automated. There was no increase in the rate of automation change. There was no correlation between jobs becoming more or less automated and changes in their wages or their numbers of workers. So basically, no sign of any automation revolution from 1999 to 2019 in the United States economy overall.
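To make the shape of that analysis concrete, here is a minimal sketch of how one might set up such a check; the data file, column names, and job features are illustrative assumptions, not Hanson's actual dataset or code.

```python
# Hypothetical sketch of a job-automation panel analysis like the one described above.
# The CSV file, column names, and job features are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# One row per (job, year): an automation score plus job features, wages, employment.
df = pd.read_csv("job_automation_panel.csv")  # hypothetical data file

# 1) Did the predictors of automation change between the two decades?
df["era"] = (df["year"] >= 2010).map({True: "2010s", False: "2000s"})
model = smf.ols(
    "automation ~ (routine_score + physical_score + social_score) * era",
    data=df,
).fit()
print(model.summary())  # near-zero interaction terms would mean the predictors did not change

# 2) Did changes in automation track changes in wages or employment?
df = df.sort_values(["job", "year"])
chg = df.groupby("job")[["automation", "wage", "employment"]].agg(["first", "last"])
chg.columns = ["_".join(col) for col in chg.columns]
for var in ["automation", "wage", "employment"]:
    chg[f"d_{var}"] = chg[f"{var}_last"] - chg[f"{var}_first"]
print(chg[["d_automation", "d_wage", "d_employment"]].corr())
```

A result like the one described would show up here as flat interaction terms in the first step and near-zero correlations in the second.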
00:50:08
Speaker
Not to say it couldn't happen in the future, but it hadn't happened yet. The last metric to use, I think, which would get the closest to the phenomenon, is that we have had people excited in the last few years about things like GPT-3 and DALL-E, generators of language and generators of images. And the most direct question to ask about those is, are they getting commercial traction?
00:50:36
Speaker
Are there customers who want to pay to use them, who get value from them? That's the most straightforward question to ask about that. I mean, again, I don't think that's the whole story
00:50:47
Speaker
in AI, but if you were saying, is there a new technology appearing right now that's about to cause a big change, that plausibly would be the thing somebody would point to. And then the most straightforward indication that it was about to cause a big change is that it was getting some traction with customers. People were paying for it; they were doing stuff with it, right? And so, for example, I've made a bet with my colleague Alex Tabarrok about the amount of revenue from GPT-3 sorts of
00:51:14
Speaker
products that will appear in the next 10 years, I guess. I bet it would be less than a billion; he bet it would be more than a billion. And I think there's a market on that on Metaculus, and it says about 50-50. So people don't think I'm obviously wrong that there'll be less than a billion dollars of revenue from the latest new thing. And a billion dollars is tiny in the world economy, right? You've got to say, if you think maybe there'll be a billion dollars, maybe not, you're not talking about a revolution in the world economy.
00:51:42
Speaker
That's a great way to end it there, Robin. Thank you for talking with me. Thank you for talking.