Introduction to the Podcast with Jason Crawford
00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Jason Crawford. Jason is the founder of Roots of Progress, which is an organization dedicated to studying a new philosophy of progress for the 21st century. Jason, welcome to the podcast. Thanks. Great to be here. Thanks for having me.
What is the History of Progress?
00:00:19
Speaker
Great. Could you quickly summarize the history of progress on Earth for us?
00:00:23
Speaker
The most pithy summary maybe comes from, I think it's Luke Muehlhauser. I hope I'm pronouncing his name correctly.
Human History: From Tribal to Industrial Eras
00:00:31
Speaker
He said, basically, everything was terrible for a very long time and then the Industrial Revolution happened.
00:00:36
Speaker
Slightly less pithy with a little more detail. I think you can divide human history high level into three eras. There was the tribal hunter-gatherer era, which lasted some tens of thousands of years or hundreds of thousands, depending on how you count.
00:00:53
Speaker
And then there was the agricultural era, which lasted another 10,000 years or so. And then for the last few hundred years, we've been in the industrial era. And the number one thing, I think if you want to summarize it all down to just kind of like one concept or pattern, I think the most important thing to understand about progress is that it compounds and therefore it accelerates.
00:01:15
Speaker
So, if you take various metrics of progress, like GDP per capita, or even just world GDP, or other similar metrics like population or energy usage, and you map them on a semi-log plot, where exponential curves become straight lines, it's not even a straight line. It bends upwards. So the curve of human progress is actually super-exponential. That is, it grows faster than any exponential.
00:01:41
Speaker
You could potentially think of it like exponential growth, except that the exponent itself is increasing over time as progress compounds on itself. Even the pace of progress gets faster, even on a relative or exponential basis.
Future Growth and the Computational Era
00:01:57
Speaker
Is this the mainstream view in economics regarding economic growth?
00:02:02
Speaker
Yeah, I would say if you asked growth economists, Paul Romer or Chad Jones, for instance, they would be very familiar with this pattern, and in part I'm getting it from them. So you mentioned that growth is accelerating, that growth compounds upon itself. What about these modes of growth? You mentioned the agricultural era and the industrial era.
00:02:28
Speaker
Does it make sense to talk of another mode of growth after that, perhaps the computational era? I think it makes sense to wonder about or predict or think about such a thing in the future. I don't think we're there yet, or perhaps we're right now at the transition point or in the transition phase between those. But I don't think we've seen enough of it yet to be able to say that we're in it or that we've made the transition or to know what it looks like.
00:02:56
Speaker
What do we know about economic growth during the hunter-gatherer era or the agricultural era versus now, for example?
Understanding Economic Growth in Ancient Eras
00:03:04
Speaker
In general, with history and with economic history, the farther back you go, the sketchier the data gets, and that happens very rapidly. If you try to go back before the 20th century, we don't even have very good economic records.
00:03:21
Speaker
If you go back more than a few centuries, we don't even have good population records. And if you go back more than about 5,000 years, we don't even have written records of any kind. Now we're just into archaeology. And the farther you go back, the more the artifacts themselves have decayed.
00:03:36
Speaker
Any of these things, anytime somebody's talking about data from the past, you have to take it with a grain of salt and understand there's a lot of inference and interpretation here. However, that said, there are papers that attempt to present very rough estimates of population and even GDP going back
00:03:56
Speaker
in some cases a million or two million years, back to our hominin ancestors, who were not even Homo sapiens. And again, you see the same high-level pattern. One thing to note is that in a sort of Malthusian regime, where per capita living standards never really increase, population growth and economic growth are more or less the same thing.
00:04:21
Speaker
The high-level pattern of population growth is roughly that population growth rates are proportional to population itself.
00:04:29
Speaker
So again, the exponent on the exponential growth increases with population itself. When the absolute amount of growth is proportional to the size, that is constant exponential growth. But if the rate of growth, the percentage increase, is proportional to size, that actually fits a hyperbolic curve that goes infinite in finite time. So obviously, that can't continue. And in fact, around 1960, that relationship between population size and population growth rate broke.
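(A minimal formalization of that claim, for reference; the symbols below are illustrative, not something used in the conversation. If the absolute increase is proportional to the population, growth is exponential; if the growth rate itself is proportional to the population, the curve blows up in finite time:

\[
\frac{dP}{dt} = kP \;\Rightarrow\; P(t) = P_0 e^{kt}
\qquad\text{vs.}\qquad
\frac{dP}{dt} = cP^2 \;\Rightarrow\; P(t) = \frac{P_0}{1 - cP_0 t},
\]

and the second solution goes to infinity as \(t\) approaches \(1/(cP_0)\): a hyperbola, not an exponential.)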
00:04:58
Speaker
Population growth rates leveled off, and in fact they've been decreasing since about the late 1960s. So in a sense, economic growth is no longer dependent on population size?
00:05:09
Speaker
I don't think we can say that necessarily. There's a question of where economic growth comes from. I mentioned Chad Jones earlier. A lot of his work essentially contends that technological growth and advancement is driven by the amount of resources that we put into it.
Challenges of Sustaining Exponential Growth
00:05:31
Speaker
And in fact, if we want to keep exponential growth going in technological advancement, we have to put in exponentially more resources. In other words, we have to also have exponential growth in the amount of resources we're devoting to R&D.
00:05:48
Speaker
And that means not only money for equipment and everything, but also people. In the 20th century, we were able to sustain a pretty rapid exponential growth in R&D, in part because we were educating more people and we were bringing a greater portion of the population into
00:06:04
Speaker
R&D. But that obviously has limits within population itself. At a certain point, if your population doesn't grow and you're trying to grow your research workforce, eventually everybody in the world, 100% of the population is a PhD-educated scientist researching and trying to push the frontier forward.
00:06:23
Speaker
And then at that point, if your population isn't growing, if indeed your population is leveling off or shrinking, which is what the best UN estimates of world population for the 21st century have it doing, then you might be in trouble for consistent growth. Unless, of course, you have some way of massively leveraging human R&D input, say, perhaps through AI.
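(For reference, a rough sketch of how this argument is usually formalized in semi-endogenous growth models in the spirit of Jones's work; the particular functional form here is a standard textbook version, not something specified in the conversation:

\[
\dot{A} = \delta\, L_A^{\lambda} A^{\phi}, \quad \phi < 1
\qquad\Rightarrow\qquad
g_A = \frac{\dot{A}}{A} = \delta\, L_A^{\lambda} A^{\phi-1},
\]

where \(A\) is the stock of technology and \(L_A\) is the research workforce. Because \(\phi - 1 < 0\), keeping \(g_A\) constant as \(A\) grows requires \(L_A\) to keep growing; if the research workforce levels off, the growth rate of technology falls.)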
Flywheel Metaphor for Progress
00:06:47
Speaker
Yeah, and we're going to talk about that in depth later. But first, let's get to your flywheel metaphor. It's the central metaphor you use to describe what you've found out about progress. What do you mean by flywheels in this context?
00:07:02
Speaker
A flywheel is a metaphor for anything that has a lot of inertia and feeds on itself. Like I said, progress compounds: the more of it we have made, the faster we can make it.
00:07:17
Speaker
I mean, it's a similar thing with a flywheel. The faster it's spinning, the easier it is to keep it spinning or get it going faster. A flywheel has a lot of inertia, so it's very difficult to get going in the beginning. If you push on a flywheel, you can get it going very fast, but only through many, many turns. In the beginning, it's very hard to turn. And then the faster it goes, the easier it is to turn. And eventually, it builds up. And once it gets going, it's unstoppable, or at least it's not stoppable immediately. It takes a long time for it to wind down.
00:07:44
Speaker
I use that metaphor for progress. It was very difficult to get progress going in the beginning. Why? Well...
00:07:52
Speaker
think about why a hunter-gatherer tribe, why most of them for thousands and tens of thousands of years, did not invent agriculture or the smelting of metal or any number of other ancient technologies. Well, they didn't have a lot of surplus. By the same token, you might ask, why didn't folks in the early Middle Ages invent the plow
00:08:19
Speaker
or the spinning wheel for the longest time? Even then, once we had those things, why didn't they mechanize like we did later in the Industrial Revolution? Well, at any given point, you can say, first off, there wasn't a lot of surplus wealth to put into R&D, whether that means funding an individual to do it full-time or even somebody who has spare time to tinker and invent.
00:08:42
Speaker
There wasn't a lot of base technology to build stuff with. Maybe there wasn't a lot of great metalworking technology, so it's hard to make machines out of metal, which means it's hard to make high precision machines, which means it's hard to automate a lot of stuff. Even if you made something, it would be hard for there to be a market for it.
00:09:01
Speaker
So suppose you invented some great agricultural machine. Suppose you were a millwright; millwrights were kind of the most skilled engineers of the medieval period. Suppose you were really a genius millwright and you invented some amazing piece of agricultural machinery. Well, who could you sell it to? There wasn't a way to create a market much beyond your town or village,
00:09:22
Speaker
because we didn't have the transportation technology and the communication technology. We didn't have newspapers and railroads and so forth to advertise your product and then take orders and then ship product all over the place. You didn't have a national market or even a regional market. You had a very local market. Small market sizes because of low connectivity of the human race
00:09:43
Speaker
meant that there wasn't a lot of opportunity to make something that had a high investment cost, a high invention cost, and then leverage that through a large market, because markets were small. In the era before printing, or for that matter before written language, if you made some great discovery, how could you even tell people about it? Word didn't travel well until we got the Royal Society and, what was it called, the Republic of Letters,
00:10:12
Speaker
where scientists and other natural philosophers were literally writing letters to each other. Then the Philosophical Transactions of the Royal Society could print those in a proto-journal, because we had the printing press and so forth. All of these things were technological advances, and social advances too. By the way, if you did have an invention and you wanted to try to sell it and propagate it,
00:10:42
Speaker
when could you create a corporation? If you wanted to raise investment from a lot of investors, the corporate form was not common until the 19th century. It has a complicated history. Certainly, there were significant corporations by the 17th century, but it was difficult to get a corporation for a long time. For a lot of that history, it required an act of Parliament or something.
00:11:10
Speaker
It wasn't until the 19th century that you could form a corporation by right, just by filing some papers, like we're kind of used to doing today. Anyway, I've gotten deep into this. But basically, every time we come up with some fundamental advance, like writing or printing or the internet or better transportation or corporate law or machine tools or metalworking or the scientific method,
00:11:39
Speaker
or anything like that, these things make all of progress faster. Whenever we come up with some fundamental advance like that, one that makes everything faster, or that specifically makes R&D faster, it increments the exponent at which we are making exponential progress. That is the flywheel, in the sense that there's inertia: it's difficult to get the flywheel going.
00:12:02
Speaker
What about the flywheel keeping itself going after it's been brought to a certain speed? Is that also part of the metaphor? I mean, yeah, in the sense that society today has a certain momentum.
00:12:16
Speaker
And even today, even in the late 20th and into the 21st century, in a world where many people are very skeptical of the very idea of progress and very fearful and doubtful of it, a lot of progress just goes forward because there's so much social momentum behind it. Because there is a university system and there are research labs and there are venture capitalists and there's a market and there are engineers and inventors and so forth.
00:12:46
Speaker
Because there are all of those things, all those institutions, formal and informal, progress continues.
Is There a Slowdown in Progress?
00:12:51
Speaker
I read that you've recently been convinced that there is, in fact, a great stagnation in growth. Perhaps you could explain what that is and then explain how that might fit into the flywheel metaphor. Is the flywheel, so to speak, slowing down?
00:13:06
Speaker
Certainly. So again, a flywheel has a lot of momentum, but it's not unstoppable. If you leave it alone, it might slow down a little bit just from friction. And of course, if you actively begin to resist it, it will start to slow down even if imperceptibly at first.
00:13:25
Speaker
Yes. In terms of technological stagnation, this is something that people have been talking about, at least since the 2010s. Peter Thiel was one of the earliest to start talking about it. Tyler Cowen wrote a book, The Great Stagnation, which I think came out in 2011, so pretty early. In 2015 or so, Robert Gordon came out with his book, The Rise and Fall of American Growth, where he argued that growth in the US had slowed down over the last 50 to 75 years.
00:13:53
Speaker
And so I started out my investigation of progress, not even really knowing much about this hypothesis. When I first heard about it, I was somewhat skeptical. And eventually, just through studying the history of progress, I came around to agreeing that there has been a slowdown in at least economic growth and technological progress over the last 50 years or so.
00:14:18
Speaker
This is not to say anything necessarily about scientific progress. I'm really talking about progress as manifested in the economy, so technological and industrial progress. This is also not a prediction for the future. I'm not saying that stagnation is in any way inevitable, nor is it a diagnosis of the cause. It's just looking at certain symptoms that we can see historically, in the rearview mirror, over the last 50 years.
00:14:40
Speaker
So you can look at metrics like GDP growth or TFP growth, TFP being total factor productivity, which is a metric that economists generally use to estimate technological progress as manifested in the economy.
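(For reference, the standard way TFP is backed out of the data is as the Solow residual from a production function; this is textbook growth accounting, not something spelled out in the conversation:

\[
Y = A\, K^{\alpha} L^{1-\alpha}
\qquad\Rightarrow\qquad
\frac{\dot{A}}{A} = \frac{\dot{Y}}{Y} - \alpha\frac{\dot{K}}{K} - (1-\alpha)\frac{\dot{L}}{L},
\]

so TFP growth is whatever output growth is left over after accounting for growth in capital and labor inputs.)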
00:14:58
Speaker
You can look at those numbers, and they certainly tell that story: they have slowed down. You can look at it more qualitatively as well. So, you know, one reason why people are often skeptical of this idea of a slowdown is because they look back to the last 50 years and they see
00:15:13
Speaker
a remarkable amount of progress. Computers and the internet have progressed extremely fast over the last 50, 75 years. Plus, it was just under 50 years ago that we got, for instance, recombinant DNA, and all of genetic engineering, from synthetic insulin all the way up to mRNA vaccines, has come out of that. So there have been one or two major fundamental technological revolutions in the last 50 years.
00:15:43
Speaker
But look back at the 50-year period from, say, 1870 to 1920, so roughly the 50-year period ending about 100 years ago. Well, that period also had a revolution in information and communications, because it saw the invention of the telephone and radio.
00:16:02
Speaker
That period also had a revolution in biology called the germ theory and the beginnings of its implementation in public health and very significant declines in mortality from infectious disease as we got better water sanitation and new vaccines and so forth.
00:16:16
Speaker
And then on top of that, it had a revolution in energy with the development of the electrical industry, generators and motors and electric lights and so forth. It had another revolution in energy with the invention and deployment of the internal combustion engine, which led to a revolution in transportation with the invention of the automobile and the airplane.
00:16:37
Speaker
Then it had a revolution in applied chemistry, which gave us synthetic materials like the first plastics, such as Bakelite, and the first synthetic fertilizer through the Haber-Bosch process. All the types of progress that we've had in the last 50 years were going on in that period. And then we also got progress in energy, manufacturing, transportation, and construction.
00:17:02
Speaker
And so the recent developments just don't stack up. You can argue that the internet is freaking amazing, and it totally is. And I don't want to dismiss or downplay it as some people do. You can argue that it's at least as big, maybe bigger than telephone and radio combined.
00:17:19
Speaker
But I think it's hard to argue that it's bigger than telephone, radio, electricity, internal combustion, oil, the automobile, the airplane, plastic and fertilizer. There's just too many things that stack up. The argument becomes untenable at a certain point.
00:17:35
Speaker
So yes, I think there was more progress in the 1870 to 1920 period than in the 1970 to 2020 period. And yeah, so I think you can see that. Now, to be clear, there was more progress 1970 to 2020 than at any 50-year period before the Industrial Revolution, right? Stagnation doesn't mean zero progress. It just means a relative slowdown to what came before.
00:17:56
Speaker
Yeah, it makes sense. You mentioned the importance of the Industrial Revolution when talking about the history of progress.
Primary Causes of Progress
00:18:04
Speaker
So here we're talking about one very large factor that has a lot of explanatory power when we're talking about historical progress. Does it make sense to look for the main cause of progress, whether that be population size or energy use or ideas or intelligence?
00:18:24
Speaker
Is that something you're looking into or that you're interested in? When I got interested in progress and decided to start a blog about it, I called it The Roots of Progress, and that was for a reason. I wanted to find and understand, ultimately, the deepest causes of progress. I knew that the way to get there was to start by understanding things at the object level and to build up to a bigger and deeper understanding, not to try to jump to grand theories right away. And that's why I've spent a lot of time
00:18:53
Speaker
way down in the weeds of the history of progress. Before you can even explain progress, you have to know what is there to be explained. And I think even that is not part of a basic education today, as it should be. That's a separate topic. So yes, I am interested. Does it make sense to talk about the one cause?
00:19:12
Speaker
I think if you want a very full, rich, thorough understanding of progress, you want to understand all the different causes that operate at different levels. And you want to understand what levels they operate at and what the relationship between all of them is. And yes, some of them are more fundamental than others. But I think if you only knew about one cause, no matter which one it was, you would have an impoverished picture of the full story.
00:19:39
Speaker
That makes sense. I was thinking whether we could zoom in on one factor that perhaps explains 70 or 80% of the progress we're seeing. Do you think that's the case? Again, it matters a lot what level you're talking about. So at a certain level, what happened with the Industrial Revolution and the whole industrial age is just the high level pattern that I was talking about that applies to all of human history, which is that progress compounds.
00:20:04
Speaker
What's interesting about the Industrial Revolution is that it was something of an inflection point. I think there are a few things that you can see. Why was it an interesting inflection point? Why did it kick us into a new mode of production? You can understand that on a few levels. I think one important level to understand it on is
00:20:28
Speaker
the fundamental economic importance of energy and manufacturing. That's what the Industrial Revolution was about. It was mostly about energy and manufacturing. And then I think the other very important lens on it is that I do believe the Industrial Revolution was ultimately a product of the Enlightenment. And that means it was a product of certain ideas, ultimately philosophical ideas, ways of looking at the world,
00:20:54
Speaker
ways of thinking and ultimately political structures as well. I think those are probably the two most important ways to see it. We've talked about progress, especially technological progress.
00:21:07
Speaker
We at the Future of Life Institute, we're very interested in the relationship between technological progress and risk.
Technological Progress and Risk
00:21:14
Speaker
So do you have a sense of whether we have a good model of how technological progress goes together with risk? I think there's a lot we can say about it. You say a good model; I don't think we have very good models.
00:21:29
Speaker
I haven't really looked for it, but I haven't seen anything that an economist would call a model, like a formal mathematical model or anything like we do have for economic growth itself. We have some pretty good models of economic growth. They're not perfect. They can't explain everything, but they can explain a lot.
00:21:51
Speaker
I haven't seen even an attempt at that with risk. It would be very interesting to try to do. I think just from looking at the history of progress and risk, we can say a few high-level things. I think it's certainly true that progress overall decreases at least day-to-day risk.
00:22:12
Speaker
or even year-to-year risk for individuals. And you can see that just in mortality rates. So the long-term decline of mortality rates is one of the big stories of progress. Again, we don't have very good mortality figures going back much more than a century or two. But we have them for at least the last century and a half or two centuries; in some places, I think in Sweden, they go back to the mid-1700s, so maybe we get up to about 300 years or so.
00:22:39
Speaker
You can see very long-term, very consistent declines in mortality. A lot of that was from infectious disease, but not all of it. Some of it is from accidents, some of it is from violence, and so forth. Now, I think you could make an argument that advanced technology increases a certain sort of tail risk.
00:22:58
Speaker
that maybe our day-to-day risk gets lower, but there's an increased risk of some massive blow-up. Certainly, when you look at the history of war, it's a little hard to tell. Maybe war is getting less destructive overall, or maybe we're just having more infrequent wars that are exponentially bigger. You could say a similar thing about other kinds of catastrophes.
00:23:29
Speaker
I'm not sure whether that is correct overall, but it's an argument that you could certainly make, and it has some plausibility to it. Do you think we would have trouble developing a formal model of the relationship between risk and progress because risk is perhaps very difficult to measure? I'm not an expert in creating such models. I read academic papers that have those models in them, but if I were to try to create one, I would be very much a novice.
00:23:54
Speaker
But I would imagine that where you might start is with the data that we do have on mortality statistics, right? That's sort of the most straightforward, probably, you know, it's probably where you're going to find the best data. It's the most obvious thing. Death statistics are going to be the most accurate, the hardest to fake. That's probably where I would start.
00:24:16
Speaker
Can you model mortality rates and maybe model it in different times and places and for different demographics and for different causes?
Managing Risks from Technological Advancements
00:24:24
Speaker
And how would that fit different models? You mentioned there's perhaps an increase in large-scale risks from war. Do you think that also goes for risks such as pandemics or nuclear war? Certainly, there was no risk of nuclear war before nuclear weapons existed, and now there is. So that has obviously increased.
00:24:44
Speaker
Pandemics, yeah, the risk of that has probably increased, one, because we're just a much more connected society, and two, because we have bio research now, which could accidentally create or sometimes deliberately create lethal pathogens.
00:25:01
Speaker
At the same time, we also have a lot of technology to defeat pandemics. There has been nothing as bad as the 1918 flu since then. COVID was not as bad as the 1918 flu, even though the world is much more connected now. That's for a lot of reasons.
00:25:26
Speaker
with airplanes and everything, the disease can spread a lot faster. But with the internet, information about the disease can spread a lot faster. And the science of the disease can progress a lot faster. And we can sequence the DNA of the virus, or the RNA of the virus, and we can instantly publish that. And we can even track the virus around the world by sequencing its genetic material from different victims.
00:25:55
Speaker
And we can create a vaccine right away. So here's the thing. In the 1918 flu, it wasn't just the flu. What often killed you was not the influenza itself, but the subsequent bacterial pneumonia that you would get once your system was weakened from the flu. Today we have antibiotics. We would be much less susceptible to an influenza-pneumonia combination like that.
00:26:20
Speaker
And today we have mRNA vaccines and perhaps in the future we will have broad spectrum antivirals that are as effective against a broad class of viruses as our current antibiotics are.
00:26:31
Speaker
effective against many different bacteria. Maybe we'll have far-UVC light sanitizing the atmosphere, at least indoors, so that these things don't spread so much indoors. There's all sorts of things. Maybe we'll have wastewater monitoring so that we can detect these things much faster. I suspect that what happens is that we go through peaks of vulnerability
00:26:58
Speaker
So vulnerability ramps up as maybe technology creates risk, and then it ramps back down again as more technology decreases the risk.
00:27:08
Speaker
You see this with, for instance, car accidents and road deaths. Road deaths in the US, even in absolute numbers or per capita, ramped up as more people were driving, then peaked in the middle of the 20th century, and they've been coming down ever since. Even in absolute numbers, road deaths are on a long-term decline.
00:27:36
Speaker
even as the population grows and as vehicle miles traveled grows even faster. That's probably the high level thing I would look at. And I think if you just think about general existential risk, I think most people who think deeply about this would probably say, yeah, in the future, we are going to be much less vulnerable. And in fact, perhaps now is a uniquely vulnerable time. So maybe we're somewhere near the peak of that risk curve.
00:28:03
Speaker
If we look historically, can we say anything in general about how the pace of development of a given technology has an impact on the safety of that technology? I don't know if the pace of development, per se, is the best thing to look at. One of the things that jumps out to me the most is that some of the worst risks are the ones where we just literally didn't have the knowledge to even anticipate them.
00:28:31
Speaker
An example that comes to mind is x-rays. So when x-rays were discovered, people had no idea they were harmful and they would x-ray their hands. People would go get an x-ray of their own bodies as like a novelty at parties for fun with like
00:28:46
Speaker
no shielding. X-ray technicians, actually, when they were setting up a machine to make sure it was working properly, they would do an X-ray of their hand. And then a lot of these guys ended up getting severe damage to their hands. Some of them had to have their hands amputated and so forth. It was quite common. I mean, people died from experimenting with this stuff. And you think about it, this radiation phenomenon,
00:29:15
Speaker
You can't see it. You can't hear it. You can't feel it. You can't detect it in almost any way. It's this ghostly, ethereal thing that seems to have very little contact with the material world. How could it possibly hurt you, right? Nobody could really have anticipated that. So that's the kind of thing: it's the unknown unknowns that are really the worst.
00:29:40
Speaker
I'm thinking about this in real time because I haven't analyzed this question. I'm just trying to pull up some relevant examples and think about them. Automobiles are a good example. Early automobiles were pretty dangerous. They were slightly less dangerous in that they didn't go as fast and people weren't going as fast. The roads weren't paved so that you couldn't even go very fast.
00:30:02
Speaker
They were missing most safety features. They had brakes and they had steering, obviously. But beyond that, they didn't have seat belts. I can't remember if they had horns; maybe they weren't standard. They didn't have lights to signal, brake lights, turn signals, that kind of thing. We didn't have stoplights and traffic direction, that kind of thing.
00:30:27
Speaker
And of course, anybody and everybody would walk in the streets, horses and children and whatever, right? There was no separation. So, what made automobiles dangerous?
Automobile Safety Evolution
00:30:38
Speaker
I mean, part of it was that automobiles began to be placed in the hands of basically everybody.
00:30:45
Speaker
So contrast that with the railroads, where you had a trained professional running a locomotive. But with automobiles, it started to be, yeah, anybody who can buy one can drive one. And of course, again, we didn't have driver's ed, we didn't have driver's licenses, et cetera. There was this sort of widespread availability and economic incentive for lots of people to get one.
00:31:06
Speaker
Another thing I think about around the same era a little earlier is the patent medicine industry. So around the very late 19th century into the early 20th, the medical industry, like the drug industry was just absolutely full of shams and fraud.
00:31:23
Speaker
There was a lot of stuff that was at best completely ineffective and often actually quite harmful, some of it containing large amounts of alcohol or even toxic substances. And I think a significant part of what drove that was the combination of a lot of people really, really wanting solutions to their medical problems
00:31:47
Speaker
and the fact that we literally didn't have any. Actually working drugs had not yet been discovered. And so people grasped at anything they could find. They were willing to take something that was completely ineffective because literally nobody was offering them anything that was effective.
00:32:02
Speaker
And there are various reasons why that problem got solved. Part of the way it got solved was we clamped down on false advertising and demanded truth in labeling and that kind of thing. And just word spread. But part of what actually happened also was that we discovered actually working drugs. And then people could take those instead.
00:32:27
Speaker
So those are a few things kind of off the top of my head, a few lessons that we might be able to draw from a few case studies. But again, it's not obvious to me that sort of pace of development is a highly meaningful variable or something, or like a highly correlated one. But I just haven't thought about it that deeply in terms of kind of looking at the case studies or really analyzed it.
00:32:49
Speaker
You've been working a bit on a philosophy of safety. At least you have a post on this. And one of your main points here is to say that safety is actually part of progress, at least historically
Are Safety and Progress Oppositional?
00:33:01
Speaker
speaking. So perhaps you could tell us about that. I think it's a mistake to think of safety and progress as things that are opposed to each other. Again, come back to drugs, right? The drugs on the market now are much, much safer than they used to be. Part of the reason for that is that we do clinical trials.
00:33:19
Speaker
And we have a very careful testing regime. And then after we do the trials to make sure a new chemical is safe and effective, we also have very careful manufacturing processes to make sure that as we manufacture the drug, we're actually making the thing that we think we're making, and it's not getting contaminated with dangerous things and so forth.
00:33:41
Speaker
All of that adds overhead. And surely more drugs would be released onto the market if we didn't test them so carefully before we released them.
00:33:52
Speaker
But I think it would be a little ridiculous to claim that the introduction of clinical trials was something that was opposed to progress or that harmed progress in the pharmaceutical industry. It was something that contributed to progress in the pharmaceutical industry, because we don't just want any old drugs flooding the market. We actually want safe, effective drugs.
00:34:14
Speaker
When you think about progress, progress is ultimately getting us more of what we want as human beings, and what we want is safe, effective drugs, not random, dangerous drugs. The introduction of clinical trials, even if it decreased the number of new drugs or increased the cost of getting them out, if it's a good trade-off,
00:34:33
Speaker
If it's cost-effective, if it's a cost-effective safety measure, then it actually buys us something that we want for a cost we're willing to pay. And that's progress. That's what I mean when I say that, you know, and again, I mean, automobiles are surely more expensive than they would be if they didn't have any of the safety features that they have.
00:34:50
Speaker
But that doesn't mean that the introduction of those features was somehow a reversal of progress in the automotive industry. It was a part of progress in the automotive industry. It contributed to that progress. Safer cars are better cars, even if they're a bit more expensive. Again, as long as the cost-benefit trade-off is worth it.
00:35:10
Speaker
Yeah, there's a case to be made that we are overcautious and we overregulate in some areas, perhaps air travel, and we are undercautious and underregulate in other areas, such as maybe biological labs. Do you have any ideas why we might both overregulate and underregulate at the same time in different areas?
00:35:31
Speaker
I mean, this phenomenon is very common. I mean, it happens even within one agency. Scott Alexander has written a lot about the FDA in the US, and he points out that the FDA is
00:35:47
Speaker
too strict about some treatments and not strict enough about others. So occasionally it will let through drugs that don't even work super well, like the Alzheimer's one that I can never remember how to pronounce.
00:36:02
Speaker
And then maybe they take a long, long time to approve something that maybe should have gotten approved a lot earlier. Overall, Alex Tabarrok makes the argument that on balance, they're actually way too strict. But that doesn't mean that every decision they make is therefore too strict.
00:36:20
Speaker
Yeah, why does this happen? I mean, I certainly think visibility is a big part of it. Air safety is very visible. When a plane crashes, it makes the news. Bio lab leaks are not very visible. Most people don't even know that they happen.
00:36:32
Speaker
Terrorism is extremely visible. And so we overcompensate for terrorist attacks in terms of security. And then maybe we undercompensate for lab leaks, that kind of thing. So visibility is a big part of it. Another part of it has to do with sins of omission versus sins of commission. We set up agencies whose job is to regulate or to provide us with safety.
00:36:59
Speaker
And maybe those agencies are structured in a certain way. A regulatory agency like the FDA or the NRC, the Nuclear Regulatory Commission, is set up to review and approve all new things of a certain type.
00:37:13
Speaker
And so one problem with that structure is that it creates very one-sided incentives, because if anything gets approved and turns out to be bad or creates harm, the agency might get blamed for approving it.
00:37:29
Speaker
If it gets approved and it doesn't create harm, mostly nobody notices that they did a good job of approving it. If they fail to approve something that would have created great good, well, that's also invisible, right? Nobody sees that. So they get no credit for approving good things. They get blamed if they approve bad things, and they don't get blamed if they fail to approve good things. And so all the incentives are towards being too strict.
00:37:59
Speaker
What I described in a recent essay is this regulatory ratchet where anytime anything bad happens, the rules get stricter. That's pretty much a one-way street. You just end up with this morass of regulations. You end up with regulatory overkill. I think that's a very common pattern.
Defense in Depth as a Safety Strategy
00:38:17
Speaker
You write that safety requires defense in depth. What is defense in depth? And perhaps we could take an example where we talk about how it might work. Yeah, sure. I mean, defense in depth just means you have many layers of things. You don't count on any one mechanism as a silver bullet to create safety. Safety is created by attacking the problem from many different angles. So you have some sort of redundancy in the safety features.
00:38:43
Speaker
So come back to automobiles. We have seat belts and anti-lock brakes and collapsible steering columns and crumple zones and shatterproof glass. And we have turn signals and brake lights and headlights. And we have traffic systems. And we have divided highways and overpasses and underpasses and on-ramps.
00:39:10
Speaker
And we have traffic lights and stop signs. And we have social mechanisms like we have driver's education and driver's licensing. And we have social stigma campaigns against drunk driving. And we have airbags and we have all these things.
00:39:28
Speaker
Some of them are literal inventions. Some of them prevent accidents; some make accidents less damaging, like a seat belt or an airbag. Some of them are in the systems, like the road system rather than the vehicle itself. Some of them are about the driver. Some of them are social. Some of them are legal. Some of them are moral. So you add up all of these things, and it's all of them put together that create road safety.
00:39:56
Speaker
That's defense in depth. Another metaphor is the Swiss cheese model, where any piece of Swiss cheese is going to have some holes. But if you take a bunch of pieces where the holes are not correlated and you stack them up, then there's no one spot where you can penetrate the whole thing, where there's a hole that goes all the way through.
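(A toy illustration of why stacking imperfect layers works, assuming the layers fail independently; the numbers are made up purely for intuition:

\[
P(\text{breach}) = \prod_{i=1}^{n} p_i,
\qquad\text{e.g., three layers with } p_i = 0.1 \;\Rightarrow\; P(\text{breach}) = 0.1^3 = 0.001.
\]

In practice the layers are never fully independent, which is why the holes "not being correlated" matters so much.)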
00:40:13
Speaker
And so basically it's a recognition that any safety mechanism is going to be imperfect and incomplete. And so the only way you get anything approaching acceptable safety is to layer multiple of them, combine multiple of them in a system. Do you think we can predict in advance which technologies might become dangerous so that we can develop these safety features and we can think about defense in
Predicting and Managing Dangerous Technologies
00:40:35
Speaker
depth? Well, sure, to some extent. There are always unknown unknowns and there are always errors in prediction.
00:40:41
Speaker
We can and do very often anticipate risk. It can sometimes be hard to tell the difference between true risk and people who are just worried about new technology or are opposing it and are looking for arguments. Maybe there isn't even a very sharp line to be drawn between them.
00:41:02
Speaker
Go look at 1820s England or Britain. It's hard to find a more pro-progress time or place. These people erected a statue to James Watt and called him like a benefactor of the world. Yet at the same time, what was going on? Well, the big development that everybody was talking about and hadn't quite yet happened was railroads.
00:41:24
Speaker
And you get people writing about railroads and saying, these things can never be safe. There's no way. I mean, come on. Some people were proposing that the locomotives might travel faster than stagecoaches. And you get people saying, come on, there's no way that could possibly be safe. You've got 10 tons of iron barreling down a track at what, 18, 19, 20 miles an hour?
00:41:47
Speaker
Come on, right? I mean, this is literally how people were thinking. And in fact, there was one writer who basically hoped that Parliament would come to its senses and pass a law limiting locomotives to a speed of eight or nine miles an hour, which obviously was the maximum that could possibly be safe. So there are always these kinds of worries, and sometimes they're even correct. And from a certain angle,
00:42:13
Speaker
Were we correct to worry about train wrecks? Absolutely. There were lots of train wrecks. It was a very real safety issue, and it wasn't hard to see that. But it was also wrong to say that eight or nine miles an hour was the maximum safe speed and that we should just limit things to that. And if we had done that, it would have been very bad for the economic development of the world going forward. So yeah, it's always possible to... And by the way, even before railroads became a big thing,
00:42:43
Speaker
people were starting to recognize some of the safety and even the pollution problems. In 1829, there was a famous contest called the Rainhill Trials to test a bunch of locomotive engines against each other, to try to figure out whether any engine was ready to actually work a passenger railroad, which none had yet done.
00:43:07
Speaker
And in this contest, they stipulated a couple of things. One thing they stipulated, if I'm recalling this correctly, was that there had to be pressure valves on the boiler
00:43:18
Speaker
to make sure that you didn't get a boiler explosion from a buildup of too much pressure. And in fact, they'd already started to learn about the human factors. It turns out the hardest people to convince to go along with safety measures are the workers themselves, whose lives are in danger. The engineers were running these locomotives for cargo, at coal mines and places like that.
00:43:44
Speaker
And the engineers, if the pressure valve went off, sometimes they would just hold it down because they basically didn't believe in the pressure valve. The rules of this contest, if I recall correctly, stipulated that the engine had to have at least one pressure valve that was out of reach of the engineer, so that he couldn't go hold it down. And then another thing they stipulated was that the machines basically were not allowed to emit smoke.
00:44:09
Speaker
Either they had to somehow consume their own smoke, or they had to burn smokeless fuel, such as the purified form of coal known as coke, which is what they all did. In fact, all the engines in the contest burned coke because they didn't want to create smoke.
00:44:28
Speaker
And that was actually part of the law. I think when Parliament passed an act allowing this railroad to get created, they said you're not allowed to create a whole bunch of smoke. So already people were thinking about some of the safety and environmental issues.
00:44:42
Speaker
And that was 200 years ago. That was a time that was way, way less safety-conscious than we are today. And it was a world that was just objectively much riskier, where the background risk that everybody faced every day was much higher. You could catch cholera or malaria any day and be dead in a week, at any age, at any time of life, no matter how healthy you were.
00:45:06
Speaker
So I think, yeah, people always do anticipate to some extent what some of the risks and problems might be and often try to compensate for them. But how much they do that, and what the incentives are and what the rules are, varies by time and place. So one of the main points in your philosophy of safety is that getting to safety, or getting safe technology, does not require sacrificing progress.
00:45:30
Speaker
But you also mentioned a couple of instances where it might make sense to slow down technological progress in certain areas in order to let safety catch up. For example, when it comes to genetic engineering, how widespread is this phenomenon where we might want to slow down in order for safety to catch up? And are there perhaps also technologies where we would like to institute an indefinite ban, such as, for example, gain-of-function research?
00:46:00
Speaker
So first off, when you talk about slowing down, I think what's important to conceptualize is that if we're slowing down progress, we're slowing down one dimension of progress. But we should think about the total package of what we actually care about.
00:46:18
Speaker
Go back to what I said about drugs, right? Maybe we're slowing down the rate of introduction of new molecules onto the market. That's one dimension of progress. But we're not slowing down the overall progress of the pharmaceutical industry by introducing clinical trials.
00:46:32
Speaker
And I don't think we slowed down the overall progress of genetic engineering by pausing certain types of experiments for eight months until we could do the Asilomar Conference of 1975, where they got together to decide on safety practices. Again, not if you add up the total picture of what do we actually want to do. That's why, again, we have to think about integrating safety into an overall program of progress. It is an integral part.
00:47:01
Speaker
Now, the other question you asked was, are there some technologies that we should ban, such as gain of function? I don't consider gain of function to be a technology. I consider it to be a research avenue, or a type of experiment that we run. I'm not an expert on this, but I am certainly inclined to believe, from what I have heard, that there are certain types of experiments along these lines that we should not do. The term gain of function might not be the best term to describe them. I'm not sure if we have the right term for them.
00:47:31
Speaker
I would just say, just go listen to the interviews that Kevin Esvelt has given on this topic. I think he's very smart and has very good judgment on this. I'm basically willing to go along with whatever he says is best, given what I've heard from him. He certainly believes that there are certain types of experiments where we literally try to make viruses more dangerous.
00:47:56
Speaker
that we should not do. Actually, even more to the point, one of Esvelt's points is that not only do we try to make viruses dangerous or to identify which ones would potentially create lethal pandemics, but then people are encouraged to publish this and even to create ranked lists of viruses by how dangerous they would be, which is just putting weapons in the hands of bioterrorists or foreign bioweapons programs
00:48:24
Speaker
for basically no gain. There's a theory that we gain from this because by identifying the pathogens ahead of time, we can be ready for them. But I don't think there's any evidence that we're actually doing that. Certainly, we weren't ready for COVID when it hit. And having knowledge about coronaviruses in particular
00:48:45
Speaker
wasn't what allowed us to combat them. What allowed us to combat them was very rapid response platforms. So the fact that we had invested for decades into mRNA vaccine technology
00:48:57
Speaker
was what allowed us to respond, not to mention all of the underlying, broader enabling technologies like genetic sequencing and genetic printing and so forth. It was those general capabilities and that rapid-response platform that allowed us to beat COVID, not some foreknowledge of the particular virus that was going to come along.
00:49:18
Speaker
So yeah, I certainly think when there is a relatively delimited
Are Some Technologies Too Dangerous to Pursue?
00:49:23
Speaker
or narrow area of research that creates particular unacceptable risks and which is not generating anything like proportional gain, and when there are other ways to sort of learn the key things that we need to learn,
00:49:37
Speaker
It's not as if we're cutting off a whole area of biological knowledge by restricting these experiments. Then yeah, we shouldn't do them. There are some things that are too dangerous to do and that aren't ... You always have to apply cost-benefit analysis to these things. Just if we're speculating here, do you think there are technologies that are so difficult to handle or so intrinsically dangerous that
00:50:02
Speaker
We should not go near them. Because it seems a little suspiciously lucky that all technologies turn out to be controllable by us with enough experience and experimentation and trial and error and so on. Do you think there are areas of science or technology where it would be ill-advised for us to go?
00:50:21
Speaker
Yeah, so first I don't think it's luck. I think it's the nature of human problem solving that we can conquer pretty much anything given sufficient compounded intelligence and work and research on the problem.
00:50:35
Speaker
Are there any technologies that might be just too dangerous? I think it depends how broadly or narrowly you're defining technology. There are certain things that we ban or that are extremely limited. I can't name them off the top of my head, but there are chemicals that are just too dangerous to be handled.
00:50:53
Speaker
And there are viruses, maybe like smallpox, where we would be better off if we just literally destroyed all of them, even the last remaining stores in some research facility somewhere, et cetera, and somehow scrubbed the genetic sequence of smallpox off the internet.
00:51:10
Speaker
See, I'd be much more hesitant about that. Well, off the internet maybe, I would be very hesitant to destroy it from all of humanity's records. That would be an irretrievable loss of information, right? I don't know. I'd have to hear arguments pro and con there. I'm not ready to say we should do that. Scrub it off the internet. Yeah, I might be good with that, right? Classified, dangerous information, that kind of thing.
00:51:34
Speaker
I'm also partial to, to go back to Kevin Esvelt, his program called SecureDNA, where the ultimate goal is to make it impossible for anybody to just print any known dangerous pathogen, or anything that is too similar to such a pathogen. Now, if you define technology very broadly,
00:51:57
Speaker
Like, oh, is artificial intelligence just too dangerous and we should never build it? Or is genetic engineering too dangerous and we should never pursue it? I haven't seen any technology at that level where I think, yeah, we just need to cut off this entire branch of the tech tree because the whole thing is too dangerous. I wouldn't say that can never, ever happen, and I can't prove to you now that it'll never come about, but I haven't seen it yet. And my default assumption for anything at that level of generality
00:52:27
Speaker
would be that we can find good ways to use it. You have an essay in which you sketch out four different perspectives on AI risk, and I found it very useful. I read this essay as being ordered by perceived risk. So as you go down the list, as I'm about to do, the risk kind of increases. The first perspective from which we can view AI is that it is software.
00:52:55
Speaker
And as with all software, it has bugs, but we can solve these bugs. The second is that AI is a complex system and it can fail in ways that are difficult to predict, as we saw with the financial system in 2008.
00:53:14
Speaker
The third one is that AI is becoming, or is perhaps on the verge of becoming, a powerful agent with misaligned interests. That seems pretty difficult to control. And the fourth one is that AI is becoming a second advanced species. And there it seems almost...
00:53:41
Speaker
Yeah, a very difficult problem. Where do you fall yourself?
Perspectives on AI Risk
00:53:46
Speaker
How do you see AI risk? I think that we will certainly have complex-system-type failures and even some sort of principal-agent problems. I think we've already seen evidence of principal-agent problems, for instance, in the documented instances of reward hacking. And you might explain what a principal-agent problem is.
00:54:11
Speaker
Sure. The principal-agent problem is just that any time you, the principal, delegate to some agent to do something for you, it's hard to align your interests. The agent is ultimately acting in their own interest, for their own goals, and no matter how hard you try, it's hard to get those goals to be your goals.
00:54:27
Speaker
So think of a company that employs people. You have a company, you have workers, and the company tries to set up a reward structure where they promote people for doing good things.
00:54:43
Speaker
And then what you find is in any such system, people start to game the system and they start to work for the letter of the reward structure rather than the spirit of it. So one famous thing that happens in technology companies is
00:55:00
Speaker
If you're a product manager, to get promoted beyond a certain level, from senior to super senior or whatever, you have to create and launch a new product. And so then what do you find? Well, you find the company launches too many products.
00:55:17
Speaker
And then after launch, those products are not maintained and cared for with as much love. And so this is one very plausible hypothesis for why does Google launch so many products, often overlapping with and competing with each other, and then often shut them down some years later? Well, one argument is there's internal pressure for this to happen because the PMs want to get promoted. And so they create new products for their own sake, even though it's not really within the best interests of the company overall.
00:55:45
Speaker
There are many, many examples of this. It's just Goodhart's law: whenever a measure becomes a target, it ceases to be a good measure. And so if you, the principal, try to measure what's good for you and set that up as a target for the agent, it ceases to be a good measure.
00:56:01
Speaker
Anyway, so these sorts of things happen all the time. And given that they happen so frequently, I think it's not at all surprising that we'll get some of this with AI agents. So I believe those things. I am much more skeptical of the new intelligent species analogy.
00:56:21
Speaker
I'm even more skeptical of the clash-of-civilizations analogy, where you think, oh, this is like when Western empires met less technologically advanced civilizations, or an alien invasion where the aliens are at another technological level. I don't think that analogy really applies at all. Homo sapiens evolving amid Neanderthals, maybe that's a somewhat better analogy. But even that is not perfect.
00:56:49
Speaker
So I'm kind of less convinced by those; I think those analogies are much more speculative. Yeah, and for listeners interested in the full argument for why AI might become analogous to a separate advanced species, they can go back and listen to the podcast I did with Dan Hendrycks in this feed. So we have the principal-agent problem, as you talked about,
00:57:14
Speaker
And in some sense, companies are quite intelligent. They have resources to make sure that the employees of those companies act in the interest of the company. In the case of AI, we are the principal and the agent is the AI. But what if the agent grows to become more intelligent than the principal? Will it still be controllable?
00:57:41
Speaker
I certainly see the argument for why it would be difficult to control. I'm not convinced that we can't find some way to do it. I think a number of different futures are open, and I can imagine a future in which AI essentially remains a tool. If that is somehow impossible, I can imagine a future in which AI becomes autonomous
00:58:10
Speaker
but still co-exists peacefully and productively with us. Possibly, and I would count this as not as good, a future in which we basically get taken care of like pets. Any dog or cat has a much better life as a human pet than it could ever have on its own.
00:58:32
Speaker
I don't know. Again, we're getting into things where all of our conclusions are based on thought experiments and pretty high-level analogies. And I just think we have to be extremely wary about drawing too much of a conclusion from that type of reasoning.
00:58:48
Speaker
Do you think that people disagree about AI risk because they anchor on different analogies? We have two eminent AI scientists, Geoffrey Hinton and Yann LeCun. Perhaps Yann LeCun thinks of AI risks from the perspective of AI as software with bugs.
00:59:10
Speaker
Whereas perhaps Geoffrey Hinton thinks of it more in the sense of the advance of a new, separate species. Do you think that analogies play a large role in the disagreement here?
00:59:24
Speaker
Yeah, I think they do in part because, again, that's all we have to go on. I think there are many, many reasons for why people disagree. They can be engaging in motivated reasoning because they have some prior conclusion they want to protect. They can just have some fundamental philosophical or psychological difference in temperament as to how optimistic or pessimistic they are. There's lots of different reasons why.
00:59:48
Speaker
in a field and on a question where there is so little to go on, where we have no empirical evidence and no formal rigorous mathematical models, nothing to really base a very firm conclusion on. And all we're doing is speculating. It's very easy for people to reach different conclusions. And which conclusions they reach are going to be based on things like their underlying temperament or their biases or which analogies they happen to latch onto.
01:00:18
Speaker
Yeah, I would say we have a little bit more to go on than that. In machine learning, you can do experiments and see that, for example, it can be difficult to specify a reward in a learning system or that the model you're developing might be able to hack the reward you've given it and so on. So we have something more than just analogies, I would say. Fair, but not enough to get us to the big conclusions.
01:00:46
Speaker
Yeah, agreed. And so perhaps we should try to get more empirical data to solve this problem.
Solutionism Philosophy in Technology
01:00:54
Speaker
This leads us into your essay on solutionism and AI safety. So what is solutionism in general, and how do you see it applying to the AI safety problem?
01:01:07
Speaker
Solutionism is my third way. I tend to hate the discourse around safety and other kinds of technological problems because it tends to break down into the complacent optimists versus the defeatist pessimists. On the one hand, the folks who are so pessimistic that they see no way to solve the problems that they envision or that they identify.
01:01:33
Speaker
And then on the other hand, the folks who are so optimistic to the point of complacency where they don't even see the problems or they want to dismiss the reality of the problems, claim they don't exist, claim that they're trivial, claim they'll be easy to solve, claim that they're already solved, et cetera, and so forth. And I think problems are often real. And the way we make progress is not by denying or dismissing them, but by tackling them and solving them.
01:02:00
Speaker
And so in my sort of main essay on solutionism, which I wrote for MIT Technology Review, I gave the example of William Crookes, who in 1898 warned the world that we were running out of fertilizer and that if we didn't solve the problem, we were heading towards famine.
01:02:21
Speaker
And he was right. We were relying on natural fertilizer, and we were using it up way faster than any natural process was replenishing it. And we were going to run out. It was good that he foresaw that.
01:02:37
Speaker
You could call him, and some people did call him, an alarmist, because he was warning of bad things happening in some quite dire terms. But he didn't do this just to sell books and get on talk shows, which, by the way, they didn't even have back then.
01:02:54
Speaker
He did sell books, by the way. He made a speech to one of the scientific societies, and then he turned it into a book which went through three editions. So he did sell some books, but that wasn't his purpose. Also, by the way, he did not call for degrowth. He didn't say we need to reduce population or anything like that.
01:03:12
Speaker
He said, we need to invent synthetic fertilizer. His main purpose was to call on the chemists of the world to come up with a way to synthesize fertilizer and to fix nitrogen. In fact, he had the basic idea correct. He said, we need to fix nitrogen from the atmosphere. He even had the beginnings of a plan for how to do it. Now, his plan was not the one that ended up getting adopted. He thought we could do it with electricity, and we ended up doing it
01:03:36
Speaker
with chemistry. But he had an idea. And so he pointed the way. And indeed, within some 20 years or less, chemists did come to the rescue and created the Haber-Bosch process and synthetic fertilizer. And so that is solutionism, right? Acknowledge the reality of the problem, fully embrace the reality of the problem, don't deny it or dismiss it, or put on your rose-colored glasses, and then go solve it.
01:04:02
Speaker
And then we move forward and we solve the problems of progress with more progress. It's the last part that might be very difficult. I mean, we have some of the people who are working very close to this in the top AI companies and in academic departments and nonprofits and so on talking about how difficult it is to solve the AI alignment problem.
01:04:26
Speaker
What does solutionism tell us that we didn't already know in a sense? Of course we have to solve the problem, you could say. So what is it that solutionism asks us to focus on that's different?
01:04:39
Speaker
Solutionism is a philosophy and philosophy only takes you so far. At a certain point, you pass over into technical details and those have to be answered by the engineers. At the philosophical level, I can't tell you exactly which strategies are most promising or where we should invest resources. It's more of a way of looking at how to frame the problem. I really think if you look at
01:04:59
Speaker
AI is just one example of this. Climate change is another example. There's lots of examples. You get a lot of people who are just harping on the difficulties, and then you get people who don't want to hear the pessimism, who are then inclined to reply by denying the existence of the problem.
01:05:17
Speaker
And I think once you're aware of that pattern, you can see it in a lot of the discourse and you can avoid it in your own thinking and in the way that you talk about the problem. And so I think that's the value of having this concept and understanding this kind of false dichotomy or this trichotomy.
01:05:33
Speaker
As for what to actually do, again, I'm not the expert. There are a lot of different ideas. I think we should pursue all of them, or anything that anybody thinks is promising and worth investing in. I do think that we've been held back for a long time by... This will echo something I said a little bit earlier, but I'll reinforce it here and cite part of where I got it from: Scott Aaronson. He did a different podcast, AXRP, the AI X-Risk Research Podcast.
01:06:01
Speaker
He did a very good interview with them where he pointed out that any field of science or engineering, in order to make progress, needs one of two things. It needs either empirical data or it needs a rigorous, formal mathematical theory.
01:06:16
Speaker
And for decades, AI safety has had neither of those things. And so it's been very difficult to make progress. Now that we have some pretty capable AI models, we finally have the ability at least to get some empirical data. And so actually, now might be the golden age of AI safety, like we can finally do something about it because we can start testing out hypotheses.
01:06:37
Speaker
Hopefully, that will make it easier to actually make some progress. One potential problem there is that AI might be different than other technologies in that it moves too quickly for us to do this trial and error and experimentation that has worked for other technologies.
01:06:53
Speaker
such that by the time we get these capable models that we have now, we are already close to even more capable models, and we don't have the years and years, or decades upon decades, of experimentation that we might have had with, say, cars, for example.
01:07:12
Speaker
It might be the case that we are now coming around to thinking about technical AI safety as a real problem and that we have the funding and the means to experiment and AI safety is becoming a real field, but is it perhaps too late? Like first off, historically speaking,
01:07:30
Speaker
I think we're extremely early when it comes to thinking about AI safety. I don't know many historical examples of something where people have been so concerned about safety for so long before you had any real capability at all. And where people were researching it and writing papers and creating entire institutes and everything. Historically speaking, this feels to me like the most safety conscious that humanity has ever been. I don't know how it could have been earlier.
01:08:00
Speaker
And again, there's only so much progress you can make when you have almost nothing to go on. So I literally don't know: if we had just poured even more resources into it even earlier, I'm not convinced that we wouldn't have just spun our wheels harder. Ultimately, I think that safety of this type, especially safety that comes from research and development and from things in the lab,
01:08:25
Speaker
is, in large part, in the hands of the technologists themselves who are on the front lines. They are the ones who are, metaphorically speaking, standing in front of Pandora's box and holding the keys. They're the ones who can decide whether to open it or not, and whether to open it in a secure, contained facility or not, et cetera.
01:08:44
Speaker
I think it's the people who know the most about these systems, who understand them the deepest, and who are closest to the literal concrete day-to-day decisions that are being made, who are best positioned to make these decisions. I'm not going to tell them what they should be doing. I will tell them that I think they should pay attention and take the issue seriously.
01:09:03
Speaker
Just as if you were a bio-researcher in a lab, I think you should take the biosafety issues very seriously. But I'm not going to tell the bio-researcher which pathogens are the ones that require a negative pressure laboratory and which ones require full PPE and which ones require all the different mechanisms.
01:09:25
Speaker
I do think we should think about it and take sensible precautions. Certainly, we should think a lot about what we are hooking these systems up to. One thing I would wonder about is, what's the security plan for GPT having API access to various things? It's getting tools now and you can plug it into all sorts of stuff.
01:09:48
Speaker
What are people going to do about that? That seems like a thing where, okay, now that it can do something other than just chat with you, that's certainly the kind of thing where we want to think very carefully about what we're hooking it up to. If we're going to give it an API to anything, we should think about what that API can do and what someone might do with it.
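As a purely illustrative sketch of what "thinking about what we hook it up to" could look like in practice, here is a minimal allowlist layer between a model and its tools. The tool names and the stub executor are hypothetical placeholders; nothing here reflects any particular provider's actual API.

```python
# A minimal, hypothetical gating layer between a model and the tools it can call.
ALLOWED_TOOLS = {"search_docs", "read_calendar"}             # safe to call automatically
REQUIRES_HUMAN_APPROVAL = {"send_email", "execute_payment"}  # gated behind human review

def run_tool(tool_name: str, arguments: dict) -> str:
    # Stub executor standing in for whatever the tool actually does.
    return f"ran {tool_name} with {arguments}"

def dispatch_tool_call(tool_name: str, arguments: dict, approved_by_human: bool = False) -> str:
    """Only execute tool calls that are allowlisted, or explicitly approved by a person."""
    if tool_name in ALLOWED_TOOLS:
        return run_tool(tool_name, arguments)
    if tool_name in REQUIRES_HUMAN_APPROVAL and approved_by_human:
        return run_tool(tool_name, arguments)
    raise PermissionError(f"Tool '{tool_name}' is not permitted for this agent.")

print(dispatch_tool_call("search_docs", {"query": "quarterly report"}))
```

The design point is simply that the set of things the model can trigger is an explicit, reviewable list rather than whatever happens to be reachable.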
01:10:07
Speaker
Do you think AI is a normal problem? Do you think AI is a problem of the sort that we have faced before and solved before? Or do you think AI risk might be different? My first question is, are there any normal problems? Again, at what level? On some level, it's a new and different problem. There are many aspects of it that are new and different.
01:10:32
Speaker
At another level, it's a problem in that it's an aspect of reality that we have to contend with, and that's just like all our previous problems. It just completely depends on what level you're talking about. Do you think we can handle AI risk with traditional means? Here I'm thinking, for example, of insurance.
Can Insurance Manage AI Risks?
01:10:50
Speaker
Saying that AI companies by law must be insured to a certain extent, so that if they cause damages, they are liable for those damages.
01:11:02
Speaker
I think that would certainly help if it were defined properly. But again, there's no silver bullet. You need defense in depth. When you're looking at a safety mechanism, you don't want to ask whether it will solve everything because nothing will. You just want to ask whether it's a good thing to layer on. Proper definition of liability is essential, and having insurance schemes for that liability would help. But neither that nor anything else will solve the whole problem.
01:11:30
Speaker
What about the resources that we as an economy are dedicating to research into capabilities versus research into safety in the field of AI? Should we perhaps, and again here I'm thinking about traditional means, be subsidizing research into safety, because we believe that too many resources are going to AI capabilities?
01:11:59
Speaker
I mean, it depends on who's we, right? And it depends on what the proposals are for what we would do with marginal dollars. I don't immediately know. Within AI companies themselves, I would like to see safety integrated with the development.
01:12:15
Speaker
I mean, it makes sense to have some of it be separate. It certainly makes sense to have something like a black hat team, right? A team that's not responsible for capabilities, who then gets to think about all the ways this could go wrong, and whose whole job is to be the advocate for the other side. But just in general, are there high-ROI opportunities at the margin that we're missing? I don't know off the top of my head.
01:12:43
Speaker
Here's something concrete. Let's say a wealthy philanthropist sponsors a pool of prize money that's used to set up bug bounty programs for these systems such that anyone finding a security flaw can report that flaw and get money from the prize money pool instead of perhaps using that flaw in the system to earn money in some illicit way.
01:13:07
Speaker
Oh, yeah, sure. It's a good idea. We do that for regular software security as well. Companies have bug bounties, and that sort of thing certainly helps. Much better to set up incentives for people who discover flaws to white hat report them rather than exploit them.
01:13:24
Speaker
I've been talking about AI and how it might be different. And my thinking here is that AI is sort of the ultimate general purpose technology. If electricity is general purpose, then intelligence is definitely general purpose. It's kind of a meta-technology that can perhaps create other technologies.
01:13:44
Speaker
Don't you think that this changes the way we see the trade-offs involved? That we're dealing with something here that might be more powerful than any technology we've faced before?
01:13:57
Speaker
Certainly. I mean, we're very often facing something that's more powerful than any technology we've faced before, because that's the nature of progress in technology, right? When we created nuclear power, that was the most powerful energy source we had ever found. And when we created internal combustion engines, they were the most powerful engines we'd ever created. When we created fire, that was the most... Yeah, right. We generally don't waste a lot of time creating things that aren't going to be more powerful than anything we've ever created.
01:14:21
Speaker
Yeah, no, I think it's certainly true. Look, this is part of the reason to be worried about it. It's also part of the reason to be excited about it. I think in general, any powerful new technology is going to be maybe equal parts, exciting and worrying. Those things are going to go together because the very nature of power of technology means it can be used for great good and it can also be carelessly or recklessly used or it can be abused and used for evil. That's the case with any new technology.
01:14:51
Speaker
What are you most worried about: that we will, as a species, as humanity, seriously harm ourselves with AI, or that we will fail to harness the full power of AI, fail to use the technology to its full potential?
01:15:09
Speaker
I'm not super worried about either of those in the long run. I think failing to use it to its full potential is much more plausible. Some sort of disaster from AI is likely just because, again, every new technology creates potential for disasters and often creates disasters.
01:15:26
Speaker
We have plane crashes, we have nuclear plant meltdowns, we have all sorts of things. I expect some disaster at that level from AI, right? But wouldn't that disaster involving AI be more analogous to a nuclear war than a plane crash or a nuclear power plant meltdown?
01:15:44
Speaker
I mean, it could be. I mean, you could get things at any level, right? The smaller disasters are more likely and more plausible, but obviously less impactful. We started by talking about economic growth and the history of economic growth. And now I think we should talk about the future of economic growth.
Future of Economic Growth Rates
01:16:03
Speaker
I have this question that's been kind of on my mind for a long time, which is just for how long can current economic growth rates continue?
01:16:14
Speaker
Into the foreseeable future. And what does foreseeable mean? By the time we get anywhere near the point where the growth rates cannot continue, the world and human life and civilization will be so completely, unimaginably transformed that it will be unrecognizable from where we are today. So that point is beyond the horizon of what we can foresee.
01:16:39
Speaker
But there are some physical limits. And here I'm talking about actual limits in physics, for example, that we cannot expand beyond Earth faster than the speed of light. Sure, sure. There's only so much matter and energy within our light cone, right? And if you keep growing at even like 3% a year or whatever.
01:16:55
Speaker
let alone if you project forward the trend that I mentioned towards the beginning where progress actually speeds up over time, then at some point maybe you've used it all up. I don't know. It's a little hard to say because again, it's so far away from where we are right now. I just feel like there's not one but multiple barriers in between here and there where things are going to transform so much that we just can say almost nothing about what's beyond them.
01:17:25
Speaker
But all I know is that, again, it'll be a completely unrecognizable world. It'll be a world where there's no disease, where death is completely optional, where we can travel anywhere in the universe that is physically possible to travel, where we can build anything that is physically possible to build as fast and as cheaply as is physically possible to build it.
01:17:47
Speaker
where we've already created every possible form of art personalized to every possible different taste, where we can transform our bodies and inhabit any kind of physical object that it's possible for our consciousness to inhabit, where we've created every sport and game that we can possibly think of and have mastered them and had every kind of competition, where we've
01:18:13
Speaker
where we've solved literally every problem in the universe that's left to solve. I can barely begin to guess at what that world looks like, and I don't think anybody else can either. If we're just extrapolating current growth rates out into the future, that world is perhaps thousands of years away, just because if growth continues at about 2% or 3% per year for thousands of years, we will begin to hit these fundamental physical limits.
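As a rough sanity check on that "thousands of years" intuition, here is a back-of-the-envelope calculation. The 3% rate and the 10^80 target (a common rough estimate of the number of atoms in the observable universe, used here only as a stand-in for "physical limits") are illustrative assumptions, not figures from the conversation.

```python
import math

# How long does steady compounding take to reach an astronomically large factor?
growth_rate = 0.03        # assumed 3% growth per year
target_factor = 1e80      # illustrative stand-in for a hard physical limit

doubling_time = math.log(2) / math.log(1 + growth_rate)
years_to_limit = math.log(target_factor) / math.log(1 + growth_rate)

print(f"Doubling time at 3%/year: ~{doubling_time:.0f} years")
print(f"Years of steady 3% growth to expand by a factor of 1e80: ~{years_to_limit:,.0f}")
```

Even under this crude model, steady 3% growth only runs into astronomical-scale limits after several thousand years, which is the order of magnitude being gestured at above.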
01:18:43
Speaker
But what about, as you mentioned, what if the growth rate is itself speeding up? Here's one scenario. This is what I would call an explosion, an economic growth explosion, which just means that growth rates accelerate and reach extreme levels this century. So perhaps we see the world economy doubling every year or perhaps even every month.
01:19:09
Speaker
And then the scenario that you just described begins happening much faster than we might have thought. So perhaps this century, as opposed to thousands of years from now. How plausible is this scenario? And if you just extrapolate from the data we have today, is that the conclusion you reach?
01:19:35
Speaker
Yeah, but it's very hard to put any kind of numbers on it, right? But yes, so I haven't done the math. But certainly, if you project forward the trends, then you do get some significant increase in the rate of growth probably within this century. And maybe that means you hit some sort of ultimate physical limits in centuries rather than in millennia.
01:19:59
Speaker
Or even sooner, I don't know. Again, I haven't done all the math. But if you take the most naive curve fitting, you end up fitting something like a hyperbolic curve, which goes infinite in finite time, and that can't literally happen. It's a failure of the model when we get infinities. We're not actually projecting infinite growth, whatever that means.
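For readers who want the one-line math behind "infinite in finite time", here is a minimal sketch, assuming growth whose rate rises with the size of the economy; the constants are generic symbols, not fitted values.

```latex
% Superexponential growth with a finite-time singularity.
% If the growth rate itself rises with the size of the economy x:
\[
  \frac{dx}{dt} = k\,x^{1+\epsilon}, \qquad \epsilon > 0, \quad x(0) = x_0 ,
\]
% then separating variables and integrating gives
\[
  x(t) = \left( x_0^{-\epsilon} - \epsilon k t \right)^{-1/\epsilon},
\]
% which diverges as t approaches the finite time
\[
  t^{*} = \frac{x_0^{-\epsilon}}{\epsilon k}.
\]
% The ordinary exponential is the \epsilon \to 0 limit and never blows up in finite time.
```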
01:20:19
Speaker
Obviously, there's something wrong with the model. But maybe what's wrong with the model is just that it doesn't account for hitting physical limits. Or maybe there's something else going on. I don't know. So rather than a simple curve fitting exercise, which gets you maybe this sort of hyperbola, another way to do it is to try to fit a series of exponential modes. So Robin Hanson attempted this in a paper some decades ago and found that with
01:20:45
Speaker
a little bit of clever model selection and parameter selection, you can fit all of known or estimated economic history going back millions of years to a series of three exponential modes with smooth tradeoffs or smooth transitions between them. And so he roughly says, OK, maybe these modes are like hunting, farming, industry.
01:21:15
Speaker
And so the fun thing about this is if you look at the relationship between one mode and the next, it turns out that each mode goes roughly two orders of magnitude faster. And also, the length of time to the next mode arriving is like two orders of magnitude less.
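To show how that extrapolation cashes out numerically, here is a back-of-the-envelope sketch. The doubling time and era length below are coarse, illustrative placeholders, not figures taken from Hanson's paper.

```python
# Naive extrapolation of the "each mode is ~2 orders of magnitude faster and
# arrives ~2 orders of magnitude sooner" pattern described above.
# All inputs are rough, illustrative assumptions.

industry_doubling_years = 15   # assumed doubling time of the current (industrial) mode
farming_era_years = 10_000     # assumed rough length of the agricultural era

speedup = 100                  # next mode doubles ~2 orders of magnitude faster
era_shrink = 100               # each era lasts ~2 orders of magnitude less time

fourth_mode_doubling = industry_doubling_years / speedup   # ~0.15 years
industrial_era_length = farming_era_years / era_shrink     # ~100 years

print(f"Hypothetical fourth mode: economy doubles every ~{fourth_mode_doubling * 12:.0f} months")
print(f"Implied industrial-era length before the transition: ~{industrial_era_length:.0f} years")
```

With inputs this coarse, the implied transition lands within roughly a century or two of the start of the industrial era, which is the spirit of "sometime in this century" below, and the hypothetical fourth mode doubles the economy every few months rather than every decade or two.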
Robin Hanson's Model of Economic Growth
01:21:34
Speaker
And if you extrapolate that out and say, well, if there's a fourth mode, when will it come and how fast will it go?
01:21:40
Speaker
then you get that some fourth mode would arrive sometime in this century, and it would just be growing unimaginably fast. And the funny thing, and this is all speculation, of course, and fun with models, is that it is not that far off from the sort of scenarios that Holden Karnofsky has painted in his blog writing, where he talks about AI automating all of
01:22:07
Speaker
human R&D essentially, all of science and technology and industrial development, and just getting some insanely fast growth rates. So maybe we're on the cusp of it. Then there's the extinction scenario, which, if you look at it along the way, perhaps looks a bit like the explosion-of-growth scenario, at least as I imagine it. Perhaps AI helps us have very fast economic growth
01:22:32
Speaker
until we get, again speculatively, an AI takeover and we go extinct. How do you think of that scenario? Nothing's guaranteed. I can't prove that it won't happen. Obviously, we all hope that it doesn't. Yeah. OK. Then I have another scenario that I've called "current normal goes on". This is perhaps a bit of a misleading title, but that's on purpose. So this scenario is just that we have this 2% to 3% growth rate continuing
01:23:02
Speaker
for this century. And perhaps we have better technology in all sorts of ways, but the world in 2100 is still recognizable to us now. And we still have problems. We aren't living in some sort of unrecognizably great or unrecognizably bad world.
01:23:21
Speaker
Should this be our default scenario, or should we perhaps think that we are living in a special time with these growth rates? Because historically, the growth rates we're seeing now are not normal if we look at the grand scheme of things. My first prediction would be that the highest-level pattern will continue, and the highest-level pattern is increasing growth rates over time. So I wouldn't predict that 3%, or the slightly less than that we're getting now,
01:23:51
Speaker
would continue in that ballpark forever or for thousands of years. I would predict that the increase in the growth rate will continue and that things will speed up. But again, that is not guaranteed either. The normal-world-continues scenario is maybe the world
01:24:10
Speaker
where we get the Butlerian Jihad and we go to war against the machines, or at least against the AI, and destroy all of it and outlaw it, and then just sort of continue on in our 20th-century industrial mode forever. If that's even possible, I mean, I don't know.
01:24:29
Speaker
What about a scenario in which civilization collapses, but in a slow way? Perhaps we can imagine that growth rates decrease and we slowly lose knowledge, we lose capabilities, and perhaps
01:24:46
Speaker
populations decline all over the world. And then maybe in some thousands of years, we die out from a natural disaster like an asteroid or a volcano or a pandemic or something like this. Yeah, what do you think of that? This would be like taking the great stagnation and extrapolating it, the kind of most worrying extrapolation of the great stagnation.
01:25:13
Speaker
Yeah, I mean, that's possible. Civilizational collapse certainly has happened before.
Slow Collapse Scenario of Economic Growth
01:25:19
Speaker
Stagnation and regression have happened before. I suspect we are less vulnerable to civilizational collapse than we used to be. And I would like to think that that trend will continue and that we will continue to get less vulnerable to collapse. But I haven't studied collapse in enough depth to have a real theory there. All right, Jason, it's been a real pleasure to have you on. Thank you for coming on. Thanks a lot. It's been a fun conversation.