
Ajeya Cotra on Thinking Clearly in a Rapidly Changing World

Future of Life Institute Podcast
Ajeya Cotra joins us to talk about thinking clearly in a rapidly changing world. Learn more about the work of Ajeya and her colleagues: https://www.openphilanthropy.org

Timestamps:
00:00 Introduction
00:44 The default versus the accelerating picture of the future
04:25 The role of AI in accelerating change
06:48 Extrapolating economic growth
08:53 How do we know whether the pace of change is accelerating?
15:07 How can we cope with a rapidly changing world?
18:50 How could the future be utopian?
22:03 Is accelerating technological progress immoral?
25:43 Should we imagine concrete future scenarios?
31:15 How should we act in an accelerating world?
34:41 How Ajeya could be wrong about the future
41:41 What if change accelerates very rapidly?
Transcript

Introduction and Guest Background

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. I'm Gus Docker. On this episode of the podcast, I talk with Ajeya Cotra, who's a senior research analyst at Open Philanthropy. In her role at Open Philanthropy, Ajeya has produced what is perhaps the most in-depth report on AI development.

Impact of AI's Rapid Development

00:00:20
Speaker
AI is advancing rapidly, but how much will it change the world?
00:00:25
Speaker
We discuss how we can know whether the pace of change is accelerating, how to think clearly in a fast-moving world, and how we should respond, both personally and professionally, if we expect the world to change quickly in the future. Here's Ajeya Cotra. Okay. Ajeya, let's talk about fast-moving worlds and how the future could arrive sooner than we might think it would.

Future Picture: Default vs Accelerating

00:00:55
Speaker
So in my terminology, I have categorized your thinking into, on the one hand, the default picture of the future and, on the other hand, the accelerating picture of the future. Could you tell me about this difference?
00:01:14
Speaker
Yeah. I guess the default picture of the future that many people imagine is that, you know, eventually humanity might be a space-faring civilization. We might have cool technology like teleportation tech or, you know, mind uploading technology or, you know, faster than light travel or all these things. You know, if they're physically possible, we might have them.
00:01:40
Speaker
Um, but that this is like not really very relevant to us. And, you know, it might happen in the year 3000 or the year 5000, because, you know, we need to progress really far in science and technology before we get to

Accelerating Growth Rates: What's Next?

00:01:56
Speaker
that state. And we're kind of progressing at some steady rate now. So the accelerating view of history and the future, um, roughly says that growth rates have been accelerating over time
00:02:09
Speaker
since the beginning of human civilization, basically because as humans invent some technology that improves productivity, that kind of increases the carrying capacity of the world, increases the number of humans that can be supported, which means population increases, which means there are more people who could come up with, you know, incremental increases in productivity. So if you map that out,
00:02:38
Speaker
that model suggests that we don't see exponential growth, like X percent growth per year; we see super-exponential growth, where maybe at the beginning of history we were growing at 0.01 percent per year, and then it's 0.02 percent per year. Eventually, it's 0.1 percent per year, then 1 percent per year, then 2 percent per year, and so on.
00:03:02
Speaker
So, you know, a naive extrapolation of that suggests that we reach infinite productivity, or infinite growth, in finite time. Um, and that finite time period is not that far away. It's like 2050 or something.
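To make the shape of that extrapolation concrete, here is a minimal sketch of the feedback loop described above: output supports population, and population generates the ideas that raise output. The equation dp/dt = k * p^2 and every parameter here are illustrative assumptions, not fitted to historical data.

```python
# Toy version of "more people -> more ideas -> more people": hyperbolic
# growth, dp/dt = idea_rate * pop**2, which blows up in finite time.

def years_to_blowup(pop=1.0, idea_rate=0.01, dt=0.1, horizon=500.0):
    """Euler-integrate dp/dt = idea_rate * pop**2 until pop passes a huge cutoff."""
    t = 0.0
    while t < horizon:
        pop += idea_rate * pop**2 * dt  # each person contributes ideas; ideas raise capacity
        t += dt
        if pop > 1e12:                  # numerical stand-in for "infinity"
            return round(t, 1)
    return None                         # no blow-up within the horizon

print(years_to_blowup())  # roughly 100 "years" with these made-up parameters
```

Note the contrast: plain exponential growth (dp/dt proportional to p) never reaches infinity at any finite date; it is the p-squared feedback that produces a finite-time singularity.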

AI: Key Driver of Compressed Progress

00:03:16
Speaker
Now, of course, we don't get infinite growth in finite time, because we hit various physical limits. You know, if nothing else, the speed-of-light limit on trying to travel from Earth to other places that have resources
00:03:30
Speaker
will cap our growth at some point. But the adjusted conclusion would be that there's going to be a period of time before 2050 when growth is radically faster than it is now, before it levels off. And that perspective kind of has the effect of compressing the future. If you thought that a thousand years of
00:03:57
Speaker
human scientific progress would be enough to get us to some crazy post-singularity world where we're a spacefaring civilization and so on, then if you believe we'll invent technology that accelerates the pace of that progress, and then accelerates that in an accelerating way, you kind of expect all the science and progress and research that might've been made over those thousand years to be compressed into five years or 10 years.
00:04:25
Speaker
And the main technology we're talking about here is AI, which is what could compress all of this scientific discovery into a shorter period of time. And so the accelerating world or the accelerating picture of the future is one in which we make all the same discoveries that you would kind of, in the default picture, think that we would make in a thousand or 10,000 years. We make those same discoveries

Transformative AI: Exponential Progress?

00:04:54
Speaker
maybe within 50 years. And then we hit these physical limits. So you're thinking that if we don't destroy ourselves, we will discover and build technology to the edges of what's physically possible. Yeah. And I think that the speed up is even more dramatic than what you described of 50 years. The definition of transformative AI
00:05:19
Speaker
that we use in my timelines report and elsewhere is AI systems that could 10x the pace of progress. And 10xing the pace of progress is not a one-time boost. In particular, they also 10x the pace of AI progress.
00:05:41
Speaker
They 10x the research that would have been necessary to create even more powerful systems that 100x the pace of progress. And those systems are furiously inventing all sorts of things, including even more powerful systems that can 1000x the pace of progress and so on. And there are going to be some limits to that based on some kind of unknown physical
00:06:05
Speaker
or otherwise constraints on how quickly research could possibly be done. But we shouldn't be imagining a one-time boost of 10x, which would mean that something that takes a thousand years would instead take a hundred years. We should instead imagine, as the characteristic pattern, that the doubling time of innovation or growth or whatever is
00:06:33
Speaker
one year, and then it's six months, and then it's three months, and then it's one and a half months, and then it's three weeks, and so on, which is how you get the reaching-infinity-in-finite-time effect of extrapolating that.
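The arithmetic behind that pattern is a geometric series: if each doubling takes half as long as the one before, infinitely many doublings fit into a finite span, twice the first doubling time. A tiny sketch, using the illustrative one-year starting point from above:

```python
first_doubling = 1.0  # years; illustrative starting point
elapsed = 0.0
for n in range(20):
    elapsed += first_doubling * 0.5**n  # 1 year, 6 months, 3 months, ...
    print(f"doubling {n + 1:>2}: output x{2**(n + 1):>8}, elapsed {elapsed:.4f} years")
# elapsed approaches but never exceeds 2 * first_doubling: infinitely many
# doublings fit inside a finite span, the "infinity in finite time" effect.
```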
00:06:48
Speaker
And we should say that there are some papers in kind of mainstream economics where just extrapolating past growth trends reaches this conclusion of infinite growth in the near future. So it's not just you saying this; it is the extrapolation that arises if you take into account past growth rates.
00:07:15
Speaker
Yeah, I don't totally know exactly about the economic literature. My understanding is that this conclusion is reached if we use pretty standard models of how innovation works, like endogenous growth models, and fitting those models to historical data tends to have this conclusion fall out from that. But I think economists in general are pretty skeptical of this overall.
00:07:41
Speaker
partly because it just sounds absurd. And I think partly because they tend to work more with data from the last 100 years as opposed to data from the last 10,000 years. And they tend to be skeptical of the quality of older data, which is totally fair. And so if you just look at the last 100 years, our growth actually looks roughly constant. And it's only when you include the last 10,000 years does it seem clear that
00:08:10
Speaker
our growth rate hasn't been constant over time. And so often they'll take these models, but they'll set the parameters such that you expect constant growth going forward. But in fact, if you wiggled those parameters a little bit, it would suggest either growth accelerating to infinity or stagnation, where like eventually GDP stays constant, isn't growing at all.
00:08:37
Speaker
I think my understanding is economists are much more likely than people in the futurist crowd and people at Open Phil to believe that growth will stay exponential for a long time or else stagnate, as opposed to accelerating.
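As a rough illustration of that parameter sensitivity, here is a sketch using a standard semi-endogenous ideas-production function, dA/dt = A^phi * L (a Jones-style form; the constants, horizon, and divergence cutoff are invented for illustration, and this is not presented as the specific model the economists in question use). Nudging phi slightly below, at, or above 1 flips the long-run behavior between decaying growth, steady exponential growth, and finite-time blow-up.

```python
def terminal_growth(phi, A=1.0, L=0.02, years=1000, dt=0.1):
    """Return the final annual growth rate of A, or None if A diverges first."""
    t = 0.0
    while t < years:
        dA = (A**phi) * L * dt   # ideas produced scale with A**phi
        if A + dA > 1e12:        # treat crossing this cutoff as divergence
            return None
        A += dA
        t += dt
    return (A**phi) * L / A      # instantaneous growth rate at the end

for phi in (0.9, 1.0, 1.1):
    g = terminal_growth(phi)
    print(phi, "diverged (finite-time blow-up)" if g is None
          else f"ends growing at {g:.4f}/year")
```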
00:08:53
Speaker
Okay, what I would like us to do is to talk about from the personal epistemological and psychological position, how do we understand which world we're closer to?

Signs of Acceleration: Fact or Fiction?

00:09:05
Speaker
How do we find out whether we're closer to the default world or the accelerating world?
00:09:10
Speaker
I'll just start with my own experience, where I'm following the news about artificial intelligence, and it seems like every month or even every week a new amazing paper is coming out: now we have this capability, we have corrected the mistakes, we have taken into account what the critics were saying about the paper we published last month.
00:09:39
Speaker
And now we've corrected those mistakes. So how do we handle this kind of flood of information about progress in AI in a rational way? Because it seems like we could either get overhyped or we could get very panicked about progress. And we want to avoid both of those, I think.
00:10:01
Speaker
Yeah, so in terms of the question of how do we tell which world we're in, I'm not sure that AI progress as we kind of subjectively process it on the scale of weeks or months is hugely informative for that. I think that in the longer or medium term, I would be looking for: has the US economy
00:10:26
Speaker
picked up? Is it growing slightly faster at all, and particularly, can that be attributed to AI? And I don't think that's really happened so far. AI systems have been very cool, and the research progress has been fast. But it hasn't really translated into a very large industry in terms of dollars per year, and it hasn't added much to world GDP yet. I think it probably will start
00:10:55
Speaker
adding. And, you know, if my views on timelines are right, I expect that 10 years from now, we'll have economists saying, oh, you know, this was surprisingly fast growth. I don't think we'll have economists saying, you know, now it's the singularity.
00:11:13
Speaker
Like we believe that there's going to be infinite growth in finite time, but more like, oh, we had really modeled growth as being two and a half percent, but actually it's more like three or three and a half percent. And it's been that way for a few years, so it's not a fluke. It's not just recovery from something.
00:11:30
Speaker
Um, so that kind of sign feels like a more durable sign of whether we're in the accelerating world, as well as observing specifically how useful AI is to scientists doing their job, um, how much people in innovation fields specifically report that it accelerates them. Um, and looking at, you know,
00:11:57
Speaker
In 10 years, can Google do the same amount of work that it did in 2022 with half the workforce? In 10 years, is there a factory that is similarly productive with half the workforce? So those kinds of big-picture macro trends are what I'd be looking for. And I don't know if we'll have super strong evidence of that in the next couple of years.
00:12:22
Speaker
Um, in terms of handling the flood of information about AI in a healthy way, I don't know, I find it quite stressful too. Um, I think it can create a sense of chaos and a sense of, you know, you have to run as fast as you can to stay in the same place, or, you know, a sense of trying to outrun something much larger than you. Um, and that can definitely be very stressful, and I'm not sure I have
00:12:53
Speaker
great coping strategies I can offer people right now. Okay. So if we're thinking about AI progress and you're saying that we haven't seen massive changes in economic growth as a result of AI, what about the valuations of companies that are heavily invested in AI?
00:13:17
Speaker
Could they point us in the direction? Could we use the market to predict whether AI will be a big deal very soon? Because then it seems like maybe companies such as Google or Meta or Microsoft should have even higher valuations if the market as a whole believed that we are approaching very fast progress in AI that will change growth rates.
00:13:43
Speaker
Yeah, I think that looking at company valuations and how they've changed is a pretty good idea. I haven't done that too much yet. At least, I'm not
00:13:57
Speaker
very well-versed in finance, but my understanding is that it's at least somewhat non-obvious how you translate a market cap into an expectation of revenue in the future. Because you could maybe expect a modest amount of revenue but for a very long time, or a high revenue for a shorter period of time, or something like that. Um, and then also some companies are not as purely betting on AI. I think that all the big tech companies are kind of doing a mix of AI and other
00:14:26
Speaker
more prosaic things. But it broadly does seem like these companies that have exposure to AI have gone up over the last five years and probably will continue to go up:

Utopian Futures: Health and Beyond

00:14:37
Speaker
companies like Nvidia, various startups building AI, like OpenAI, Anthropic, Adept, you know, a bunch of other places like that. And the kind of larger tech companies as well; probably
00:14:53
Speaker
their valuations will continue to climb, and that will at least partly be an indication of people expecting AI-driven profit.
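To illustrate why a market cap alone underdetermines the timing story, here is a toy discounted-cash-flow comparison. The discount rate and the revenue paths are invented for illustration:

```python
def present_value(revenues, discount=0.08):
    """Sum the yearly revenues discounted back to today."""
    return sum(r / (1 + discount)**t for t, r in enumerate(revenues, start=1))

modest_but_long = [10] * 40  # e.g. $10B/year for 40 years
high_but_short = [25] * 6    # e.g. $25B/year for 6 years
print(round(present_value(modest_but_long)))  # ~119
print(round(present_value(high_but_short)))   # ~116
# Nearly the same present value, so the same valuation is consistent with
# either a modest-but-durable or a large-but-brief AI revenue story.
```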
00:15:08
Speaker
But I think in general, it's pretty safe to say that the world is just not at all prepared for being transformed by artificial intelligence anytime soon. People are making plans that rely on the world not changing that much and not moving that fast. So it seems like if we get this world of very fast growth driven by AI,
00:15:33
Speaker
then a lot of people will have their plans disrupted in quite uncomfortable ways, perhaps. And you could imagine the societal chaos or disruption arising from very fast AI progress. I'm thinking not only of unemployment issues, but also
00:15:59
Speaker
maybe a sense of a loss of meaning or loss of control for humans. Again, I'm not proposing that you be our psychologist on this podcast. But I know you've spent a lot of time thinking about this. Maybe you have something to help us deal with this world.
00:16:19
Speaker
Yeah. Um, so I have found it personally very mentally challenging to imagine this. I have a feeling of frenzy and panic when I picture what my life might be like in 20 years, as AI systems are, you know, making everything go crazy: faster growth, a
00:16:42
Speaker
very intense pressure cooker environment, especially for somebody that wants to be close to AI and wants to help with it. And that is very stressful for me. And it's been pretty hard on my mental health for the last couple of years. I think one of the things that helps me feel better about it is to remind myself that
00:17:08
Speaker
In some ways, all of this chaos and craziness is a consequence of there being a hope for a much better future. Most people imagine a world where humans exist. Every human has a kind of short life involving some joy and some suffering. Then they die. And humanity as a whole will probably die off in a few thousand years or a few tens of thousands of years.
00:17:39
Speaker
But what I'm picturing is something just much higher stakes and weirder and crazier, where there's a real shot at something grand and beautiful that is much more than I expected was possible for sentient beings to have. There's a possibility of creating a truly utopian future with technology.
00:18:08
Speaker
And it's sort of, as a result of that possibility being real, that there's going to be this period of, you know, chaos and stress and risk. And we should try and reduce that risk, but I try and have a perspective. I don't always succeed, but I think that it's helpful to frame it as this whole package, including the crazy chaos and scariness
00:18:38
Speaker
and substantial risk is a lot brighter of a future than I thought we were going to get before I thought about all this.

Ethics of Technological Progress

00:18:51
Speaker
Maybe you could elaborate on what you mean by this utopian future. What is it that we could
00:18:58
Speaker
arrive at if we succeed in surviving this very high stakes situation of a fast moving world with very powerful AI? Yeah, I mean, at a sort of baseline, I think
00:19:14
Speaker
all of the good things you might have expected from science and technology, like extending healthspan and curing cancer and eliminating infectious diseases; making everything cheaper, making energy and food so cheap that everyone in the world has access to
00:19:39
Speaker
you know, lots of great food, inventing really convincing meat alternatives or lab meats so that we no longer do factory farming. Just in general, technology has been good overall in the history of humanity for empowering humans, reducing poverty and suffering, giving people choices. And I think this is more controversial, but I think it also has
00:20:05
Speaker
some kind of causal effect on improving values. I think people who are more comfortable and secure and have more of what they need are often able to kind of more afford the luxury goods of caring for people who are very far away and wanting to do altruistic, pro-social things. Whereas if you're in a very desperate situation,
00:20:30
Speaker
you kind of have less room for that in your mind. So I think both materially and morally, technology has been very good. And I think if we make it past this hurdle, we'll have all those benefits much more quickly than I thought we would in the past.
00:20:53
Speaker
And then I think it gets crazier from there, if you imagine one of those technologies being the ability to upload people's brains onto computers, so mind uploads, or digital people. Uh, that seems like it's something that's physically possible. It seems very hard. It seems like we're kind of nowhere near getting to that point if you extrapolate the progress of modern neuroscience. But if you have AI systems that have done the equivalent of many hundreds of generations of
00:21:23
Speaker
neuroscience research, then that might be possible very soon. And then if people are uploaded onto computers, and we have kind of good rule of law around how they can be treated, then there are kind of limitless possibilities. You're not necessarily restricted to physics as we know it. You can just have, like,
00:21:52
Speaker
you can choose exactly how you look and you could fly or whatever you wanted. And I think that's kind of crazy.
00:22:04
Speaker
Is there an argument to be made about creating this high-stakes situation for the world? And I'm not blaming any particular company or individual for this; it's just the overall state of competition between technology companies that's creating this potentially high-stakes situation.
00:22:22
Speaker
Is that in some sense immoral? If you imagine a person who began thinking that technological progress was a bit too fast in the 1800s, and they joined the Amish, this person's children have now kind of stepped outside of technological progress. Is it important that such people be allowed to live their lives in what we could call the default world?
00:22:55
Speaker
There's a sense in which it's not possible to get off the ride of technological progress if you're in this world. People who want to get off the ride are left without any form of agency, because progress is being driven by this kind of system of companies and people competing with each other, interacting with each other, and driving a very fast pace of change.
00:23:22
Speaker
So to answer the first question, I don't think that it's, on net, immoral to participate in technological progress. I think right now, AI progress in particular, I would prefer that it went slower. But I think this is a pretty complicated argument. And I don't think it's helpful to frame people that disagree with me about a complicated argument, or haven't thought about a complicated argument, as doing something immoral.
00:23:53
Speaker
The second question of should we or could we have people choose to kind of opt out on a personal level? In terms of technological possibility, I think it will be more possible than it was in the past to create a sort of like Amish earth or leave behind physical normal earth and have digital people live elsewhere and so on.
00:24:20
Speaker
In terms of political negotiation around that, that's much more complicated. But it seems pretty plausible to me that a lot of people will have demand for getting to choose and meter how crazy their life is. And I think it will be less fringe of a position than the Amish position is today, because it'll be much more
00:24:42
Speaker
scary and totally transformative to embrace all these technological changes right away. So I think that we will probably end up observing degrees of adoption of these technologies based on what people feel comfortable with. And there'll probably be a number of people that don't want to become a digital person, don't want to be
00:25:09
Speaker
very long-lived, even as a biological person, and want to have a life that's more like, you know, a particularly wealthy person in a wealthy country today, that's pretty normal. And it would be possible to make that happen for them.
00:25:28
Speaker
Of course, it's difficult to speculate about these things, but I could also easily imagine a movement of people who are buying a house in the countryside, living more traditionally, and opting out of technological progress as far as that's possible.
00:25:44
Speaker
We're trying now to imagine a very concrete future. So when we're trying to imagine how the future might be, should we be more concrete, or should we be more vague?
00:25:59
Speaker
It seems that if we are vague, in the sense that, for example, when we talked about economic growth rates, you could leave it unspecified what would be causing this massive rise in growth, then it's statistically more likely that you're right, because you haven't specified an exact scenario.
00:26:17
Speaker
On the other hand, it seems more useful if you then specify that, oh, it's artificial intelligence that's driving this rise in growth. So how do you think about concreteness versus vagueness in imagining possible futures? Yeah, I think I like to have
00:26:38
Speaker
a spectrum of these things explored, and kind of go back and forth between them. I wouldn't want to frame it as vague; I think truly vague speculation about the future, just kind of being like, oh, the future will probably be crazy, is generally not that useful. But I would say maybe model-driven. So if you were to draw a chart of economic growth
00:27:07
Speaker
throughout history and extrapolate that forward and say, somehow, our default guess should be that growth will look like this extrapolation, that is not vague in the sense that you have something to work with, but it is abstract in the sense that you're not telling a story. And I think we should generally be expecting and aiming for
00:27:32
Speaker
these different lines of evidence and thought to be pointing in the same direction, and be curious when they're not, and try to reconcile them, and think about: can I bring in some other perspective that tells me which of these perspectives is wrong? But yeah, I think I would not feel very good if I had no

Concrete vs Abstract Future Predictions

00:27:55
Speaker
models that suggested this was possible in a broad sense
00:28:01
Speaker
And I also wouldn't be feeling very good if I couldn't imagine any particular way this could happen. And I think it's kind of like these different things are useful for different purposes. I think the more abstract models are probably more useful for having general beliefs than for making forecasts about the long-term future. And I think more concrete stories are often more useful for
00:28:31
Speaker
making particular plans and for sanity checking models because you often can't really act on something that's like the world might go crazy and growth might be really fast in the next 30 years.
00:28:47
Speaker
What I had in mind when asking this question was books such as the economist Robin Hanson's The Age of Em or Ray Kurzweil's The Singularity Is Near, where they paint very precise pictures that can then also be very precisely wrong.
00:29:05
Speaker
But is it worth spending more time on projects like that? Or is it better to spend time on, you could call it, model thinking, or more broad thinking? I mean, I think that both are pretty underdone right now. This particular level of specificity, of The Age of Em and The Singularity Is Near, I think
00:29:36
Speaker
I am somewhat more skeptical of that sort of thing. Which is to say, they're not very qualified; they kind of act as if all these things are obvious. The Age of Em has a bunch of claims about things like, you know, how quickly do factories reproduce themselves? And he makes a statement that this is one month, but there's no citation or analysis there.
00:30:03
Speaker
And it's covering, like, every single aspect of society. It's trying to tell a concrete story about, you know, everything. And it's not particularly careful about what's more uncertain and what the different possibilities are. And it's not particularly careful about analyzing some of these quantitative parameters that go into the story. And that, I think, is somewhat less useful.
00:30:34
Speaker
Concrete stories are often more useful when there's one kind of high level thing that you think is probably going to happen. Like misaligned AI systems pose a significant danger and you want to kind of lay out a way or ideally multiple ways that that thing could happen. It feels somewhat more useful to me. And often you have an intuition that that kind of thing will happen because of a pretty abstract argument.
00:31:03
Speaker
And this is more kind of lending support to the abstract argument, and trying to see if making it more concrete reveals that it's flawed or not right in some way. If we discover that we are in this accelerating world, how should we change our plans? What should we do differently? Personally or professionally? I'm thinking about both. Yeah, I mean, professionally, I think more people should seriously consider
00:31:34
Speaker
learning about these technologies and seeing if they can be helpful in some way with this transition, especially people who are already committed to using their careers to do something good for the world. It seems like a neglected opportunity to do something good for the world. And you can have an outsized influence if you are relatively early to being concerned about it. And you believe that in the future, lots more people will be concerned. Personally, I think
00:32:04
Speaker
it's more complicated. I think people are relatively risk-averse. So, you know, I know some people who aren't saving for retirement because they expect that, you know, very powerful AI will be built pretty soon, and they're not planning to be 70. I think that
00:32:30
Speaker
for most people, even if you have a pretty high probability on that, even something like a 20% probability on this being wrong and you actually living to 70 in a normal world and like wishing you would save for retirement probably motivates some amount of saving for retirement. So I think you probably have to have
00:32:52
Speaker
either unusual confidence or unusual risk neutrality in order to sort of totally change around your financial planning based on this. I know some people who are investing in AI companies because they believe that AI is undervalued based on their views about how soon that it's likely to have transformative impacts. And that would be one way to attempt to beat the market
00:33:22
Speaker
uh, make more money, but it's definitely very volatile. And, you know, just recently tech crashed a bunch. Um, and so, you know, it kind of depends on your personality and your financial circumstances whether it really has any implications on that front. Otherwise, I think it's just a big thing to grapple with and think about, in terms of whatever amount of your time you spend thinking about
00:33:53
Speaker
like life and the nature of the world. It's just a pretty big thing that is, I think, probably going to happen. And it's pretty scary and wild. And I think it may have subtle, idiosyncratic effects on how people conduct their lives, in the same way that,
00:34:19
Speaker
you know, as the cliché goes, a cancer diagnosis does for some people. It might be a wake-up call to prioritize differently, whatever that means for them. But yeah, that's going to be very person-specific, and, you know, for a lot of people, I think it shouldn't necessarily change too much in their personal lives.
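The earlier point about retirement saving under a 20% chance of a "normal world" can be made concrete with a toy expected-utility calculation. Log utility, the two-branch setup, and the 20% figure are all assumptions for illustration, not a model anyone in the conversation endorses:

```python
import math

def expected_log_utility(save_fraction, p_normal=0.2):
    """Log utility of consuming now, plus the probability-weighted log
    utility of retirement consumption if the normal world continues."""
    eps = 1e-9  # avoid log(0) at the grid endpoints
    return math.log(1 - save_fraction + eps) + p_normal * math.log(save_fraction + eps)

# search a simple grid of savings fractions
best = max((s / 100 for s in range(100)), key=expected_log_utility)
print(best)  # ~0.17: even with only 20% on the normal world, saving nothing is not optimal
```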
00:34:42
Speaker
Let's say that we meet again in 2070, and it's the case that we now have self-driving cars, better virtual reality technology, and better medicine, but nothing radical has happened. It turned out we were more in the default world than the accelerating world.
00:35:07
Speaker
What would be your best guess as to how your picture of the world was wrong in that scenario? Yeah, I think it would probably be something like: we were wrong about how useful ML systems would be. And it turns out that it was too hard to train them to do truly autonomous, powerful, economically valuable tasks. Maybe they were too brittle, and we could never get their rates of dumb errors down
00:35:36
Speaker
uh, enough. Or maybe automating that kind of task would have been possible but required far more computation than we actually had available, or far more data than we had available. And then, as a result of AI progress kind of stalling out, um, investment and research and stuff stall out too. And there's kind of another AI winter where people aren't really thinking much about AI. Um,
00:36:03
Speaker
And it's not a particularly fast-paced or lucrative field to be in. And then that lasts quite a while. It seems plausible to me that that's how things will end up. I definitely don't have, you know, strong confidence that that won't be the case. I think I have at least a 20% chance that that happens, maybe more like a 30% chance.
00:36:31
Speaker
There is a certain danger to extrapolating trends. We can imagine the epistemic situation of a person looking at birth rates in the late 1950s, seeing them spike, and thinking about whether the world will be overpopulated in the year 2000 or 2050.
00:36:52
Speaker
Or we could imagine, after the moon landing, thinking that in the year 2000 we will definitely have a moon colony, and maybe in 2050 we will have a Mars colony, and so on. What's the risk that this could be happening in your extrapolations? That you're basing your beliefs on certain trends moving
00:37:14
Speaker
in a particular way, but these trends, because of various natural physical limits, slope off, and what looked to be exponential turned out to be more like an S-curve. I agree that there's only so much you can get from extrapolating trends. It's a very
00:37:39
Speaker
you know, low-information thing in the end, even though I think it's often the best we have. I think it's kind of tough for me to answer specifically what the chance is that that's off. There are a bunch of examples of times when trend extrapolation leads people astray, but I nonetheless think that
00:38:02
Speaker
as a strategy, it would have done better than like many other strategies, especially ones that are kind of driven by people's intuitions about what's absurd or like what's possible. Um, and so I think it's kind of like, you know, it's an inherently difficult thing to try and estimate the

AI's Impact on Planning and Coping Strategies

00:38:20
Speaker
future. We're probably very wrong by default. Uh, and so, you know,
00:38:26
Speaker
probably what I have in mind is wrong. But I don't really think that I have a particular reason to deviate in a particular way from trend extrapolation. Because I think trend extrapolation can also underestimate progress, and it's not only the case that trend extrapolation overestimates progress.
00:38:46
Speaker
it definitely goes both ways. So the trends I just mentioned were specifically cherry-picked by me to show this phenomenon of the sloping-off effect. And I could have picked economic growth, which so far has continued to accelerate. But I'm thinking, like,
00:39:10
Speaker
If we return to the current models, I know you said that we shouldn't put too much emphasis on the language and image models that we're seeing right now. And I agree with that. But given that we haven't seen massive economic impact yet, the current set of models seems to me to be the way or the place to begin speculating about what will happen.
00:39:34
Speaker
I'm thinking that these models are the place from which we begin thinking about which models we will have in the future that will have significant economic impact. So shouldn't we put at least a decent emphasis on the current models? Yeah, so specifically the thing I said was that
00:39:56
Speaker
I don't think looking at week to week or month to month progress in existing models gives a huge amount of evidence of whether we're in the world where we're about to cap out and reach an AI winter or we're in the accelerating world. I do think thinking about existing models is very important for other reasons. And, you know, my report on
00:40:22
Speaker
the plausibility of AI takeover relies on kind of imagining a path to transformative AI that's a very close descendant of existing models. And I think a lot about existing models to kind of try and have one concrete picture to test my theories against. And I also think it's actually pretty plausible that
00:40:50
Speaker
transformative AI arrives in just several years from now. And in that case, I think it is plausible that it looks very similar to existing systems. And you could argue that people who want to influence the future of AI have the most leverage on these unusually short timelines, which I think have less than 50% probability because
00:41:15
Speaker
They might be unusually risky, so it might be unusually important to reduce risk by some amount relative to the total. And they might be easiest to intervene on specifically because we can get further by imagining these systems being very similar to existing systems. So our research is better targeted to them versus something that arrives in 30 years.
00:41:41
Speaker
If we're in a very, very fast-moving world where transformative AI arrives, say, within 10 years, how is what we should do different from what we should do if it's more likely to arrive within 30 years or, say, 100 years? So what I'm thinking about here, this is a very difficult question to answer, of course, but how should we change our priorities based on our best guess about how fast-moving the world we're in actually is?
00:42:10
Speaker
Yeah. So I think within AI safety in particular, um, and all the activities that are focused on AI directly, it actually makes a surprisingly small difference to prioritization. Uh, mainly because if we think that AI is going to arrive in 30 years, we're kind of stuck just imagining it being pretty similar to today's systems, because it's probably going to be different, but we don't really know
00:42:39
Speaker
how it's going to be different. And so if our probability on 30 years versus 10 years changes, I think that the tools we use for thinking about things are not going to change as much. And then the other reason is
00:42:57
Speaker
it's probably productive to just work on the next problem you might face first. So, you know, even if you had a 5% chance that it was in 10 years or a 10% chance or a 15% chance, as long as it's a significant chance, I think it's like productive in terms of feedback loops to focus on those worlds. And then if you're wrong, focus on the next 10 years. And then if you're wrong, focus on the next 10 years, because you're more likely to be able to do productive work
00:43:26
Speaker
close to when transformative AI is developed, because you have a better picture of everything at that time. In terms of the balance between AI work and other work, I think there's more of a change there. Where, if our timelines are longer, we probably want to shift more resources toward community building and saving, versus
00:43:51
Speaker
spending our resources on AI right now. I think that's more relevant for someone in a philanthropist position kind of trying to decide how to allocate fungible resources than it is from the perspective of someone who's deciding on their career. Where I think from the perspective of someone who's deciding on their career, for a pretty wide range of timelines fluctuations, you probably want to decide whether you're going to be doing AI or not AI based on your
00:44:19
Speaker
personal interests and skills, as opposed to these kinds of considerations, because it's actually fairly rare for somebody to have close enough fit for these two things that the difference is made by thinking about timelines. Ajeya, thank you for doing this interview with me. It was very informative. Thank you.