
Katja Grace on the Largest Survey of AI Researchers

Future of Life Institute Podcast
Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/.

Timestamps:
0:20 AI Impacts surveys
18:11 What AI will look like in 20 years
22:43 Experts’ extinction risk predictions
29:35 Opinions on slowing down AI development
31:25 AI “arms races”
34:00 AI risk areas with the most agreement
40:41 Do “high hopes and dire concerns” go hand-in-hand?
42:00 Intelligence explosions
45:37 Discontinuous progress
49:43 Impacts of AI crossing the human-level intelligence threshold
59:39 What does AI learn from human culture?
1:02:59 AI scaling
1:05:04 What should we do?
Transcript

Introduction and AI Survey Overview

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Katja Grace from AI Impacts. Hey Katja. Hey. Glad to have you on. We are talking about a number of surveys that AI Impacts has done. So maybe you could tell us about the most recent survey and maybe also the past surveys.
00:00:22
Speaker
The most recent survey was the biggest one yet, and it's very similar to the last two. They were in 2016, 2022, and 2023. We started out just writing to everyone at NeurIPS and ICML, and this time we expanded it to six top venues. What are those venues?
00:00:41
Speaker
NeurIPS and ICML are particularly machine learning oriented. These are all AI venues. This time we expanded it to ones that are less machine learning related, in order to be sure that we're getting what the AI researchers think, not just what the machine learning researchers think. Fantastic. And so you mentioned that the 2023 survey is the biggest one yet. What does that mean?

Impact of ChatGPT on AI Predictions

00:01:04
Speaker
How comprehensive is that survey?
00:01:06
Speaker
Well, we had nearly 3,000 participants. We had quite a lot of questions, so it would take quite a long time to answer all of them. So for a lot of the particularly less important ones, we randomise for each person which of a set of questions they receive, partly to fit in more questions and partly to try out different framings of roughly the same question. Because I guess we thought that there might be framing effects. And in fact, there were quite notable framing effects from how questions were asked. Fantastic.
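To illustrate the randomized-framing design described above, here is a minimal Python sketch; the question wordings, seed, and group sizes are hypothetical illustrations, not taken from the actual survey instrument.

```python
import random

# Invented example framings of "roughly the same question".
FRAMINGS = [
    "By what year do you expect a 50% chance of HLMI?",
    "What probability do you give to HLMI within 40 years?",
]

def assign_framing(respondent_id: int, seed: int = 42) -> str:
    """Pick one framing per respondent, reproducibly."""
    rng = random.Random(seed * 100_003 + respondent_id)
    return rng.choice(FRAMINGS)

# Roughly half of 3,000 respondents end up with each framing,
# so framings can be compared across comparable subsamples.
counts = {f: 0 for f in FRAMINGS}
for rid in range(3000):
    counts[assign_framing(rid)] += 1
print(counts)
```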
00:01:35
Speaker
I want to run through some of the top line results here and we can discuss them and what they mean. So starting with the fact that the estimated time to human-level AI dropped anywhere from one to five decades from the 2022 survey. So this is a pretty massive and pretty fast drop in one year; I guess it's around one year between these surveys. Why do you think the estimates from the experts dropped that much in such a short amount of time?
00:02:05
Speaker
The most salient answer is ChatGPT: these kinds of things happened and there was a whole lot of public attention. I think I'm actually surprised that it had that big an effect, because it was unclear to me whether this would move the AI researchers' views about things, especially since between the 2016 survey and the 2022 survey, the time to HLMI, which is roughly AI doing all human tasks, dropped
00:02:32
Speaker
or changed by like a year. So these things are not, by default, just moving around all over the place. So I think this was a really notable drop. Yeah, and it's interesting because you would expect experts to be familiar with, or in the 2022 survey, of course, they're familiar with the state of large language

AI Forecasting Challenges

00:02:50
Speaker
models. So why do you think the experts were surprised by ChatGPT, basically?
00:02:57
Speaker
Apparently, I guess, I think one thing it could be is like, it's a big field where people do lots of different things, maybe they don't know what one another are doing necessarily. Or it could also be a kind of like, what does it feel like other people think? Like if you're sort of updating on what everyone else thinks is going to happen, which makes sense, then if there's more of a public reckoning about
00:03:21
Speaker
what's happening. Maybe people look at each other and are like, oh, this is happening more. I guess I think we haven't yet talked about the chance of human extinction questions. But I think it's interesting to compare how much those numbers didn't change since previous surveys. Because you might think that the thing that AI researchers would more already have a
00:03:42
Speaker
confident view about is how things are going in AI, relative to what the social consequences will be. Whereas it seems like the what's-going-on-in-AI questions saw quite a shift, while the is-it-going-to-kill-all-of-us questions didn't so much, in spite of, I think, both of them seeing quite a lot of public discussion
00:04:02
Speaker
Yeah, over 2023. In the 2023 survey, on the questions around human-level intelligence, the result is that the experts expect human-level intelligence around 2050 and full automation of labor around 2100. Why the discrepancy? Is there a discrepancy there? Are these questions about the same thing?
00:04:25
Speaker
It seems to me, as I understand the questions, quite a big discrepancy, in that most questions, to be clear, are asking about when these things will be feasible, not when they'll happen. So it's not about when all the jobs will actually be automated. It's just, for any particular job, could it be automated?
00:04:46
Speaker
And we're not asking about one machine doing this; it's just any combination of AI being able to do any particular thing. So I guess I would think of an occupation as either a very big task, so then it would be included under all tasks, or a collection of tasks, so it'd still be included under all tasks probably. Or you might say there's something else to putting all the tasks together that you don't want to call a task, but I still would have thought that once you have all of the components, you're not like
00:05:14
Speaker
50 years away from having the combination of them. So on my read, I'm like, wow, logically occupations should happen before all tasks, and so this is a framing effect. Some of my colleagues disagree on that and think of an occupation as consisting of things that are not included in tasks. So I think you could say something like that, but I'm not going to do a very good steelman of it here. What does it mean that it's a framing effect?
00:05:42
Speaker
Sort of, the people are maybe understanding it even as the same question, but something about, like, when you get asked the question, it's not like there's a slot in your mind that has the answer to every different way that question could be referred to. The way the question is posed causes different things to come to mind, or causes you to calculate it in a different way or something. So maybe when you think of all tasks, you sort of
00:06:06
Speaker
think of some tasks, and the tasks you think of are, I don't know, cleaning the floor or writing an essay or something. You don't think of a task like take over Europe in a war or something, because that's really out there. Whereas if you think of occupations, then you do think of
00:06:24
Speaker
things that are central examples of occupations that are big, complicated things. And maybe you still don't think of things that humans have done that are most wild and surprising as their jobs, but you're somewhere closer to it.
00:06:38
Speaker
If there's such a large discrepancy between expert predictions on questions that are roughly similar, what does that mean? Does it mean that the people asked in this survey aren't really experts in forecasting AI? Maybe they are domain-level experts in some narrow domain, but are they actually experts in forecasting AI?

Uncertainty in AI Timelines and Capabilities

00:07:03
Speaker
I think they're very unlikely to be experts in forecasting AI. I do think that there is expertise you can have in forecasting, and they could probably be better at it. But I guess I think that even for people who are experts in forecasting, these are quite hard questions.
00:07:20
Speaker
When will, broadly, this industry that we haven't seen yet affect everything in the world? It's quite a tricky question. I don't expect them to do well at it, but I think that humanity needs to answer these kinds of questions in order to make decisions. I think that the best we have is a mixture of asking people who do know something about AI and asking people who know about forecasting.
00:07:44
Speaker
Even so, you know, I think even predicting what will happen with AI in five years is not really their area of expertise; it's just related to their area of expertise. And so I also expect them to not do amazingly at that, but they're probably able to do better at that than at predicting what will happen in 100 years.
00:07:59
Speaker
So we mentioned these two predictions, human level machine intelligence at maybe around 2050, full automation of labor at maybe around 2100. But we also mentioned that these estimates dropped a bunch from the 2022 to the 2023 survey.
00:08:16
Speaker
do we then extrapolate out and say in the 2024 or 2025 survey, they will drop another three decades or something? Can we extrapolate a trend of expert opinion and then maybe just update our beliefs around when we will have AGI based on that?
00:08:36
Speaker
On the gap between them: I think there's some evidence, from when we ran questions by people and tested them, that they were sometimes misreading, and even if it explicitly says that it includes all tasks or something, they think of the occupation one as including robotics and the task one as not, or they think of
00:08:58
Speaker
the occupations one as including implementing it in the world and the other one as not. Maybe, I don't know, that was one instance of seeing these kinds of errors in a small group of people or something like that. Where the occupation one is the full automation of labor? Yes. Yeah. And the task one is the human-level machine intelligence.
00:09:21
Speaker
I don't know that I expect them to always be dropping a lot. I probably more expect them
00:09:29
Speaker
to drop than to go up. But I guess I haven't seen enough of a trend to really predict that; it's difficult to extrapolate a trend from three data points. But still, I mean, it's in the vibe that these estimates are going down. But of course, we should remain kind of sober with the evidence.
00:09:47
Speaker
Yeah, I think people often think that in the long term, people have been extremely overconfident about AI, and that their predictions have had to sort of go up over time. I don't know that that's right. I think they've been less overconfident than people think, if you actually dig up all the predictions and look at them. But still, I think we've seen people's predictions getting longer as well as shorter outside of these three surveys. So yeah.
00:10:17
Speaker
But you'd be pretty surprised if the expert timelines go up in the next survey, right? I think I wouldn't be super surprised if they went up at all. Yeah, I think if they went up 10 years, I'd be pretty surprised. Probably more surprised than I was here with them going down 10 years.
00:10:31
Speaker
Yeah. Okay. So in the survey, you also asked about a bunch of tasks and when these tasks would be automatable. And for most of these tasks, the estimates for when they'd be automatable decreased. So I have a list here. Within five years, experts predict that an AI system can make a payment processing site from scratch,
00:10:55
Speaker
that they can autonomously download and fine-tune a large language model and that they can generate a new song that sounds like Taylor Swift. That's within five years. I think if this comes true, the world isn't ready for it. Do you agree with these predictions personally? And yeah, what do you think of this?
00:11:14
Speaker
My views on these predictions are pretty weak. I feel like I don't know very well what it takes to fully make a payment processing site from scratch. I think it does seem pretty plausible to me, and this is the kind of thing where I think the people answering these questions probably know much better than me what it's like to at least do coding-type things, which is closer to the question, and they know much more about the
00:11:41
Speaker
systems involved. So I don't expect there's much reason for me to be more right than them. I agree that the world is not ready for that at all.
00:11:49
Speaker
If we talk about the further out predictions, the further out in time that is, experts predict that in 17 years, AI and robotics in this case will be able to physically install the electrical wiring in a new home. They'll be able to in 19 years research and write a high quality machine learning paper. And in 22 years prove mathematical theorems that are publishable in top mathematics journals today.
00:12:16
Speaker
Those are pretty intense predictions. And 22 years is not that far away. I wonder if you think experts are better at predictions in the near term than

AI Automation and Task Prioritization

00:12:29
Speaker
in the long term. I mean, I think everyone is.
00:12:31
Speaker
So yeah, I think I would trust those ones less, but I don't think that tells us which direction to not trust them in. It seems quite possible these things will happen in less than that amount of time. I think especially for things where if a thing seems like it would be very socially impactful, I think that's kind of a heuristic against thinking it's going to happen.
00:12:50
Speaker
just because it would be such a big deal if it happened, and things that are a big deal don't happen that much, or something. I'm not quite sure why it's a heuristic, but I feel like it is; it's sort of related to the absurdity heuristic. I mean, I think it's sort of like, well, people are claiming that AI might destroy the world, and there's some sense of, well, things don't destroy the world very often, but there's also, I don't know, you're claiming this is the biggest deal ever.
00:13:18
Speaker
It's almost by definition that big things don't happen that often, yeah, but there's something to it. When you looked at these tasks, these specific milestones, was there anything about the ordering of predictions about when these tasks would be achieved that surprised you? So for example, did the order make sense? Why would an AI be able to write a machine learning paper three years before a mathematics paper, for example?
00:13:45
Speaker
I think I don't know a lot of details about what doing really advanced math research or AI research looks like. From a distance, from what I hear, it sounds like math is maybe more mysterious in how it happens or something. I don't know if that's a reason to think that machine learning would be worse at it.
00:14:05
Speaker
If doing machine learning research is sort of easier to break into parts and know what you're doing or something, I might expect it to be easier to do earlier, but yeah, I'm really speculating here. I think the cool things being generally later was a trend, if I remember right, which sort of makes sense. I don't know much about why, but it does seem like stuff that you can just do on a computer is generally easier.
00:14:29
Speaker
What about AI experts' predictions about their own work? Did you find a bias towards saying that AI research is so difficult that it will be automated as the very last thing?
00:14:42
Speaker
It's hard for me to know if that's a bias. I think I'd be more inclined to say they're more likely to be right about that one than anything else, because they do actually know what the job entails. I think also, when they say that it will be last, or toward the end, I think there's some
00:15:00
Speaker
argument for that from like, well, if AI research was automated, if you expected AI to just go very fast, then it's like, there are not many things that would come long after AI research. So you might expect that many things come very shortly after AI research. Because if AI research was automated, then we would expect research to progress so fast that everything could be automated within a kind of short timeframe thereafter.
00:15:26
Speaker
I don't know if you should expect that, but often people do expect that. Okay, so if you take this survey as a mechanism for predicting AI timelines or timelines to when AI will achieve certain tasks or certain performance, how would you compare that to prediction markets, for example, or thinking about the stock market as a way of measuring AI progress? What do you think are the strengths and weaknesses of this survey methodology?
00:15:53
Speaker
Compared to prediction markets, the people we're asking probably know more about AI research than a lot of people in the prediction markets. It seems like prediction markets have a sort of mechanism for people who know more spending more money and being incentivized to do that. I don't know a lot about how well that works in practice for the scale of prediction markets that there are and that sort of thing. I think I'd be surprised if that many of these people were participating in prediction markets a lot, just because
00:16:23
Speaker
they're a relatively small thing at the moment. Yeah, I sort of expect the survey to have more access to whatever expertise they have, but less of being able to update on each other's views a lot, and thinking about it with incentives on the line, and that sort of thing.
00:16:42
Speaker
I've thought less about the stock market as a way of inferring things about this. It seems like there are a lot more incentives on the line to get things right there than there are in the survey. But I don't know how much other things complicate things. Yeah, exactly. There are also many other factors involved in the valuation of Google, for example, than just expectations of AI progress that Google will make. So even though there's a lot more money behind it than a survey, for example, yeah, it might not be better.
00:17:12
Speaker
I think it's also much less straightforward to interpret. Even if it did have good information, the question of interpreting it, there's interpreting it so that you understand it personally, and there's interpreting it so that the world understands it. I think for running this survey, I'm somewhat satisfying my own curiosity, but I'm mostly hoping to inform the world.

AI Traits and Extinction Risks

00:17:34
Speaker
And I think for informing the world about things, there's a benefit to doing like simple enough things that people can clearly see what you've done and know that it's trustworthy. Whereas I think if I were to try and interpret the stock market for the benefit of the world, I think for one thing, it just wouldn't get attention, which I think is reasonable because it would be hard for most
00:17:55
Speaker
people. It'd be hard for a journalist to look at that and be like, oh, you did a good job of it instead of a terrible job of it. And then you sort of think about the whole information economy for figuring out which projects to do to get information about this in the right places.
00:18:11
Speaker
You also had a section around what AI systems will be like in 2043, so 20 years after the survey. And there the experts predicted a bunch of traits that the AI systems will have, where the highest, the most likely traits were things like finding unexpected ways to achieve goals, being able to talk to human experts on most topics, and behaving in ways that are surprising to humans.
00:18:38
Speaker
Are these surprising to you? This sounds like a trend towards capability in terms of talking like an expert, but also autonomy in terms of doing surprising actions and finding unexpected ways to achieve goals. I think I wasn't surprised to hear that they expect them to be able to talk like a human expert on those topics.
00:19:00
Speaker
given that we are pretty close to that in many domains right now. I don't know how close we are to it. In my own experience interacting with ChatGPT, say, I feel like it's sort of hit or miss, and it's hard to really measure with your own experience how far the misses are away. But yeah, at least it sort of
00:19:23
Speaker
feels like not super far away. So I think that didn't surprise me very much. I think sort of finding unexpected ways to achieve goals is perhaps an ambiguous kind of thing that, you know, whenever something is sort of agentic and achieves goals, it's like doing it somewhat unexpectedly. But I think probably people read it as like more unexpected than that, where you're
00:19:46
Speaker
you know, more like, hey, wait, why did you do that? Anyway, a bit surprised to see that so high, since I think that sort of suggests, like, problems. I guess maybe across the board, there are sort of suggestions of many problems that these people are concerned about.
00:20:02
Speaker
The trait that was considered least likely is power-seeking behavior. And this is kind of surprising if you compare it to the other traits I mentioned. So behaving in surprising ways and finding new ways to achieve goals. So if an AI behaves in a surprising way and finds a new way to achieve a goal, how do you know that it's not engaging in power-seeking behavior?
00:20:25
Speaker
To me it would be quite surprising; it would feel surprising if we had new creatures that did have goals and were trying to achieve those goals, and were doing it freely enough that we were surprised by their behaviour. I think another one that was relatively high up is something like lying to or deceiving humans without another human asking for it, or something.
00:20:50
Speaker
I think if things are behaving like that, it's hard to see how they would not be engaging in power-seeking broadly construed, where by power-seeking, I mean not just trying directly to do the thing you want, but trying to get yourself into a situation where you can do the thing you want, whether that's getting some money so you can buy the thing, or getting yourself into a position of importance, or just causing people to be more friendly with you so they'll do what you want, with the aim of getting what you want ultimately.
00:21:20
Speaker
It feels like a very broad kind of a thing, where, yeah, maybe they mean something much more limited, but I would assume that, I don't know, if you looked at game AIs now or something, presumably they're engaging in power-seeking. You know, if they're playing StarCraft or something, I assume that there are various things you do there to set yourself up to beat other players. It would be sort of weird to me if we had
00:21:47
Speaker
strategic agents acting in the real world that weren't doing that. Why do you think experts rated power-seeking behavior as unlikely then? What do you think was the thinking behind that? I don't know. I guess to me it sounds like the thing that's most closely related to extinction risks, or to really bad catastrophes happening. So I could imagine a person feeling like putting that high is saying, I think there's a,
00:22:15
Speaker
you know, a big chance of something quite bad happening here. So it's kind of like making a fuss in some sense, or there's maybe more of a bar to raising the alarm there, whereas maybe for some of these other things, you're not raising an alarm. But I don't know if they're thinking about that. I feel like maybe the connection between that question and an extinction risk scenario is more of a niche thing to think about. So, hard to say.
00:22:43
Speaker
Let's get to the extinction risk question there. The way you and your team at AI Impacts summarized this is that median respondents put 5% or more on advanced AI leading to human extinction or similar, and a third to half of participants gave 10% or more.
00:23:02
Speaker
So we talked about trends in AI capabilities and expert predictions of those capabilities, and those predictions are dropping quite fast. But predictions of extinction risk are quite stable between the surveys. Are they stable from the 2016 survey to the 2023 survey?
00:23:26
Speaker
So we asked various different questions about these, and there are different things you could look at about the answers, like what percentage of people said 10%, what's the median number, what's the average number, et cetera. The question that we actually asked in all three years was the one about how good or bad the future would be as a consequence of HLMI, where people could sort of divide it up between five buckets, like extremely good,
00:23:53
Speaker
somewhat good, and so on. Sorry, I'm forgetting the exact wording, but there are five, and the lowest one includes, e.g., human extinction. In 2016, that was the only question about human extinction we asked. For that question, which we've kept the whole time, the median answer for that worst bucket is 5% every year. There was a different survey run by other people that got a lower number between 2016 and 2022. But yeah, in our surveys, it's been the same. I think they changed the question a bit, so it's not super comparable, but
00:24:23
Speaker
It's pretty comparable, not identical. Yeah, if you take your impression of all the evidence you've seen, is there a trend in expert predictions about extinction risk from AI?
00:24:36
Speaker
I think basically not. There might be a mild recent trend toward less extreme views, in that the fraction of people who said 10% went down between 2022 and 2023, but the median stayed the same. I'm forgetting the numbers; there were four different questions about this. I guess, yeah, we sort of added them as we went along, as
00:25:04
Speaker
we've been curious about different things each time. So I can't remember whether that's a trend across the different questions. So what's the right way to summarize expert opinion here? You could also summarize it as most experts believe that there isn't really a large chance of this happening, or a large risk from AI. You could kind of frame it positively.
00:25:30
Speaker
I think you would have trouble framing it positively to, like, a sensible listener. I think you could make it sound positive for a second or something, but it's like, if your doctor tries to frame your cancer diagnosis to you as, you know, the chance of you dying is, like, not that high,
00:25:48
Speaker
I don't know, there's an 80% chance you'll make it, you're still like, wait, sorry, what? So I think, given the stakes we're talking about, if you take people's answers on this seriously, the fact that most of them think there's a pretty non-negligible chance, even if it's low as far as chances go, in this context it's much higher than
00:26:11
Speaker
it makes sense to be relaxed about. So there's no way to frame this positively or kind of spin it into something positive? I mean, I guess it depends on what you were thinking. There are people around who think it's roughly 100% that we'll be killed by AI. So if those people trusted AI researchers on this question, they might be reassured, but probably they've mostly thought about it quite a bit more, and they're not updating a lot from hearing this.
00:26:40
Speaker
Did you ask about the participants' explanations of why there might be risk of extinction from AI, or their reasoning? Did you test whether they had some arguments for this, or how did you make sure that this is an informed estimate?
00:26:57
Speaker
We did not make sure that this is an informed estimate. We did ask some fraction of people, after each question, some open-ended questions, so that we could try and figure out what was going on. I forget exactly what they were, but along the lines of: how did you interpret that question? What were you thinking in your answer there? But we actually haven't gone back and looked through those yet. It's just a lot of
00:27:23
Speaker
work to analyze open-ended questions; plausibly, we could use AI to do it faster. But yeah, we also asked people to assess themselves how much they've thought about these topics. So we can say something about, you know, the people who've thought more about it and people who've thought less about it, and what their answers were. And do you have any results there, preliminary results?
00:27:45
Speaker
Yeah, so one thing is, having thought more, either a lot or a great deal, about the question was associated with a median of 9%, whereas having thought little or very little was associated with a median of 5%, for how likely AI is to cause human extinction.
00:28:03
Speaker
We were only briefly looking at this, so I'm not quite sure if that's across multiple questions or one version of this question, but something broadly like that. So, a bit more concern if you've thought more about it. And then it's hard to know to what extent that's caused by it; if people become concerned about it, then they think more about it. But it at least sort of rules out the picture where everyone who's really informed about this is like, this is silly. The reverse seems to be going on.
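To illustrate the kind of split described above, here is a minimal Python sketch that groups respondents' extinction-probability answers by self-reported amount of thought and takes the median within each group. All of the numbers below are invented for illustration; they are not survey data.

```python
from statistics import median

# Hypothetical responses: each respondent reports how much they've thought
# about the question and their probability of extinction from AI.
responses = [
    {"thought": "a lot", "p_extinction": 0.10},
    {"thought": "a lot", "p_extinction": 0.09},
    {"thought": "a lot", "p_extinction": 0.08},
    {"thought": "little", "p_extinction": 0.05},
    {"thought": "little", "p_extinction": 0.02},
    {"thought": "little", "p_extinction": 0.05},
]

# Group the probabilities by self-reported amount of thought.
by_group: dict[str, list[float]] = {}
for r in responses:
    by_group.setdefault(r["thought"], []).append(r["p_extinction"])

# Median within each group.
for group, values in sorted(by_group.items()):
    print(group, median(values))
```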
00:28:31
Speaker
Right. Yeah, okay. So kind of related to extinction risk, there is a question or section about what would be best for humanity?

AI Development Speed and Regulation

00:28:40
Speaker
At what speed should AI progress for AI to be best for humanity? And there the results were kind of all over the place, with 5% of participants saying that AI should move much slower,
00:28:53
Speaker
30% saying somewhat slower, 25% saying at the current speed, 20% saying faster, and 15% saying much faster. So that's almost the same for each bucket, except that not many people think that AI should move much slower. What do you make of this?
00:29:13
Speaker
I find this interesting. I'm not quite sure what to make of it, because I was initially surprised seeing this. I would have thought that if you thought there was a serious risk of extinction, and also, in this other question about concerns, people express quite a lot of concern about everything else. So given quite a lot of concerns about different things, any particular one of which seems quite serious,
00:29:41
Speaker
I might have thought that there'd be more enthusiasm for moving slowly, and so I'm not quite sure what to make of that. I think thinking that AI, that we should try to slow down AI, is not overwhelmingly popular even among people who are pretty concerned about AI risk, but I think much of the reason to not do that is for reasons to do with arms races and that sort of thing.
00:30:04
Speaker
Not many AI experts are convinced that AI should move at a much slower speed. Is that just because it's such a downer if you're working very hard on pushing this technology forward, and this is your life's work, maybe? It's difficult to say that AI should move slower and that that would benefit humanity. Is there such an effect, do you think?
00:30:24
Speaker
I think there probably is such an effect. I think other things I could imagine are just, I don't know, it feels like a big deal in some way to ask or to sort of decide that you're against something that you're working on or something. I could imagine, you know, being tentatively in favor of, you know, AI progress until you've really thought about it a lot. But I think there are also various reasons you might think that even if you are pretty concerned about, like,
00:30:53
Speaker
AI causing some kind of catastrophe, that going slower wouldn't help. I guess among people I know, I feel like there are various theories for why it would be better to go faster. They don't sound compelling to me. In the survey, we specifically said that
00:31:15
Speaker
the speed would affect all of the projects equally, so that sort of rules out a lot of those things, like arms races, or, you know, if it's important that this AI lab is ahead of that AI lab or something like that, which I think accounts for a decent chunk of the lack of enthusiasm for going slower. But yeah, I think there are other stories around as well.
00:31:37
Speaker
There's a story surrounding compute overhang, an overhang in the amount of chips and the speed of those chips available to train AI. That continues to grow if we go slower, and so you can then train an advanced AI for less money if we wait or if we pause or if we go slower.
00:31:58
Speaker
There's overhang in algorithms too, or maybe even in collecting the right training data sets. I know you've thought about this, so maybe, what do you think about what speed AI should move at? I'm actually on the fence. If I had to guess right now, I think
00:32:16
Speaker
slower. More specifically, I think probably pushing for it going slower is good, but I would like to think more about it, particularly because of these kinds of arguments about overhangs and about, in practice, these sort of arms race considerations. Or, I guess, yeah, I think it's really wrong to call it an arms race in many cases, because
00:32:39
Speaker
an arms race implies certain things about the incentives, which I think are not obviously true; for instance, if everyone plausibly dies when you win, it's not exactly the same as winning. But yeah, tentatively for slower.
00:32:54
Speaker
Yeah, it's such a broad question, and therefore probably difficult to answer if you're a participant in this survey. For example, you might be an expert in a narrow domain in AI, but asking what speed AI should move at to benefit humanity the most, that involves economics and all kinds of sociological and societal questions that you're not necessarily an expert in. Grappling with this question yourself, I think it is difficult to kind of
00:33:24
Speaker
wrap your head around all of the factors involved and come to a conclusion here. I think there are different purposes of this survey there. There's trying to find out what is true about the world and what will happen.
00:33:36
Speaker
I think it's also interesting to find out what is true about AI researchers and what they think about things. I think if AI researchers broadly think that their research is very dangerous and they expect it to go badly, that's very important compared to a world where they're all very gung-ho, just in terms of what kind of cooperative things could be done.
00:34:00
Speaker
Okay, so let's dig into some of the specific scenarios that experts are worried about here, where almost everyone is basically worried about spreading false information and deepfakes and manipulating large-scale public opinion. So what does this tell you? Why is there such agreement around those two issues?
00:34:22
Speaker
Yeah, I'm speculating. It seems like those are issues that are pretty close to happening; they're not speculative. I sort of wonder if people are combining probability and badness in a way that they wouldn't if they thought a lot more about it here. Can you explain what you mean by combining probability and badness?
00:34:47
Speaker
Well, they're answering like how concerning are these things? And I think it can either be like very concerning because it's like extremely bad and a bit probable or very concerning because it's like very probable and somewhat bad. I guess these things seem like they're decently probable and decently bad.
00:35:07
Speaker
I'm not sure whether they would consider a 5% chance of destroying the world bad enough to overwhelm its lower probability compared to some of these other things. I sort of wonder if they do not. If many of them genuinely think there's like a 5% chance of the world being destroyed, or humanity being destroyed, do they in fact think that that's overall a smaller concern? I'm not sure. I think a reason
00:35:33
Speaker
that we don't obviously disagree here is that all of these other things, or many of these things, kind of mess with everything, such that it seems plausible that I in fact should be more concerned about manipulation of public opinion than directly about some particular extinction risk, because manipulation of public opinion makes it quite hard to
00:35:56
Speaker
run society well, like to have good governance of the other things going on, for instance, technological risks, such that even if these things don't immediately sound completely catastrophic, if they make everything else go worse, they're sort of factors for
00:36:15
Speaker
all of the problems. You could say that a society where we can't process information correctly is a society where we can't deal with basically any problem. If we can't find out what's true, then, you know, where do we go from there? If I were trying to steelman the position that the biggest concern is manipulation of opinion, or false information spreading, I think that's the kind of thing I would say, but I'm not sure how much that's what they are thinking.
00:36:45
Speaker
In terms of other specific risks they're worried about, they're worried about helping dangerous groups make powerful tools like engineered viruses. They're worried about authoritarian rulers using AI to control their population. And they are worried about making economic inequality worse.
00:37:03
Speaker
So I find the last one there kind of in the same bucket as spreading false information and deepfakes and manipulating public opinion, in that it's a kind of broad effect that in some sense would make society worse. Whereas thinking about engineered viruses or authoritarian control, those are more specific.
00:37:27
Speaker
I think I disagree that those are more specific. Or, even if they're more specific, they're still quite broad in their effects. Like, you think of COVID; I feel like that kind of screwed up everything in society a bit. And so I feel like, if you're dealing with even specific conflicts, or bad people doing things, or diseases, they can just, you know, make everything more inconvenient.
00:37:51
Speaker
So when I read papers by researchers specifically studying AI risk, I find they often talk about something like an engineered virus or an engineered pandemic, and they don't put the same weight on misinformation or false information or deepfakes. At least they haven't historically done so.
00:38:14
Speaker
Yeah, why is there this discrepancy between kind of broad AI researchers and researchers in specific AI risk? One thing that sort of comes up for me when I think about deepfakes, say, is that I think you could have argued that there would have been a huge problem already, and I don't know of huge amounts of
00:38:34
Speaker
problems arising from them. I feel like I know of scattered cases, but it's not like, oh yeah, you just can't trust anything online anymore, no one knows what's happening. And so I think, from my perspective, I don't really understand how the information ecosystem causes most things to be either reliable or seem unreliable. But
00:38:56
Speaker
whatever it's doing, it's not clear that being able to make a really good fake picture or something is a match for it. You could already Photoshop things, and I don't know that that made a huge difference to anything. So now it's cheaper and better perhaps, but it's not clear to me that we don't already
00:39:15
Speaker
have systems for dealing with that given that it hasn't become a problem. And so if I am a kind of representative example, which I'm probably not, but more representative of the kind of like people thinking about AI safety, it could be that that kind of view is more common there, whereas I less know what the technical AI people are thinking.
00:39:33
Speaker
Yeah, although I think we have seen some examples of financial fraud using deepfakes, specifically voice deepfakes, where

AI's Societal and Economic Impacts

00:39:44
Speaker
a scammer will call an employee pretending to be the chief financial officer of a bank or something.
00:39:51
Speaker
And these are scams involving millions of dollars. Recently, it's been possible to make a fake picture of a driver's license that's usable enough to sign up for online services where you have to prove your identity. And so you could see how these deepfakes would begin to disrupt some of the systems we have for making sense of the world and putting humans into a human bucket and bots into the bot bucket.
00:40:20
Speaker
Yeah, it definitely seems like on paper they should be able to disrupt various things like that. I mean, so far, my experience of banking hasn't become impossible or something due to the rise of this kind of thing. And so, I don't know, we haven't each experienced some huge problem from this.
00:40:39
Speaker
Not yet, at least. Okay, so you summarize the participants in the survey as thinking that kind of high hopes and dire concerns for AI go hand in hand. So what does this mean? So participants who believe that AI could be extremely good also believe that AI could be extremely bad. Is this related to a kind of general belief in the power of intelligence itself, do you think?
00:41:09
Speaker
I mean, I guess I don't know that they believe that more than the people who don't. I think the observation is more like, you might imagine that the expectations are pretty polarized, where there are some people who are really optimistic, like it's definitely going to be fine, probably amazing, and some people who are like, it's
00:41:27
Speaker
very likely to be bad, probably terrible. I think that's not what we see. It's more like the bulk of people have some probability on very bad, some probability on very good, some probability on some intermediate things. And so, yeah, I guess it's
00:41:45
Speaker
more like lots of people are on a similar page, very broadly similar. They definitely have different probabilities on these different outcomes, but it's, I guess, a quantitative, not qualitative, difference, and sort of not that polarized. You also asked about the scenario of an intelligence explosion, where experts are also quite divided, I would say. So the intelligence explosion is
00:42:11
Speaker
the development of an AI system that accelerates technological progress and then that progress accelerates AI again and then you get kind of a feedback loop of AI improving AI until we have massive technological progress in the span of say five years.
00:42:30
Speaker
And experts believe that, again, I'll read the numbers here. So 9% said that something like that is quite likely. 20% said likely. 24% said about even chance. 24% said unlikely. And 23% said quite unlikely. Again, it's kind of like it's very evenly distributed, except that only 9% said quite likely. So what do we make of it?
00:42:54
Speaker
I think at a high level it seems like predicting the future on these things is tricky and people are kind of all over the place. I think maybe the main thing we make of it is most of them think that this is a plausible thing to happen.
00:43:13
Speaker
And if it's a pretty important thing, then you don't really need to be at like 99% for it to be worth keeping an eye on. I think the decision-relevant thing is kind of, is this on the table? And it's like, yeah, pretty much everyone thinks it's on the table. So let's keep an eye out there. I think it's sort of ambiguous.
00:43:32
Speaker
I guess we said like less than five years in particular for more than an order of magnitude faster progress. So that's kind of specific. But I guess in general, we're talking about whether there'll be an intelligence explosion. Like we're talking about a feedback loop of technology, improving technology. That's kind of clearly happening all around us all the time and has been for a long time. And so the sort of question of like, how tight is the feedback loop? Or like how fast is the feedback loop?
00:43:57
Speaker
Like, surely AI is helping with AI already; surely computers are helping with computers; surely pencils are helping with pencils. So then I think a complaint that I've had in the past is that people sort of quickly jump from there being a feedback loop to it might be arbitrarily fast. Like, I don't know, there are
00:44:16
Speaker
a lot of feedback loops. You need some reason to think it will be arbitrarily fast. I guess if you're asking people, will there be something like this feedback loop, I can imagine them just giving different answers based on whether they're thinking of this as a really out-there thing to happen, or basically like, yeah, of course AI is going to improve AI.
00:44:33
Speaker
So the likelihood of an intelligence explosion depends on the speed of the feedback loop. So how fast will progress in AI cause further progress in AI? What do we know about the speed of the feedback loop or the tightness of the feedback loop?
00:44:49
Speaker
For example, I mean, we could ask, so large language models, how much do they improve programming at OpenAI or DeepMind or something like that? Is it 5%? Is it 50%? And that would be a pretty direct feedback loop, I would think.
00:45:05
Speaker
I actually don't know, but it sounds like the kind of thing someone might have actually checked, or at least for programming in general. I feel like maybe I've seen such numbers, but I can't remember them. I think I would guess that it's non-negligible, but not twice as fast or something. But yeah, I think things at that level of usefulness are not crazy, as in you get various software tools that are helpful, and software generally gets faster.
00:45:34
Speaker
Yeah, this question of intelligence explosion is kind of related to discontinuous progress, which is something you've written about. And there, I think the main conclusion is that, first of all, what is discontinuous progress?
00:45:48
Speaker
When I've written about it, I'm talking about jumps in progress. So something's going along at some rate, and then suddenly things are much better. In particular, we tried to measure it in terms of how many years of past progress happened in one jump. So if there was a lot of improvement, but there had also been a lot of improvement the last three times something was discovered, then it might not be a discontinuity. For example, I looked into penicillin
00:46:14
Speaker
as used for syphilis, which was apparently incredible. But also, I think the previous drug was nicknamed the silver bullet because it was so amazing compared to the drug before that. It's easy for it not to be an actual jump in the progress trend.
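To make the "years of past progress in one jump" metric concrete, here is a minimal Python sketch under the assumption of a roughly linear historical trend. The numbers are made up for illustration and are not from AI Impacts' data.

```python
def discontinuity_in_years(history, new_year, new_value):
    """history: (year, metric) pairs assumed to lie on a roughly linear trend."""
    (y0, v0), (y1, v1) = history[0], history[-1]
    rate = (v1 - v0) / (y1 - y0)            # average progress per year
    expected = v1 + rate * (new_year - y1)  # value extrapolated from the trend
    return (new_value - expected) / rate    # excess progress, measured in years

# Made-up example: a metric improving by ~2 units/year suddenly gains 30
# extra units beyond trend, i.e. roughly 15 years of progress in one jump.
history = [(2000, 10.0), (2010, 30.0)]
print(discontinuity_in_years(history, 2011, 62.0))  # -> 15.0
```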
00:46:33
Speaker
I think some examples in AI of discontinuous or jumpy progress would be AlphaZero for chess, where we see a large jump in the capability of chess engines, or GPT-3 for language modeling or language understanding. Do you agree that those would be examples of discontinuous progress?
00:46:54
Speaker
I'm not sure without looking at the trends. I think one thing I learned is that often things are popularly believed to be discontinuities because they seemed pretty wild. But if you sort of look at them, like whatever the progress was in the background was kind of already heading for there. Yeah, it seems plausible.
00:47:13
Speaker
So I think one conclusion you draw about discontinuous progress is that the base rate for such jumpy progress is quite low; this is something that doesn't happen often. And that there are many feedback loops out there, but these feedback loops are maybe not fast enough to cause these jumps. Is AI the exception to these low base rates? I think, actually, well, I guess I would usually divide
00:47:43
Speaker
the predictions of AI having sudden increases in performance into two categories, kind of like before the AI is very helpful for building more AI, and after, where after is where you would see an intelligence explosion. And when I've been working on this, I've mostly actually been just thinking about before. And so not really thinking about like, how fast feedback loops feedback on themselves, but more
00:48:09
Speaker
like, do other traits of AI make it likely that, leading up to human level, you would see a big jump? Where often, people thinking about AI safety have thought that there would be quite a jump getting to human level, and then that would sort of
00:48:26
Speaker
lead to an intelligence explosion, or that an intelligence explosion would start quite suddenly, which I think is implicitly claiming that there would be quite a jump getting there. Because otherwise you might expect that it starts slowly, you know, when AI is a little bit useful for AI and contributes to itself more and more. If it's sort of like, you make a thing that one day is like, all right, now I'm going to, in five seconds, explode into a superintelligence, that sort of implies that it went from
00:48:53
Speaker
very bad at improving itself to very good at improving itself prior to that intelligence explosion, and then you have a question of, well, why did it see that huge jump?

Continuous vs. Discontinuous AI Progress

00:49:02
Speaker
So that's the thing that I've mostly thought about. And there, I couldn't find any arguments for AI being different that seemed clearly compelling, though I think there were various ones that maybe could be compelling,
00:49:18
Speaker
but they weren't clearly not compelling either; I just want more information. So the thing that I've done on this is look at a lot of other technologies and see how often there are jumps in any kind of technological progress trend, and then also go and try and find arguments that AI would be different. Yeah, I haven't much looked at the question of whether AI will be different in terms of intelligence explosions.
00:49:43
Speaker
You've written about this kind of threshold around human-level intelligence. This is what you talked about earlier, where I guess the expectation would be that if we can get AI to a human level, when we cross that threshold, the AIs will be much more valuable and will have an enormous impact on the world.
00:50:04
Speaker
But when they are right below the human level threshold, they won't have that impact. Maybe we could say that AI progress will sneak up on us because at some point, AI will be a better programmer than the best human programmers. But before that point, AIs aren't having enormous impact.
00:50:25
Speaker
Right when they cross that kind of human level threshold is when there's a huge payout to all of the AI development. Is that a plausible picture to you? I think not really. I think that might be a part of the picture, but I think it's substantially missing
00:50:46
Speaker
just that there are lots of different humans with lots of different abilities at different things. There's a point you could call crossing, like surpassing humans, where AI is able to do literally everything, let's say literally everything, better than one particular human, Bob. By the time it can do better than Bob at everything,
00:51:08
Speaker
prior to that, there were already a lot of things it could do a lot better than Bob at; it could have a different distribution of abilities. And so I think we're going to see a more continuous set of things that AI is more worth paying to do X for than Bob over time, leading up to
00:51:28
Speaker
being able to do literally everything. I think also, for many things, even if you're worse at everything than someone else, you can often still help, as long as, you know, there are still things to be done. So I think even if AI is worse, it's going to be used for lots of things. And so I think that's sort of another reason to expect things to happen more continuously. It's more like, well,
00:51:53
Speaker
whatever AI exists is collaborating with whatever humans exist. People are much too quick to assume that whoever is smartest and best will do like literally everything. And why is it that they won't do everything? So why is it, say that I'm the CEO of a corporation, why wouldn't I just choose kind of the highest performing AI to do all the tasks?
00:52:16
Speaker
You might choose the highest-performing AI to do all the tasks if you can pay to have as much of that AI as you want. But even if the AI is not as good as Bob, you can't actually pay Bob to do all of the tasks, because Bob is a more limited entity. You might end up paying the AI to do half the tasks, even if it's worse than many of your employees, either because there are only so many employees you can get, or because there are tasks that it is apt for even if it's not better.
00:52:44
Speaker
Yeah, I think economists in general are kind of worried that we might be making a mistake if we say that when AI gets to a certain point, AI will do all the tasks in the economy or do all the jobs. Because even if my productivity is 10% of the best AI, I can still contribute something, and so I can still be part of the economy.
00:53:07
Speaker
Do you think that argument holds? Or is AI such a step change that it would be like saying, you know, the horses can still be part of the modern economy even though they aren't as fast as cars? Where does this argument go? It seems like you're like, all right, well, Economics 101 is comparative advantage: everyone can do something. But it seems like, will everyone in practice be doing something for some wage? Like, is everyone that useful? It
00:53:35
Speaker
seems like, in fact, if you can do something in the AI world, but the amount that you can get paid for it is what it would cost to run a machine for one second, and you get that every year or something, then it's implausible that you will be able to live on that salary. And I think that's probably where this runs into the ground, except
00:53:57
Speaker
you might ask, well, can you live on that salary? Maybe you can, because maybe the world is sufficiently productive. At this point, we're making predictions about the price of beans in a future AI economy. I guess I don't actually know how to demonstrate that you won't be able to support yourself, but it seems plausible to me if I knew slightly more economics.
00:54:23
Speaker
And the argument there would be that everything is so cheap that even if your wages are very low and you might not be able to find a lot of work, you can still earn enough to support yourself and maybe even live a luxurious lifestyle.
00:54:40
Speaker
Right, but I guess the way you go with this is: all right, suppose that I can do a certain amount of work in a year; then how much AI does it take to do that work? Let's say AI can just be, like, more and more of it can be built. So let's say that
00:54:55
Speaker
the AI that can do the same work as me, and can be arbitrarily produced, is going to do it for less money; it's going to take less for it to exist than for me. Then it seems like it can push the wage below where I can live, and so I can't live. I think that's
00:55:14
Speaker
how I would run this argument, and then say you are in fact at some point screwed. But yeah, I'm not an economist. Okay, so when we're thinking about continuous progress in AI and the possibility of an intelligence explosion, how relevant is it to think about human evolution and the difference between our brain and the brain of our closest evolutionary ancestors?
00:55:43
Speaker
I guess, what would you think about that, specifically with respect to an intelligence explosion? It would be something like: look at how powerful humans are. There isn't that much difference between the brain of a human and the brain of a chimpanzee. And so if we get AI at a certain level, maybe not that much more capable than us on some metrics, their impact on the world will be enormously greater than our impact, or their power in the world would be much greater than our power.
00:56:12
Speaker
Maybe you don't need very much of an intelligence explosion to get heaps of power.
00:56:18
Speaker
I feel like, wouldn't this argument just apply to all kinds of things? You might feel like, I don't know, if we just put a little bit more effort into our product, maybe it will radically remake the world or something, which does happen sometimes with products, but mostly not. I guess the thought is that this would only apply to agents or something, but we could also say, I don't know, elephants: shouldn't there sometimes be better elephants that suddenly take over and kill all the other animals or something? I haven't
00:56:45
Speaker
thought about it that much. So I guess I want to have a clearer picture of what is being claimed and then check it against other empirical observations we can make. Yeah, I guess the argument would be something like: being 25% better is a world of difference. And so if AI becomes slightly superhuman, it will suddenly be much more powerful than we are.
00:57:12
Speaker
So there's some metric of brain goodness, and the thought is that a small amount of movement on the brain-goodness scale will get you vast amounts of power. And I guess the claim is that all humans are basically at the same point on the brain-goodness scale, except very disabled people?
00:57:32
Speaker
Yeah, only to the extent that all chimpanzees are kind of on the same level on the brain scale too. So at least both chimpanzees and humans cluster around some level of intelligence or capability.
00:57:48
Speaker
I don't know if the variability is greater within chimpanzees or humans, but at least it's kind of coherent to say that we are different and that chimpanzees are clustered around somewhere on the brain scale and humans cluster around a different point on that scale.
00:58:09
Speaker
Take an individual human and an individual chimpanzee, where neither of them has any access to culture. Does the human do heaps better than the chimpanzee in terms of dealing with the world? My guess is not. I haven't thought a lot about this lately, but my impression is that there's a good chance that it's
00:58:28
Speaker
human culture, and our ability to do human culture, that makes humans great. On that kind of story, an individual human is pretty powerless, but the meager set of things they learn in their lifetime, they get to add to a big pile that everyone else borrows from and adds to. And the chimpanzees aren't doing that. I think you would then ask: in that picture, what does AI look like?
00:58:53
Speaker
And it's like, well, an individual AI might be even better at adding to such a pile, if the thing we're looking at is it being markedly better than humans at these things. But even if it's markedly better at getting the meager things it adds to the pile, that's still not going to make it suddenly better than the whole pile,
00:59:12
Speaker
unless it really is vastly better. If it's better at adding things to the pile and taking them off the pile, that seems like it might cause an individual one to be substantially better than an individual human that's doing things. You might then ask: are we going to have multiple different piles, like a human pile and an AI pile? Why would you have that? You might expect that everyone is still contributing to the same pile.
00:59:35
Speaker
Yeah, except maybe we can't understand what they are doing, right? Maybe they're building upon our concepts in ways that are opaque to us, and so they're beginning to build their own culture that we can't benefit from. And then they kind of run away from us in terms of understanding and power in the world.
00:59:55
Speaker
I think it's an interesting frame, because I think what we're seeing right now is that AIs are consuming humanity's culture, all of it. Basically, if you're training on the internet, you're getting the entire culture of humanity, or at least a large part of it.
01:00:15
Speaker
But I guess the question there is how efficient are they at benefiting from that culture or becoming smarter as a result of training on that culture.
01:00:26
Speaker
So AIs, and specifically large language models, are sample-inefficient compared to humans. They don't get as much out of a piece of data as we do, and so maybe they're falling behind there. Do you think that's temporary? Is this a useful frame for understanding AI development?
01:00:45
Speaker
I think my guess is that it's useful at the moment and will stop being useful at some point. In terms of adding things to the pile, I also don't really know of really interesting insights that have been added

Key Factors in AI Scaling and Future Outlook

01:01:01
Speaker
by AI, to my knowledge, except for maybe things that are in the news, like DeepMind things, some kind of discovery that I don't know the details of. I can think of cases where my friends have an insight, they tell me, and I'm like, that sort of changes my model of the world. How often does that happen
01:01:27
Speaker
from AI? Are there ones floating around that are making a difference to my picture of the world now, where I don't really know they came from AI? Maybe. Yeah, I agree that we haven't seen deep scientific insights yet, I would say. Maybe there was a DeepMind paper on discovering a new computer science algorithm for speeding up some kind of low-level process. But I agree that we haven't seen great scientific advances.
01:01:55
Speaker
I wasn't mostly pointing at fancy science things, though. I mean, even just, I don't know, someone writes an op-ed and they're like, here, maybe we should think about this thing happening in politics this way instead of that way. And I'm like, oh yeah, maybe. But you haven't seen that from language models? Maybe I haven't, or I'm just
01:02:18
Speaker
not remembering it. Yeah, maybe among the things I've asked ChatGPT about my life or something, I feel like there have been some things where I'm like, oh, that's... I mean, I think if we look at the results from your survey, experts predict that we'll get that kind of insight at least at some point. When was the prediction around getting an AI-written New York Times bestseller?
01:02:43
Speaker
I forget, but I think it was about seven years, just around 2030. So at least the experts think it's coming. Right. So yes, to the extent that a New York Times bestseller adds to the kind of collective culture of humanity, we'll get that. Yeah.
01:02:58
Speaker
AI scaling is also an important topic. What do you think is most important in the scaling process? If we split out the ingredients into computing power, training data, and algorithms, what is most important for scaling AI?
01:03:19
Speaker
I feel pretty ignorant about this question, but we did ask something like this in the survey. We asked about different inputs, and for each of them: if there had been, I forget what it was, something like half as much of it in the last time period, how much less AI progress would we have seen?
01:03:35
Speaker
And I guess I find the answers there surprising, in that people are very divided, and not in a polarized way; their answers are just all over the place, and kind of similar across the board. Whereas I think my impression was that there's at least a
01:03:52
Speaker
popular narrative that hardware is most of it, and so I think this was some evidence against that. I do wonder there whether people understood the question properly. Was it a hard question to understand? It was kind of complicated.
01:04:10
Speaker
I agree that there's a narrative out there around compute being the main and most important ingredient. But I do wonder how you could trade off between, say, having less compute but more training data, or less compute but better algorithms, and so on. It's a complex issue. Do you have an impression of that? Did you find anything out about it in the survey?
01:04:34
Speaker
I haven't looked into this in a while, so I think there is stuff out there about it that I'm not familiar enough with. In the survey, we didn't ask people to trade things off against each other, but we asked for each different thing: what if that was lower? I think the various things we asked about looked more comparable than expected, and it wasn't such that compute stood out as the overall most important factor in the expert opinion. Right. I think it was kind of similar to the other things.
01:05:04
Speaker
Okay, I want to end with some, you could call them rapid-fire questions, or at least your impressions of what we should do. You're kind of a veteran of thinking about AI risk; you've been interested in this topic for a long time. What's the best path forward here?
01:05:23
Speaker
Where are we, what should we do, and what are the best proposals for AI safety? What we should do depends a lot on who 'we' are. I think at a high level I'd broadly be in favour of regulating it a bunch more. In terms of individual action,
01:05:44
Speaker
for a lot of people it would probably be good to just have their eye on this topic more and try to be right about it. Try to have accurate enough opinions that, to the extent your opinions are part of public opinion and change what happens in terms of regulating it, they're a force for good, because I think it's very high stakes.
01:06:08
Speaker
Maybe also, selfishly, just try to pay attention to how this is actually going to affect your life quite soon, for most people. Yeah, how do you think your opinions about that differ from mainstream opinions? What do you believe is going to happen to us personally that is not widespread in culture? I think for various of these narrow forecasts about what will happen in the next few years, that could affect
01:06:34
Speaker
particular people. If it's true that within five years AI can build a website from scratch, I think for a lot of people that means they should be investing in different things than the ones they might be investing in, that sort of thing. Anything else that comes to mind? I guess generally, investing in your own labor seems less good than having other investments in the longer run.
01:06:58
Speaker
Have you changed your life based on what you've learned about AI over the last decade?
01:07:05
Speaker
It seems like life is pretty different in that I'm working a lot on this sort of thing; my life would look extremely different if AI wasn't involved. But in terms of smaller edits, I think it probably makes me more hesitant to have children right now, say, partly because it's probably a lot of effort, and it seems like this is a really important issue that I don't want to
01:07:32
Speaker
not put effort into. But also, I just feel like the technology might change a lot in the coming years. So for instance, maybe you could have a much healthier child, or something like that; there's just a lot of technological progress coming in the next decades, and I could imagine it being better to do this later. All right, thanks for chatting with me. It's been informative. Thank you, thank you for having me.