
Connor Leahy on Why Humanity Risks Extinction from AGI

Future of Life Institute Podcast

Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.   

Here's the document we discuss in the episode:   

https://www.thecompendium.ai  

Timestamps: 

00:00 The Compendium 

15:25 The motivations of AGI corps  

31:17 AI is grown, not written  

52:59 A science of intelligence 

01:07:50 Jobs, work, and AGI  

01:23:19 Superintelligence  

01:37:42 Open-source AI  

01:45:07 What can we do?

Transcript

Introduction and Background

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker, and I'm here with Connor Leahy, who is the CEO of Conjecture. Connor, welcome to the podcast. Thanks for having me back. Glad to have you. You're the author of The Compendium, which is an introduction to AI risk from the ground up. Why did you write this?
00:00:19
Speaker
So saying that I wrote it is very generous. It was very much a team project where I probably contributed, you know, not the most. It was a team project between me, Chris Scammell, Adam Shimi, Gabe Alfour, and Andrea Miotti, who all contributed important parts. So it couldn't have been done without my co-authors.

Motivations and AI Risk Transparency

00:00:38
Speaker
Just want to really stress that. So.
00:00:41
Speaker
The reason we decided to write The Compendium, well, I think I was actually the person who decided to write it and then kind of roped everyone else in, was that, you know, there are many arguments around AI risk and many explanations and so on, but they're often scattered. They're often scattered in strange locations, fragmented across blog posts or interviews and stuff like this.
00:01:03
Speaker
And they're often written by and for audiences that are not your typical everyday person. They're often unnecessarily technical. They often require a lot of jargon, or buying into other assumptions or philosophies or worldviews that are just not necessary. That's one reason. So the first reason was just to create something that has the whole thing in one spot. The second reason was to actually aim at a non-technical audience rather than a technical audience. And I would say the third reason is that there are actually some arguments which I think have not been made with this clarity elsewhere, in particular around the social and the political dynamics. Many of the people talking about AI risk, as I talk about in the Compendium,
00:01:50
Speaker
have conflicts of interest, or reasons to not want to talk about certain conflicts of interest or political situations or conflicts, for various reasons. One of the things that I have going for me is that I am very independent. My money doesn't really depend on anyone.
00:02:08
Speaker
In particular, I can kind of say whatever I want and also, you know, call out anyone I want. And I try to make use of this privilege that I have. So there are many people and organizations that I believe deserved critiques that had not been made, at least publicly,
00:02:24
Speaker
and not in this clear form for various reasons. So a part of that is I wanted to make these critiques and drive a wedge. Make clear that, look, there is an actual conflict here. Maybe it's a conflict that can be resolved.

Policy and Corporate Concerns

00:02:37
Speaker
You know, conflict doesn't mean that we can't find a peaceful resolution, but we should be clear there's an actual disagreement here. I believe strongly that if there's a disagreement, you should have it. It's okay to disagree about things. What you shouldn't do is pretend to not disagree.
00:02:51
Speaker
Maybe you could give an example of what you're talking about here. What's something that you've pointed out that hasn't been said before? I mean, it's been said by someone somewhere, I'm sure. But a big thing is that there is a very common confusion that happens when I talk to, for example, policymakers in DC, where they meet the effective altruist movement, or the AI safety movement around Open Philanthropy and these kinds of people. And they're often very confused by these people.
00:03:19
Speaker
The thing that kind of happens is, these people come to DC and they say, oh, AI, huge risk, it's going to destroy everything, it's going to literally kill everybody. And then policymakers are like, wow, okay, that seems pretty bad. So should we ban it? Oh, no, no, don't ban it. And that's very, very confusing. There's this group, which I think we're going to talk about a bit later, which formed this alliance of various organizations and people who claim to be the AI safety movement, or to represent the AI safety or x-risk movement or whatever, but fundamentally want to build AGI and are trying to build AGI. This is around groups such as Anthropic, Open Philanthropy, the effective altruist movement, previously FTX and stuff like this. And I think there are some people in this area, in this, you know,
00:04:10
Speaker
realm, who do care about the risks and do want to stop AGI from causing massive risks and so on. But there are those who don't, or who think that as long as they're in control of AGI, it's okay, or that this is the best plan. Those people actually disagree, but they don't know that they disagree. So driving a wedge is making two people aware that they're actually not allies. They think they're allies, but they're not. And this is actually very, very important for coordination, because if you try to coordinate with two people who are actually different factions and they don't know they're different factions, you can't coordinate. This will always lead to confusion and misalignment and betrayal and so on. So I think it's very important that the conflict should be had.
00:04:56
Speaker
If you think building AGI is okay, like Anthropic should just be allowed to build AGI or whatever, then, okay, you should state that clearly. And then people who don't think that's okay should also state that clearly. And they should have the conflict. They should. And I disagree with this. I am firmly, no, I don't think that Anthropic or any other private corporation should be building AGI. Period.
00:05:22
Speaker
At least not in the foreseeable future. I'm guessing you're not against AGI in a kind of blanket way, saying that we should never ever build it. Do you think we should never build AGI or superintelligence?
00:05:35
Speaker
My true opinion is that I think I shouldn't get to choose. This is not a decision that I should be making. This is a decision that humanity or, you know, democracy should be making, not me. I think it's extremely presumptuous and kind of morally wrong to want to, or decide to, make a choice like this for humanity.
00:05:54
Speaker
I don't know what the best future for humanity is. And like you know maybe I would like X or Y more than Z, but then I should get to like vote on that and I should have my human rights respected. But like if I really want to live in this world, but other people don't, I don't think I should have the right to enforce that upon them because I'm going to be the one who builds AGI first. I think that is like morally evil or like very close to it. Yeah. All right.
00:06:21
Speaker
But Connor, what if others are racing to build AGI before you? I'm saying that partly as a joke, but this is an argument that's often made, right? We will build it first and we will build it in a safe way, and if we don't do it, others will do it in an unsafe way.
00:06:38
Speaker
Exactly. This is the main argument that these people make, and these are the exact people that are going to kill us. It's important to understand that if you find yourself in this situation, you are the bad guy. You are the person who is going to kill everyone. Let's be very, very clear here. That's not saying that you're acting irrationally. That's not what I'm saying. It might be the rational thing for you to do given your local incentives, but you must also understand that you are an agent of Moloch.
00:07:03
Speaker
You are the entity that is, you know, breaking everything. Like, you're the problem. There is this really funny image that I like very much, which is a picture of Sisyphus pushing a boulder. And the caption is, a lot of people think they're Sisyphus, but actually they're the fuck ass boulder.
00:07:19
Speaker
And if you are one of the people who are like, I must race for AGI, or you're working with an organization that's like, I must race to AGI, you're the fuck ass boulder. You're the thing that is creating x-risk.
00:07:32
Speaker
And look, I take a very systemic view when it comes to morality. I think most bad things happen for systemic reasons, not because there's one evil guy who is specifically evil and decides to be evil. I think this does happen, to be clear. There are people who are like this, but it's really quite a minority. Most bad things happen for systemic reasons.
00:07:55
Speaker
They happen because of power structures, they happen because of orders, or because of market forces. Look, I'm German. I know how it is. Most Germans are and were fine people. They weren't super evil, but bad systems lead to bad outcomes.
00:08:13
Speaker
So I take a very systemic view when I think about these things, and this is a systemic problem. There is a fundamental thing where every individual actor who finds themselves in this scenario is incentivized to do evil, which is to race to AGI, and they do. And if they didn't, they would no longer be part of the race, and then whoever cares the least about this enters the race. So there's also selection pressure,
00:08:39
Speaker
where if you actually cared about AI risk so much that you wanted to not build AGI, well, then you wouldn't be at OpenAI or Anthropic, you would leave, you would not work there. So there's also a selection effect. It's kind of like if the CEO of a big oil company came up to me and he was like, I don't really think climate change is a big problem.
00:09:01
Speaker
This isn't evidence to me. like I don't update based on this information. I'm just like, well, yeah, duh, you were selected to not care about

Regulation and Ideological Motives

00:09:07
Speaker
this. Otherwise, you wouldn't be the CEO of Oil Corp. I don't care. Your opinion is invalid. There's a similar selection effect that happens here as well.
00:09:16
Speaker
So there's this big thing where, okay, you're in a race, so what the hell do you do? The weak answer is, well, you just give in to your incentives, you do what's locally optimal for you, and you don't do anything. And this is the convenient excuse that people tend to use. I don't quite buy it, and I'll talk about that in a second.
00:09:32
Speaker
The main thing you should do is stop, drop, and catch fire. You should be like, help, help, help, I'm in a race. Help, help, help. You should go to the government. You should go to the UN. You should go to every newspaper that will listen to you. You should form a political party. You should be going, help, help, help, I'm in this terrible situation, I can't get out. We need to get out of this together. Help, help, help.
00:09:53
Speaker
This is what you would do if you actually wanted to stop the race. If I was Dario Amodei or whatever, I would not build AGI. I would go to the president and be like, help, help, help, we have to stop these people from racing. I can't stop unilaterally. We have to stop all these people from racing.
00:10:11
Speaker
He should found a political party. He should build a coalition. He should get unions on board. He should start international collaborations on this kind of stuff. Actual coordination, actual civics, actual politics. It's the same problem as the nuclear arms race. It's the same situation. No one wins a nuclear war and no one wins an AGI race.
00:10:35
Speaker
Some of the leaders of the AGI corporations have done something that is superficially similar to what you're talking about here, which is they've gone to the US government and talked about some form of pseudo-nationalization or some form of government collaboration. The framing there is, though, we need to build these systems and we need to do it safely. So what's the difference between what they're doing and what you're suggesting?
00:11:02
Speaker
The difference is in what they actually did. For example, let me give a concrete example. I think it was Senator Blumenthal when he was talking to Sam Altman during these hearings. Sam Altman, in 2015 I believe, wrote a blog post where he talked about how machine intelligence, superintelligence, poses the gravest risk to humanity's survival. Very unambiguous. Extremely unambiguous what he meant by this. This is one year after Bostrom's book came out. It was kind of a direct response to it. It was extremely unambiguous what this meant.
00:11:37
Speaker
Senator Blumenthal quoted this exact line. He said, so you wrote this. And then he said, I assume by humanity's survival, you mean jobs. Did Sam Altman then correct him and say, no, no, no, Senator? No, Sam Altman said, yes, of course, Senator, that's exactly what I meant.
00:11:55
Speaker
There is a deep thing here where people don't know their history, including me sometimes. God forbid, a young tech guy not knowing his history, it's a story as old as time: the young, ambitious guy who doesn't learn from history until he actually, you know, sits down.
00:12:12
Speaker
What is happening right now is the exact same thing that happened with Big Tobacco, or Big Asbestos, or Big Oil, or any of these industries. The exact same playbook is being run right now, where there are companies that think there is power and money to be had with a technology that has some form of externality or unaccounted risk.
00:12:36
Speaker
And they want to delay and deceive and just muddle any response to this. This is exactly what is happening right now. And all of the companies are doing this. They are being extremely strategic about what they say. I'm not saying they don't make mistakes. Sometimes they do say more than they're supposed to. I'm pretty sure Dario is probably not allowed to speak publicly very often for various reasons, because every time he does, he says things that incriminate him.
00:13:05
Speaker
And you know other people, you know other CEOs are better media trained in various ways and are very clever about how they phrase things or how they dilute things and so on.
00:13:16
Speaker
There is no way that a corporation actually wants to get regulated. That's just not how the world works, and that's not how any of these regulations work. They want self-regulation. They want voluntary commitments. They want to be the ones setting the technical standards. And what they definitely don't want is holistic regulation.
00:13:36
Speaker
Like, have you considered the considerations? We should first consider, you know, we don't want to act too hastily. No, no, of course not. All the evidence of lung cancer from smoking is not in yet. We should do more evaluations of lung cancer in smokers before we go to regulation. Let's start with some more evaluations, some more studies. Maybe we can create a committee. Can we have some White House committees, maybe, that will evaluate the evidence, the controversy, you see?
00:14:04
Speaker
This is the playbook which is called fear, uncertainty, and doubt, or FUD. The way FUD works is you don't directly counter the arguments of the critique. You don't directly engage with them, because you'd have to be good faith to do that, and that is hard and would just draw more attention to the people criticizing you.
00:14:25
Speaker
So instead, what you do is, if you benefit from the current status quo, if just continuing on the path you're currently on is beneficial to you, you just spread fear, uncertainty, and doubt. You spread ink. You just confuse everybody. You create a bunch of nonsense data. You talk about controversy, and we should take our time, and we don't want to be hasty. You just stall for time.
00:14:50
Speaker
And this is what's happening, to varying degrees. Sure, is one org maybe marginally better on some specific thing than some other org? Sure, maybe, whatever. Is Shell maybe marginally better than Exxon? Sure, maybe, I don't know. But fundamentally, what's happening is they're stalling for time.
00:15:07
Speaker
I'll give you one point here where I think the analogy might break down. So if you want to sell tobacco or you want to sell oil and you don't want to have to deal with the externalities of what you're producing, or the health risks of what you're producing, what's motivating you

Actors in the AGI Race

00:15:24
Speaker
is to earn money. And I think the AGI corporations are partly motivated by earning money, but there's also something above that, something more ideological, something about bringing in, you know, creating a beautiful world for mankind, basically. Do you buy that difference? How do you think that changes the incentives?
00:15:47
Speaker
Exactly. So this is what we also talk about in the Compendium. We have a chapter on this where we categorize five groups of people who are trying to build AGI for different reasons. It is very important to understand that the race to AGI cannot be understood in purely economic terms. This is exactly correct, as you say. If we analyze this like a hard-nosed Wall Street analyst, it doesn't actually make any sense.
00:16:10
Speaker
I mean, it does right now because there's a lot of hype, but before this, it didn't really make any sense. There's a bunch of other things that don't make sense here. There is a very deep ideological, even religious aspect to this. So I split the actors into five groups. Of course, there's a lot of overlap between the groups. It's not a perfect split, but it's useful for us to think of it this way. The first are the Utopists.
00:16:31
Speaker
So these are people who think they can build utopia, which historically, as we know, is always a really great sign when someone believes that. They believe that they must build AGI, because if they do it and they're the good guys, then they can use this to usher in utopia for humanity, and this will be great.
00:16:49
Speaker
So this is people, you know, groups such as Anthropic, many people at OpenAI, your Ilya Sutskevers and your Dario Amodeis, your effective altruists; lots of people fall into this group. The second group is Big Tech. These are ruthless corporations that just want power.
00:17:10
Speaker
They're not sure if they buy the whole utopia thing or not. Some of them might, some of them might not. But fundamentally, they buy power. They understand the concept of power, and they understand that AI means power, and they want more of it. So these are the ones that are now driving most of the actual resources of the race.
00:17:27
Speaker
Originally the race, quote unquote, was just some ideological utopists running small companies like DeepMind or OpenAI. But we have now phase-changed to where all of these utopist companies have been propped up by Big Tech: Anthropic by Amazon, DeepMind by Google, and OpenAI by Microsoft. So there is a
00:17:53
Speaker
strong symbiosis now, where these have merged into one entity to a certain degree. But I think Big Tech for the most part is far less ideological and far more power-driven, more practical, and also more dangerous for these reasons. The third group
00:18:08
Speaker
is the accelerationists. So there are some people, I mean, who are basically libertarians. They think technology is a morally good thing, like there's no way that technology can be bad. And therefore all technology must be good, it must be built, and anything that is against the building of new technology must be bad.
00:18:27
Speaker
And the fourth group is the zealots. This is a relatively small portion of people, but I think it's important to be aware that they do exist. These are people who not only think that humanity will be replaced by AI, but that this is a good thing.
00:18:41
Speaker
So they want humanity to be replaced by AI, either because AI is like a superior species, or because humanity is evil and deserves to die, or whatever. There are some pretty prominent people in this camp you might not expect, like Larry Page, the founder of Google, who, when Elon Musk said he doesn't want AI to eradicate humans, told Elon Musk he was being a speciesist.
00:19:08
Speaker
Which is, again, such a crazy fact that actually happened in reality. If you saw this in a movie, you would be like, that's goofy, that's not real, that didn't actually happen. But it did. It happened in real life. And there are, of course, other people in this camp as well, like Rich Sutton, a very famous AI scientist, or Schmidhuber, another famous AI scientist, that kind of thing. But it's a relatively small camp overall. Some schizophrenic people on Twitter, I guess. And then the fifth group is just opportunists.
00:19:36
Speaker
They come along whenever there's power and money to be had. Before this, they were on blockchain. Now they're here, and now they're along for the ride. So they don't really have any particular loyalty to AGI as a concept. They just want to make money and gain power, which, fair enough, I get it. A hustle is a hustle.
00:19:54
Speaker
So that's the landscape. Which group are you most afraid of? You mentioned the combination of Big Tech and Utopians. Is that the main driving force behind AI progress? This is the current main driver, correct. Before this, it was the Utopists. Now Big Tech are taking over. I also think Big Tech is kind of eating the Utopists, the classic Microsoft playbook of embrace, extend, extinguish,
00:20:18
Speaker
which is what we're kind of seeing, where Microsoft is kind of in a slow-motion divorce from OpenAI. They're trying to kick OpenAI out now that they have what they need. That's interesting. I heard that OpenAI was trying to get out of their Microsoft deal by invoking their AGI-achieved clause. I haven't confirmed that. Do you know about that?
00:20:38
Speaker
I've heard about this. It seems like a plausible rumor. I don't know. I think it goes both ways. I think there's some power struggle there that we don't understand. Who knows? But this is a classic Microsoft strategy. All the way back in the 90s, I think it was one of their emails that included an exact description of the strategy, where you would embrace a competitor or a new technology, extend it, and then extinguish it.
00:21:00
Speaker
And so I think the thing is that Microsoft is right behind on AI. They kind of got the OpenAI tech that they needed, that they wanted, and they've also acqui-hired a bunch of talent and so on to build their own AI efforts. So they're probably hoping to break from this and continue on their own path. Makes perfect sense. Honestly, props to the executives, some true House of Cards shit. Good job, guys. Impressive.
00:21:28
Speaker
Is there a chance that this ends up being a good thing? Maybe Big Tech, the profit-focused companies, don't want to take as much risk, maybe they don't really believe that superintelligence is an actually achievable goal. And so the idealists or the utopians get outweighed by more practical concerns, like launching a useful chatbot product.
00:21:54
Speaker
Right. So this would actually be a good thing if it happened, in this regard: Big Tech is more evil, but it's more predictable, which is a useful property to have if you want to regulate or prevent certain forms of externalities. It would be good, actually, if the Utopists were not in charge. I think the Utopists are the biggest problem, because they have a burning ideology that justifies anything, basically, as long as it allows them to get to AGI faster. This is always the problem with utopist ideologies. Utopia is so good
00:22:27
Speaker
that it justifies anything. We saw it with FTX and others like this. This is always the problem with consequentialist or utilitarian ideologies. This is usually the failure mode that happens. Not that all fall for this failure mode, but it's a very common one. Big Tech and these sociopathic corporations have different failure modes, which are predictable, and they're more competent in dangerous ways.
00:22:53
Speaker
So if AGI were far off, I think Big Tech taking over AI would be less dangerous. The problem is if AGI is not far off.

AGI Race and Global Risks

00:23:01
Speaker
If AGI is not far off, and Big Tech companies believe this, for example because the utopists convinced them of it, then you're in big trouble. Then you're in really, really big trouble. And you are even more in trouble if nation states get involved. So let's talk about Entente.
00:23:19
Speaker
So recently there has been an emergent faction, which is basically the same faction I talked about earlier. This is Anthropic, Open Philanthropy, some parts of RAND, the RAND Corporation, and a couple of others in this kind of area, who have recently been pushing what they themselves have called the Entente strategy.
00:23:39
Speaker
The Entente strategy is basically to try to gaslight and confuse and lie to the US government, saying that the US must race to AGI before China gets it. Let's be very clear about what their intentions here are: of course, they need to build it, they need to do it for the US, and they have found a trick for how to deceive or frighten the US military apparatus, hopefully in order to get it to race to the utopian AGI that they want to build. This is an escalation. This was not really a thing two years ago. So this is a recent form of escalation, where the utopists, in their dream to build utopia, have now gone from their own companies, to Big Tech, to going after the state, as the next way to escalate their dream of utopia.
00:24:32
Speaker
And this is extremely dangerous. This is extremely dangerous. And it's also wrong. This is a very important thing to understand. This is technically and strategically wrong. It's true that there is a race happening with China.
00:24:46
Speaker
But what people are missing, the problem, is that it's a race to the death, to the end. It's a nuclear war. You can't win a nuclear war. So this strategy is called Entente, as it was named by Dario and RAND. Funnily enough, there is a historical parallel to this. In the 1980s, Ronald Reagan announced his Strategic Defense Initiative, also called Star Wars,
00:25:13
Speaker
which was a huge government project to build anti-nuclear missile defense systems in space, basically, and to massively ramp up the American nuclear stockpile. This strategy was based on a lot of extremely controversial science that was not well-backed; most of the scientific community came out against the idea and thought it was completely unfeasible. It was even dangerous, because it could potentially provoke the Soviets into a nuclear war even faster,
00:25:45
Speaker
and so on, because if they were to succeed at building a perfect shield against nuclear weapons, then the Soviets might be incentivized to strike before the shield is complete, because, you know, use it or lose it. So this could potentially cause the exact nuclear war that they were warning about. But even more crazy than this, and I didn't know this until I recently started reading a bit of history about it, is that proponents of SDI literally argued that they could win a nuclear war and that we should.
00:26:11
Speaker
They literally argued that we build a perfect nuclear defense system and then we nuke the Soviets. And this was an actual thing that actual people suggested. And then, therefore, we would have perfect American military hegemony forever. And the thing that pushed against this was a coalition of scientists, civil groups, et cetera, called the détente. And the détente basically argued that, no, you can't win a nuclear war. For one, you're provoking the Soviets into nuclear war ahead of time, which is the same thing that's happening with the AGI race. If the Chinese actually think that we are getting close to AGI and that we will use this to destroy the Chinese state, what do you think they're going to do?
00:26:51
Speaker
I don't think there's a peaceful solution there. It's going to be ugly. And the second thing is that you can't win a nuclear war. Carl Sagan actually proposed, or validated, the theory of nuclear winter, and this was one of the decisive killing blows against SDI. As he showed, even if one were to nuke the Soviets or whatever, you still get nuclear winter and everyone loses.
00:27:15
Speaker
So there is no winning strategy. You can't win a nuclear war. Everyone loses. And the same thing is happening with AGI right now. You can't win an AGI arms race. The only winner is AGI.
00:27:27
Speaker
What do you think they would say in response to what you just said? They must be aware that, for example, China might begin to sense that the US is close to AGI and that their intelligence points to the US using that AGI to basically control the future of the world. And the CEOs of the AGI corporations are aware that China has nuclear weapons and so on. What's their response to what you just said?
00:27:54
Speaker
I mean, it depends on which one you ask. I think most of them have literally never thought about this. A lot of this is not well thought through, to be clear. Most of this is just not well thought through. Some of them think it through, but most don't. For most of them, it's just, well, we don't have a choice. This is the main counterargument. It's like, we don't have a choice; if we race to AGI and we die, well, not our fault, nothing we could have done. There is a strong self-imposed nihilism where people just decide that there is no third option.
00:28:24
Speaker
They decide that, well, China is racing, stopping them is ontologically impossible. So therefore, racing is the only ontologically acceptable solution. And if racing kills us, then we are already dead. And so there's no reason to even try.
00:28:41
Speaker
What's interesting about the China counterargument to AI risk is that it's, in some sense, the first thing that people go to when you talk about trying to perhaps slow down, perhaps pause the AI race that's going on. I mean, I don't think it's a bad argument. I think there's actually something to be very aware of there. It seems that the current philosophy of the Chinese government wouldn't be the right philosophy to rule the world forever, right? So we don't want that. And so even though it's kind of the first argument that people go to, I think it's still kind of a live argument. Do you agree that it's a live argument? Yes, but it's wrong.
00:29:20
Speaker
It's live, but it's wrong. It's just not true. This is very important to understand. The argument is false. The argument is invalid. It doesn't apply to our reality. The problem isn't that we know how to build a safe AGI and it's just a question of who gets to press the button. That's not the problem. The problem is AGI kills everybody. If you race to the bottom, if you build AGI as fast as possible, there will be no safety. There will be no alignment. There will be no American AGI. That's not what's going to happen.
00:29:49
Speaker
There's just going to be unaligned AGI. This is not a prisoner's dilemma. This is important to understand. This is not a prisoner's dilemma. People pretend this is a prisoner's dilemma. It's not. In a prisoner's dilemma, you always benefit from defection. So even if the other person has already defected, if you defect, you get a better outcome. This is not the case here. If China races to AGI, we die. If we then also race to AGI, we just die marginally faster.
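To make the payoff-structure contrast concrete, here is a minimal sketch; the payoff numbers are purely illustrative assumptions, not figures from the episode.

```python
# A minimal sketch of the payoff-structure point above. The payoff numbers are
# purely illustrative assumptions used to contrast the two games, not estimates.

# Classic prisoner's dilemma: whatever the other side does, defecting pays more for you.
pd = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# The structure described here for an AGI race: if either side races, both lose,
# and racing back only makes the outcome marginally worse ("die marginally faster").
race = {
    ("hold", "hold"): (3, 3),
    ("hold", "race"): (-10, -10),
    ("race", "hold"): (-10, -10),
    ("race", "race"): (-11, -11),
}

def best_response(game, my_moves, their_move):
    # pick the move that maximizes my payoff (first element) given their move
    return max(my_moves, key=lambda mine: game[(mine, their_move)][0])

print(best_response(pd,   ["cooperate", "defect"], "defect"))  # -> "defect": defection dominates
print(best_response(race, ["hold", "race"],        "race"))    # -> "hold": racing back does not help
```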
00:30:14
Speaker
This doesn't help. This does not improve the situation. This is a very deep thing. This is very important to understand. If you read, for example, Leopold Aschenbrenner, he is making exactly this point. He makes this long argument about how AGI is coming, which is nicely argued and nicely put together and whatever. And then he just kind of says, in like one sentence, for alignment, you know, we'll muddle through. That's his whole argument. There's no argument there. He's just like, well, whatever, we have to race.
00:30:42
Speaker
There's no justification. If you actually read the thing, he sets up all these arguments about AGI coming, but he doesn't actually make any argument that justifies racing as the correct strategy. He doesn't justify this, because he doesn't show that alignment will get solved. It's not there. You can read it. It's not there. There is no reason to believe that if we race to AGI, we will get safe AGI that does what we want. In fact, there are overwhelming reasons to believe that that is not what's going to happen.

Complexity and Risks of AI Development

00:31:11
Speaker
So the whole premise is flawed. It's just wrong. One of the points you and your co-authors make in the Compendium is that AI progress over the last decade, say, has been driven mostly by more resources, more data, more compute, more talent, more investment, and not equally by deep research breakthroughs.
00:31:36
Speaker
Is this the cause of us not understanding AI deeply enough to make it safe? Is it because we have grown these systems, as you write, as opposed to building them, that we don't understand them and therefore don't know how to make them safe? This is a huge problem related to this. I think there are many strange things about AI,
00:32:02
Speaker
and one of the core strange things about AI, as you say, is that it's grown, not written. It's not that you write, line by line, the code of what your AI does. It's more like you have a huge pile of data and you grow a program on top of this data that solves your problem, which is a very strange way to do programming, but it works empirically.
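A minimal sketch of that "written versus grown" contrast, using a toy task and numbers that are my own illustrative assumptions, not anything from the episode:

```python
# "Written": the behavior is spelled out line by line and can be read directly.
def double_written(x: float) -> float:
    return 2.0 * x  # the rule is explicit and inspectable

# "Grown": the behavior is induced from a pile of data; we only see the
# learned parameter afterwards, not a human-readable rule.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # examples of the desired behavior

w = 0.0                      # start with an arbitrary "program"
for _ in range(200):         # gradient descent grows the program to fit the data
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad

def double_grown(x: float) -> float:
    return w * x             # works empirically, only because the data shaped w

print(double_written(5.0), double_grown(5.0))  # ~10.0 and ~10.0
```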
00:32:22
Speaker
But we don't really understand these programs that get generated. They do strange things all the time that we don't understand or that we don't know how to control. And we have no way of predicting what they will be capable of before they're made, in various ways. This is very curious, because this is kind of different
00:32:40
Speaker
from how capabilities tend to work in normal software. In normal software, as you add more features, as you make it more powerful, as you make your system capable of more things, you build up complexity. Your program gets more and more complex, and it gets harder and harder to manage. And usually, eventually, you will reach some limit of what complexity you or your organization can handle, and then your program just freezes in what it's capable of doing, or it collapses and just becomes unmaintainable. This is the normal life cycle for software. And then you have to start over, or break it into multiple parts, or you're just stuck. There are many companies, massive corporations, whose whole job is just
00:33:20
Speaker
parasitically living off one huge, unmaintainable code base that no one can actually do things with. This is a very common thing in software. To give a bit of flavor to this, there's a very funny story from a guy on Hacker News, which is a news website,
00:33:38
Speaker
who talks about how he used to work at Oracle. So Oracle is a legacy software company. They build huge, baroque software for Fortune 500 companies and stuff. And one of the products they sell is the Oracle database, which is just a database. It's not that different from other databases.
00:33:55
Speaker
And he talks about how the code base of the Oracle database is just one of the worst things known to man. It's like 20 million lines of poorly documented, complicated code where everything interacts with everything. No one knows how it works. It all relies on literally thousands of flags that get turned on and off and all interact with each other in ways that no one understands and that are not documented properly.
00:34:22
Speaker
So whenever you need to change anything about the code or add a new feature, you change a couple of lines of code and then you have to run literally millions of tests on their huge cluster. And this takes days, like three days, to run all these tests. And you always break thousands of them with every change you make. So then you have to go through each and every one of these tests that you broke and just fiddle with all the flags
00:34:46
Speaker
until you have the right magic combination of flags and bug fixes and special cases or whatever so that all the tests pass. And you add another flag for your specific, weird, bizarre edge case, and then you submit the code and it gets merged months later.
00:35:03
Speaker
This is a terrible way to build software. This is just truly, truly a terrible way to build software, both practically speaking and from a safety perspective. I can tell you with confidence that the Oracle database contains horrific security flaws. I can tell you this because there's no way to get them out of there. That thing is too complex. There are some horrific security flaws in there that have not yet been found. I am 100% certain of this, and there's no way for Oracle to get rid of them.
00:35:31
Speaker
There is no way Oracle could take this code base and pragmatically find all the bugs. It just can't be done. It's too hard. It's too complex. And this is how most software is. And the punchline, of course, is that AI is developed like this too, but worse.
00:35:46
Speaker
With AI, you don't even have a code base. You have a neural network. There's no code for you to test, and there are no tests. Sometimes we run evals, but evals are not tests. It's not like we take apart all the individual functions of the AI and test that each of them is correct. No, it's all one big neural network. So evals are just trial and error, brute force; you just see if the numbers go up or down and whether they look good.
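A minimal sketch of the difference between a unit test and an eval; the model_answer function and the sample prompts below are hypothetical stand-ins invented for illustration, not any real system:

```python
# Unit test: checks one specific, fully specified behavior; it either passes or fails.
def add(a: int, b: int) -> int:
    return a + b

assert add(2, 2) == 4  # the behavior under test is exactly pinned down

# Eval: score a black-box model on a batch of prompts and watch an aggregate number.
def model_answer(prompt: str) -> str:
    # hypothetical stand-in for calling a trained model
    return "4" if "2 + 2" in prompt else "unsure"

eval_set = [("What is 2 + 2?", "4"), ("What is 3 + 5?", "8")]
score = sum(model_answer(q).strip() == a for q, a in eval_set) / len(eval_set)
print(f"accuracy: {score:.0%}")  # the number can go up while unknown failure modes remain
```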
00:36:13
Speaker
This is a terrible way to design software. But the crazy thing is that AI allows you to build software in these terrible, terrible, terrible ways while still making it very powerful.
00:36:24
Speaker
This is where it's different from traditional software. With the Oracle database, you can't add crazy new features, at least not easily, because it's just too complex. You just run into a bottleneck. With GPT-4, well, just throw more data at it. Bro, just get more GPUs, bro. Just add more complexity, more patterns. Just put more in there. Who cares? Literally, who cares? Just put more shit in there and it gets better.
00:36:46
Speaker
So we have this kind of bizarre, worst-of-both-worlds situation, where AI software is built in a way that, from a cybersecurity perspective, makes you go, holy shit, there's no possible way to make this safe. There's no way you could have a system like this that does not have bugs. It can't be done. If you think it can, you're crazy. And at the same time, you can make it extremely powerful.
00:37:15
Speaker
Because in the past, software would at least plateau in its power, so to speak, because it would just get too complex and start breaking. But AI, to a large degree, doesn't have this property. So we can build extremely powerful systems that can do extremely powerful things while having the most complex bugs known to man. Bugs that are so hard to understand, so hard to debug, so hard to even fathom or find, that it's practically impossible.
00:37:45
Speaker
And the reason this is so important is, let's be very clear here, when people talk about stuff like, oh, America will build good AGI that makes the good future. Let's be very clear what they're saying here. They're saying we're going to solve all of moral philosophy.
00:38:00
Speaker
All the problems our institutions, our governments, our states, our militaries are trying to solve, all political problems, all resource allocation problems, scientific problems, interpersonal problems, social problems, all of these problems will be solved using software, and there will be no bugs. That's what they're saying. That's insane.
00:38:23
Speaker
I agree that there must be many bugs, bugs different from how bugs would appear in traditional software, more like a failure, you could say, in the weights of GPT-4, for example. But these are not so consequential as to prevent OpenAI from offering a useful product.
00:38:42
Speaker
So what is it that's going to change in the future? OpenAI has offered this chat product for a number of years now. Nothing has gone fantastically wrong yet. So what is it you expect to change in the future such that the bugs, you could say, in our models become more problematic?
00:39:01
Speaker
To be clear, many things have gone fantastically wrong. People keep saying this, and it's not true. Social media is now insane, even more so than before. This is a massive externality on humanity and its epistemology. This is a huge, huge blow to humanity. This is a massive cost,
00:39:21
Speaker
in that every time I go on social media, there is no way for me to know whether this eloquent person responding to my arguments is a person or not. This is a massive cost that was imposed unilaterally on humanity. It is now much harder to build coalitions online. It is much harder to understand things. Scams have become much more sophisticated, much more mass-producible; deepfakes, political manipulation, all the kinds of shit that's been happening, also in third-world countries, where genocides and so on have been pushed by these kinds of things. Let's be very, very clear here. I agree that there is a difference and AGI is a huge problem, but the claim that AI has not had huge consequences is just wrong. There is a massive unpaid externality. Our boomer parents are being driven insane
00:40:08
Speaker
by social media and AI-generated slop. Artists are being eviscerated alive. All these various entertainment businesses are falling apart. This shared human tradition of art and beauty is falling apart. These are massive costs.
00:40:25
Speaker
They're not deadly, we're not going to die from them, but they are huge. They are huge, huge costs that we are paying, that society is paying, and there's no way to undo it. So we could say it's worth it. You can make the argument: sure, maybe it supercharges propaganda and psyops and marketing and all this other manipulation stuff, but that's fine because ChatGPT is that good. Fair. That's an argument you could make.
00:40:54
Speaker
But it's not free. This was not free. But to get back to your actual question. Your actual question was, what changes? I was actually talking to another AI researcher about this recently, who kind of pointed out, well, you're concerned about AGI and these superintelligence risks and so on, but look at ChatGPT, it's really smart, and we don't have this problem. And the thing is, basically, ChatGPT is very smart,
00:41:21
Speaker
and way smarter than people think it is. And so are Claude and other AI systems. But they're not literally human-level, because I can't just replace a senior engineer or a senior manager with ChatGPT with no modifications yet. So my argument is that there are some things that are still missing. I can put a human in as the CEO of a company and it works; you can't currently do that with ChatGPT. ChatGPT can help. It can do many of the tasks that a CEO would have to do. Many of the tasks a CEO does can now be automated by ChatGPT, but it's not a hundred percent yet.
00:42:01
Speaker
Very importantly, I don't think intelligence is something magical. It's not a discrete property where you have some magic algorithm and you add the magic math and then it becomes intelligent, or otherwise it's not. There is a very common thing where people say it's not true intelligence, it's not true planning, it's not true whatever, et cetera. We argue this in the Compendium as well. I think this is pseudoscientific and just really bad reasoning. Intelligence is made of smaller parts. Whatever intelligence, general intelligence, human intelligence is, it's made of a bunch of smaller tasks. And the way I run my company as a CEO is composed of smaller tasks. There are many, many smaller things I need to do. I need to talk to people, I need to reason about people's mental states, I need to reason about the economy, I need to reason about products, I need to make decisions, I need to communicate, I need to write things. But all of these things are
00:42:55
Speaker
tasks; they can be understood, they can be automated. And ChatGPT can do a lot of the tasks I do day to day. It can't quite do all of them yet, but I expect, as we get to more and more general systems with longer and longer memories and coherency and so on, that slowly but surely it will eventually be all of them. And once you have a system that is stable and long-lasting enough to completely replace a human,
00:43:24
Speaker
You have a system that can do AI research. You have a system that can improve AI further. And you also have a system that you can instantly copy onto millions of GPUs, a system that never gets tired, that never gets bored, that has read every book ever written, that runs you know tens or hundreds or thousands of times faster than any researcher in the world. you know So the moment you have a system, one system,
00:43:49
Speaker
that is as smart as a human, that goes from 90% to 100% to 110% or whatever. Once you have such a system, you could instantly scale it up a thousand or a million X, and you immediately will have something that is vastly smarter than humanity. And it can improve itself, it can get more power, it can develop new technology, and this will be a complete step change, and it can happen extremely quickly.
00:44:13
Speaker
So for me, AGI is defined as a system that can do everything a human could do at a computer. Once we have one such system, we will get ASI, artificial superintelligence, which I define as a system that is smarter than all of humanity, in short order, like I think six months or less.
00:44:31
Speaker
Just because you would have a bunch of, call them virtual researchers, doing a bunch of additional AI research and quickly making progress. And so I take your point here to be that the bugs that show up in these agents of the future, that can reason, that can plan, that can think long-term, those bugs will be much more consequential, because now you have a bunch of virtual people spread out in all institutions quite quickly. Is that your explanation? If I told you that OpenAI just announced their virtual humans program, where they are going to release 80 billion virtual humans onto the net next week, would you feel good about this?
00:45:13
Speaker
Could we even guarantee they wouldn't immediately declare war on us, or steal our resources, or who knows what they would do? And it's going to be worse than that. These are not going to be humans. These are going to be weird alien machines that have all these weird bugs that we don't understand, that can have extremely weird behavior, that are being built by for-profit corporations to make them profit, to be clear. These are being built by sociopathic entities to maximize sociopathic functions, profit, and you will have massive amounts of them that expand extremely rapidly, that you can scale up extremely quickly. If one of them learns something, you can immediately send it to all the copies. If one human figures out something, that doesn't mean humanity knows it. With AI, it's different. If one AI figures something out, all AIs can know it in a second.
00:46:05
Speaker
Even if the system itself is not massively smarter than a human to start, it will get all of these superpowers for free. It will be able to copy itself, it will be able to run at faster speed, it will have access to vast memory banks, it will never get bored, it will never get tired, it doesn't have emotions, it doesn't have all these things. If I just had a guy who works 24/7, is as smart as John von Neumann, never sleeps, never gets tired, and never gets distracted, that is already by far the smartest thing on the planet. You could even trade off speed for raw intelligence, where if a machine or an AI could work incredibly quickly, that could make up for it being kind of dumber than us in a sense.
00:46:52
Speaker
I would expect speed to also be a very important factor. And as you mentioned, not getting tired, not getting distracted. I mean, when we as humans try to achieve a task, there's a lot of superfluous activity and anxiety and all kinds of things that distract us. But these models would not face those same hurdles. So they'll probably go very directly and very ruthlessly to the goal they have in mind.
00:47:21
Speaker
One question here is, when I was 18, I considered getting a driver's license. And I decided against it, because I saw all of the progress that was being made in self-driving cars. This is a while ago now. And so I concluded that we seemed to be so close to having full self-driving. And you could say, okay, we seem to be getting quite close to human level in these chatbot models we have right now.
00:47:50
Speaker
If we can just fix things, or perhaps unhobble the models, or add some other features, make them talk to themselves and so on, we can get to agents. Is there a similar problem to self-driving, where getting close to being reliable enough to be useful is useless in a sense, where the last percent or so of reliability makes the entire difference for whether you can deploy an agent usefully in the workforce? I think there is some of this, but you can also think of things differently. If you're on an exponential, let's say it doubles every two years, right? Just hypothetically. Now let's say you've been on this exponential for a hundred years or something, and it's gone from zero to like three or something over this time. You can pick an exponential that started low enough that it moved slowly enough to get to this. And now you say, wow, we spent 100 years and we only got three units of progress.
00:48:54
Speaker
So if we want to get to 100 units of progress, this is going to take literally 3,000 years. And this would seem like a reasonable argument to make. But the true answer is that it will go 3, 6, 12, 24, 48, and you're at 100 six or twelve years later. Although this is for the raw intelligence of the models, I'm guessing, right, where we see this kind of progress. But do we have good measures of agency in AI models?
00:49:24
Speaker
There's this important thing, right? Agency. What does that mean? Exactly. It doesn't mean anything. It's pseudoscientific. It's not a real concept. It doesn't refer to anything in reality. It's just a word that people come up with, whether it's agency or planning or consciousness or something. These don't refer to actual physical or algorithmic properties.
00:49:48
Speaker
You know, some people use them to refer to specific things. And if you're referring to a specific algorithmic property, great, happy to talk about that. But I expect it's more of a vibe. Yeah, maybe it's a vibe. But we can put that word to the side and then talk about, perhaps, the ability to achieve a goal over a longer time horizon, for example. Yeah, and we're getting exponentially better at that.
00:50:10
Speaker
Yeah, maybe we are. So your point is, whenever we try to specify what we mean by agency, we see that it dissolves into something measurable, and that which is measurable we're improving at.
00:50:26
Speaker
Exactly. This is my fundamental thesis on intelligence: if you actually go very, very deep into any of these concepts that seem to be missing, they dissolve. Fundamentally, it's all just information processing. It's all just patterns. It's all just smaller tasks. And we are getting better at all of them. Some of them more than others, and so on. And some of them are harder than others, and they will take a bit longer.
00:50:50
Speaker
But if you're on an exponential, it doesn't matter. 50% of the progress happens in the last unit of time. If you're at 50% and you're like, wow, this is going to take quite a while, we're only 50% of the way there, you're wrong. It's only one unit of time away. So if we're at 50% of, you know, hypothetical last-mile driving, well, it means it's only going to take one more year or two more years to fix. I'm not saying this is literally what's going to happen, but this is the kind of way you should be thinking about this.
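A minimal sketch of that doubling arithmetic; the doubling period, starting level, and target are assumed here purely for illustration:

```python
# Walk an assumed exponential from 3 "units of progress" toward a target of 100,
# doubling every two years, and show how late most of the progress arrives.
level = 3.0          # current progress after a long, slow-looking run-up
target = 100.0       # hypothetical "done" level
doubling_years = 2.0

years = 0.0
while level < target:
    level *= 2
    years += doubling_years
    print(f"year {years:>4.0f}: progress ~{level:.0f}")

# The final doubling alone covers half the total distance: going from 50% of the
# target to 100% takes just one doubling period, however slow the earlier climb felt.
```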
00:51:18
Speaker
And also, with self-driving cars specifically, there's a bunch of other stuff, like regulation. Waymo does have self-driving cars, and they do work. They do exist. In specific cities though, right? Yeah, but San Francisco is a terrible city to drive in. Have you driven in San Francisco before? That's not an easy city to drive in.
00:51:38
Speaker
I agree Phoenix is an easy city to drive in, but San Francisco is not. So sure, we can argue it's still not perfect, it still makes mistakes. Sure, of course, whatever. But also, they only have, what,
00:51:53
Speaker
a GPU or two in there, right? And what we're also seeing right now with robotics is this massive revolution in foundation models for robotics. I remember two or three years ago, people were saying robotics is impossible, deep learning can't do it. And now we just take language models, train them on data, and then ask them to do things, and they do. If anyone doubts this, just go to the DeepMind blog, look at the last three papers they published on robotics, and have your mind blown. It's literally: we asked the robot to do something and it did.
00:52:32
Speaker
Yeah, it's actually amazing to look into this. I wouldn't have expected it to work out this way, where you have a vision model looking at a scene, writing down what it sees. You can then give instructions in natural language, in English, and say, you know, pick up the apple. Of course, there's a bunch of very advanced stuff going on underneath, but it's just interesting that it could be that simple, in a sense. Okay. In the Compendium, you call for an actual science of intelligence, and I think this is what we're debating or talking about right now. What we're missing is a developed science of what intelligence is, how it works, how it can be grown.
00:53:21
Speaker
What do you think such a science would look like? I obviously don't know. This is an extremely hard question. I have some suspicions. I have a suspicion that it will look more like computational learning theory and less like computational complexity theory, but with a lot of computational complexity theory too. That difference, you've got to explain that difference. Do I?
00:53:46
Speaker
Okay, I'll say a couple of interesting things, and then I won't go into it too much, because it's just schizo stuff. There is something wrong with PSPACE.
00:53:58
Speaker
What I mean by this is: everyone knows about P versus NP. There are problems in a class called P, polynomial time, and there's another class called NP, non-deterministic polynomial time. It seems like these two should be different, but we haven't been able to prove it. That's the famous example in computer science, right? But a lot of people don't know that we also can't prove that P and PSPACE are different,
00:54:20
Speaker
which is insane. PSPACE means you have a polynomial amount of memory but unlimited time. And we can't prove that there are things computable in PSPACE that are not computable in polynomial time. This is insane. What this tells me is that we've got something deeply wrong about computation. Something about how we think about math, computation, intelligence is wrong.
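[For reference, the textbook picture being pointed at here: the standard complexity classes are known to nest as

    $\mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PSPACE} \subseteq \mathsf{EXPTIME}$

and the time hierarchy theorem gives $\mathsf{P} \subsetneq \mathsf{EXPTIME}$, so at least one of those three inclusions must be strict, but neither $\mathsf{P} \neq \mathsf{NP}$ nor $\mathsf{P} \neq \mathsf{PSPACE}$ has been proven.]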
00:54:45
Speaker
I don't know what it is. There's something about the unit of work. What is a unit of math? If I do one unit of math, what is that unit? There's a little bit of this in what's called algorithmic information theory,
00:55:04
Speaker
or at least I think it was algorithmic information theory; I may be getting that wrong, and the comments will yell at me if I did. You can prove that for a sorting algorithm there is a minimum complexity: there's no way to make a sorting algorithm that does better than n log n, or something like this. This is the only result of this type that I'm aware of. We can't prove this for almost anything else. But intuitively,
00:55:26
Speaker
what a science of intelligence would look like is something like: to be able to drive a car, you must be at least this intelligent. That's what it would look like. And the closest thing we have to this is that kind of result, which proves that if you want to sort something, your algorithm must be at least this complex. That's the kind of thing you would be looking for. I don't think we can even do this yet, because I think our math is wrong. There's just something
00:55:51
Speaker
super deeply wrong about how we're thinking about mathematics and computation, such that we are just deeply confused about intelligence. Intelligence is super, super hard, and we're already getting computation wrong. I expect that once we're less confused about what computation is and what learning is, then it will become clearer what intelligence is, how to think about it, and how to formalize a statement such as: in order to drive a car, you must be at least this smart. I think we are very far away from this.
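[The result being gestured at above is usually stated as the comparison-sorting lower bound from classical algorithm analysis rather than algorithmic information theory: a comparison-based sort must distinguish all $n!$ possible orderings of its input, and each comparison yields at most one bit, so in the worst case it needs at least

    $\log_2(n!) = \Theta(n \log n)$

comparisons. It is exactly the kind of "you cannot do this task with less than this much work" statement a science of intelligence would want for something like driving a car.]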
00:56:19
Speaker
This would take generations of mathematicians and computer scientists. I think this is probably as hard as, or harder than, P versus NP to get right. I do think we can make a lot of progress on empirical or approximate theories that are not strictly true, but I also think that's very, very hard, and it's something that a surprisingly small number of people are working on, to my understanding.
00:56:45
Speaker
How would you categorize scaling laws? Would that be a kind of engineering of intelligence rather than a science of intelligence? I think it's more like alchemy. Alchemy of intelligence, okay. So we're not even at the stage where we have the engineering textbooks and some kind of equations for how our models behave.
00:57:06
Speaker
The scaling laws are much closer to naturalism. It's more like Victorians looking at bugs and finding that there's a symmetry between different bug sizes. There is a pattern there, right? It might be an interesting pattern, it might not be. But the way that, say, Victorian naturalists categorized animals into species turned out to be quite wrong once we figured out genetics and found that there's a deeper pattern
00:57:32
Speaker
in nature. They didn't get everything wrong. When they said, wow, lions and tigers sure look similar, they kind of look like cats, probably related, they were right. I'm not saying naturalists were stupid, the same way I'm not saying that someone who does empirical work on AI is stupid. I'm just saying there are deeper patterns that you can't find through naturalism.
00:57:57
Speaker
No, I mean, the scaling laws are a kind of great insight. It's just a question of what they mean in practice. What is it that you can predict? It's not exact capabilities. You can't predict anything, basically. You can predict a specific approximate number, which is the loss. But how the loss relates to any capabilities we actually care about, like "you need this loss to be able to drive a car", which would be the kind of result we actually want to have,
00:58:25
Speaker
is not even in the same universe. Research into scaling laws will not get us to an answer there. You would have to come at this from a completely different perspective.

Impact on Labor Markets and Alignment Challenges

00:58:32
Speaker
We would have to come at it from a computational learning or algorithmic information theory perspective if we want to answer, or even formalize, a question like this. And scaling laws are not even trying to do this. This is an important thing to understand. It's not that they're trying to do this and failing; they're not even trying. It has a different goal. The goal of scaling laws is, one, to create hype and raise money, and two, to empirically let you predict: if I put in this amount of money or effort, how should I split my resources to get, all things equal, the best outcome? Which is a fair thing. That's what a lot of alchemists did. If you read alchemical texts,
00:59:08
Speaker
a lot of them are like: if you want sulfur, here are the things you mix, and then it makes sulfur. And careful, don't heat it too hot or it will go badly, and things like this.
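[For reference, a neural scaling law in the published literature typically takes the form of a power-law fit of training loss against model size $N$ and data size $D$, roughly

    $L(N, D) \approx E + \dfrac{A}{N^{\alpha}} + \dfrac{B}{D^{\beta}}$

with $E$, $A$, $B$, $\alpha$, $\beta$ fitted empirically (the symbols follow the Chinchilla-style parameterization; treat them as placeholders, not real fitted values). The formula predicts loss and only loss: nothing in it says which downstream capabilities appear at which loss, which is the gap being described above.]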
00:59:19
Speaker
We talked earlier about how it could be a good thing if big tech takes over from the utopian founders of the AGI corporations, if timelines were long. You don't seem to believe that; you're worried that timelines might be quite short. Does this have to do specifically with agents helping us do AI research? Is that a crucial piece of timelines being short for you?
00:59:48
Speaker
It's neither necessary nor sufficient, but it is a part of it. It is a part of why I expect things to happen, but there are worlds in which that's not the crucial piece. For me, I am a stupid ape, you know, I have few neurons, so I try to keep my model simple. I look at intelligence. I look at what it's made of. It's made of all these little parts, and these are the parts that
01:00:14
Speaker
systems keep getting better at. I look around in the world, and more and more jobs get automated by these things. They talk better and better. And I'm like, well... Talk to me about automation, because when I interview economists, one thing they tend to agree on is that this automation, if it's happening, is not showing up in the official numbers yet. So when you say you see automation, what is it that you see?
01:00:39
Speaker
Here's an example of something that doesn't show up in GDP numbers: Wikipedia. Wikipedia is one of the greatest triumphs of humanity ever. It's one of the most valuable things ever created by humanity, period. Wikipedia is more valuable than almost any other artifact ever created by man.
01:00:59
Speaker
It's on the shortlist of things you would take with you if you were to travel into space and could only take a few things. You could show me a massive dam or a huge building that cost 10 or 100 billion dollars to build, and that's much less valuable than Wikipedia. Wikipedia is obviously worth more than that one building, from a pure value perspective. And Wikipedia contributes zero to GDP.
01:01:24
Speaker
It is not part of the GDP measure. It does not show up. It does not change things. As far as economics is concerned, nothing of value has been created here. I think a lot of people are confused about what GDP measures, and about what things like total factor productivity and so on actually measure.
01:01:39
Speaker
I'm not saying they're bad measurements. Again, naturalism: I think a lot of economics is naturalism, and that's fine. I'm not saying this as a critique. If you know what you're measuring, why you're measuring it, its limitations, and in what scenarios it breaks down, that's fine. It's very useful to measure GDP. GDP is a very, very useful number to know, and it helps you reason about a lot of useful things. But you also have to understand that it is not reality.
01:02:07
Speaker
It is a very flawed metric that doesn't measure everything. Okay, think of the amount of cognition, of writing, that ChatGPT has performed since its inception. Let's say you put an hourly human wage on that. How much do you think that would affect GDP? Do you think it would affect it a lot?
01:02:27
Speaker
Yeah, I mean, just looking at my own usage of these models, it's a lot. It would probably destroy the economy ten times over, right? It would probably be as big as or larger than the entire economy if you had actual humans typing out those words. Of course, the economist will say, that's not what we mean by that; of course it reduces the price. And I'm like, all right. But then you see how this is misleading. So you're saying if I add more writing labor than the entire economy to an economy, it doesn't change.
01:02:55
Speaker
And I'm saying, okay, you can define your metric that way, but that's not necessarily intuitive to people. That is how the metric works, though. If I invent something that adds 10,000 times more of a certain type of labor, but makes that labor extremely cheap, GDP doesn't change.
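[A toy illustration of that claim, with invented numbers: GDP counts market value, price times quantity, so

    $1{,}000 \text{ documents} \times \$500 = \$500{,}000$ and $10{,}000{,}000 \text{ documents} \times \$0.05 = \$500{,}000$.

Ten thousand times the output at one ten-thousandth of the price leaves the measured contribution unchanged.]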
01:03:13
Speaker
And this is very unintuitive to people. The same thing goes for slop: the negative externalities of AI slop, the garbage and misinformation, are very high, but they're not measured in GDP.
01:03:26
Speaker
If I measured the amount of distraction or confusion or annoyance that's actually created, if I measured it in hours or minutes and then converted that into working hours, attention, and so on, I expect the impact would be humongous.
01:03:45
Speaker
It would be absolutely massive, but it's not measured. It's not part of GDP. So from the economy's perspective, nothing changed. That's GDP. What about employment? We aren't seeing a massive rise in unemployment numbers either. So this points in the direction of: perhaps there's some automation happening, but people are finding other jobs, or doing other tasks to make up for it. Do you think that's true, or do you think there's also something wrong with the way we're measuring employment?
01:04:13
Speaker
I think employment is a good example of a number where what it measures is relatively intuitive. GDP, I feel, is quite an unintuitive thing if you haven't thought about it. GDP is optimized to measure things that don't change, and this is very unintuitive to people. People think that if a great change happens, GDP must change. But it's actually the opposite: GDP changes little when you have big changes in prices and so on.
01:04:41
Speaker
Yeah, I know you just made this point, but we can think about the availability of entertainment or movies, which has increased a lot since before the internet, right? But this isn't really showing up, because you're paying a Netflix subscription now. Perhaps this is even decreasing GDP numbers, because you're not renting as many movies at Blockbuster as your parents did, maybe.
01:05:04
Speaker
Exactly. It's another great example of this effect. Employment is a bit stricter. Not entirely, there are some problems there as well, but it's much, much clearer. So I think there is a real thing there. One thing I think is that jobs are sticky, in the economic sense: jobs are not liquid in the same way that many other things in the market are. So it just literally takes time.
01:05:25
Speaker
Jobs have a lot of transaction costs, a lot of frictions. There are a lot of social aspects to them as well. And the AI is just not quite AGI yet. I expect that even if we froze AI as it is today and waited 50 years, things would be different.
01:05:40
Speaker
I think if people integrate these things more, new companies get created that don't have the baggage of previous companies. Another thing is that a lot of employment is not about skills. It's not even about labor. It's about responsibility. I'm not saying that lawyers or doctors don't do skilled labor; they do. But one of the reasons they get paid high salaries is that they're responsible. You're paying them to be at fault if something goes wrong.
01:06:07
Speaker
This is a lot of what we pay people to do. We pay people lots of money, often, to be responsible, to be the part of a chain that owns something. Currently, at least in our legal systems, AIs cannot do this. This is a type of labor that AI cannot perform, not because they're intellectually incapable of doing it, but because the social contract does not allow machines or AIs to be responsible for things or to own things.
01:06:32
Speaker
I expect that if we had a different culture, one that did allow AIs to own things or be responsible for things, it would be very different. Yeah, and perhaps this is why we could see lawyers being rewarded by AI progress, earning much more money, at least in the short term, because now they can write, whatever, a hundred legal briefs or legal documents in the time it previously took to write one.
01:06:57
Speaker
Yeah, I agree with this. This is actually a hot take I have as well. I think paralegals are really screwed, because I think paralegals mostly do skilled labor, and I think the value of skilled labor is going to zero. It's going to zero? Yeah, it's going to zero, obviously. But the thing with skilled labor is that lawyers do skilled labor too.
01:07:15
Speaker
But one of the main things they do is provide authority. They provide responsibility; they take responsibility for things. And, you know, commoditize your complement: if the bottleneck to value becomes authority or responsibility, and you have infinite skilled labor, then the scarce resource of responsibility can become extremely valuable.
01:07:37
Speaker
Yeah, I mean, this is something that could perhaps be the case for other industries as well. I have heard people talking about entry-level programmers also being negatively affected by AI progress, because you can now have a model produce a draft of code that's pretty good, and then you have the senior programmer review it, again, as you talked about, with the authority to approve and actually change the code. Do you think this is an effect we're going to see across the economy, that entry-level jobs face a lot of competition from these models? My prediction is that skilled labor
01:08:16
Speaker
that can be performed on a computer is under threat. Things that are based on authority or responsibility will, at least in the short term, benefit and become more valuable rather than less valuable. And I expect all of this will be completely irrelevant once AGI arrives.
01:08:31
Speaker
Yeah. When we're talking about jobs, one prediction I've heard is that we will move into more social jobs, or call them perhaps pseudo-jobs. Take my job as an example: if you talked to my farmer ancestors, they wouldn't consider this an actual job; it's more of a social function, or something you might do for fun.
01:08:55
Speaker
Setting aside issues of existential risk from AI, if we have an economy that is automated to a high degree, could we end up in roles where we function based on our relationships to other people, based on historical factors, based on being the owners of various entities and assets, and so on?
01:09:23
Speaker
Unfortunately, the question you just asked is just nonsensical if we ignore x-risk. There is no way to answer this question without thinking about AGI; otherwise, you're just not talking about actual reality. You could ask the question, for example: assuming we align AGI,
01:09:42
Speaker
what will I be doing? Will I have a job? Or will something else happen? That is a question you could ask. I'm not saying it's the question you did ask, but it's a question you could ask, and a good one, though I'm not saying I have a good answer to it. But saying "assume AGI doesn't happen, will there be jobs or whatever" is just not how the world works. That's just not what's going to happen. Yeah, you're saying we can't get to extremely advanced AI without facing the problem of not having solved the alignment issue.
01:10:11
Speaker
Yes, definitely. The alignment problem is: how do you causally connect your desires, values, wishes, whatever, to what actually happens in reality? And if reality is mediated through the power of an AGI or ASI system, which it will be by definition once an ASI system exists, then the causal control of the future lies in the ASI.
01:10:35
Speaker
Humanity is no longer relevant. The only way that humanity still has a causal connection to how the future goes is if we have a causal connection from our values, through the ASI, to the future. It's the big filter: either your human values get through the ASI or they don't.
01:10:54
Speaker
If they don't get through, the ASI does whatever it does. So if you have solved the problem of how to get your human values causally through an ASI, making it act in ways that instantiate the things we want instantiated in the future, you have solved the alignment problem. That is the alignment problem. There's no way to get around this. And in the Compendium, you write about how this is humanity's most complex technical challenge, or at least one of the most complex technical problems we face.
01:11:24
Speaker
Why is it so complex? Why can't we take some of the advances OpenAI has made in, for example, making ChatGPT talk to you in a nicer way, not using certain words and so on? Why isn't that progress towards alignment? Why isn't this an incremental problem that we're making steady progress on?
01:11:52
Speaker
So there are a couple of questions there worth disentangling a bit. One question is: why is alignment hard? Another is: how hard is it? Another is: how expensive is it? Which is a related but different question. Another is: are the things being done progress towards alignment? And another is: why isn't alignment an iterative problem? Is it an iterative problem?
01:12:14
Speaker
Yeah, those are actually several different questions. I'm going to reshuffle them a bit and answer them in a slightly different order. So I'm going to start with the iterative question: is alignment an iterative problem? And if not, why not? Most things are iterative, so why is this not? Well, alignment could be an iterative problem.
01:12:36
Speaker
There is a way to solve alignment iteratively, but it sure as fuck doesn't look like "let's build the biggest model we can as fast as possible before China does, and just yeet it out onto the world." That's not how you iterate. If it were the case that every single time a new language model or AI system is built, everyone said, all right, shut down all the GPUs until we've understood every single neuron, until we have a formal theory of how everything works, until we've solved all the bugs, and then, after we've understood every single part, we build the next AI, then okay, fair enough. That would work. I think that would work. I'm not saying it's realistic, but
01:13:14
Speaker
that would be an example of how to address alignment iteratively: actually take small steps. What's happening right now is not that we are taking small steps. So when people talk about iterative alignment, it's a misnomer; it's just not true. They're not saying "we will take the smallest step possible that is safe." They're saying "we'll take the largest step possible that I think I can get away with,"
01:13:38
Speaker
and take those steps as often and as fast as possible. And this is not a good outcome, because again, ASI is the filter. If you build ASI and it is not controlled, it's game over.
01:13:51
Speaker
Simple as that. Humanity no longer has any causal effect on the future. Whether we die immediately or hang around for a little bit, who knows? But fundamentally, humanity just has nothing to say anymore. We're just chimps in a zoo. There's nothing for us to do, and probably the AI stops feeding us. So it's over. So when you are working on a technology which could blow you up,
01:14:19
Speaker
if you want to actually succeed, you have to be very, very sure that every experiment you do will not blow you up. Otherwise, you blow up. It's that simple. Literally that simple. If you are making explosives, you have to be very sure that every step of your process doesn't blow up. Otherwise, you don't make explosives, you die. It's that easy. So that's why it's not being done as an iterative problem. Not because it couldn't be one, but because people are deciding to approach it like: how can I mix chemicals as fast as possible to get to the biggest explosive I can?
01:14:57
Speaker
If you do this, you die. It's not because this couldn't hypothetically work. You can actually make explosives safely; that is a thing that can be done. But it's not done by mixing as fast as possible to get the biggest explosive. That's just not how the world works. Now, there is some work being done on AI safety and, allegedly, alignment, like getting chatbots to say what we want. And this is all cute, but it's alchemy.
01:15:24
Speaker
If you look at the actual things being done, it's not science, it's not engineering. Let me give you an example of something that is closer to real alignment work: fixing a bug in the C++ compiler. This is much closer to alignment than most of the work being done at OpenAI. The reason is that when you're building a compiler, you are trying to build a system that causally connects the will of the user to the will of a machine.
01:15:51
Speaker
This is literally what you're doing. You're building a thing where humans speak a language and confer a will, which is then absorbed by the machine and causes the machine to causally do, in reality, the thing the human asked for.
01:16:05
Speaker
This is much closer to actual alignment than most of the work being done at these labs. Hot take. Most of the work at these labs is of the form: my AI does something I don't like, I hit it with a stick until it stops doing that. And then once it looks like it's not doing it anymore,
01:16:25
Speaker
I give myself a pat on the back and publish a paper about it. This is cute. I think it makes a better product. To be clear, I use ChatGPT and Claude. I mean, Claude's better than ChatGPT, Claude gang rise up. And they're good products. Claude is stupid sometimes, or makes mistakes, or says things I don't like, but who cares? I don't need it to be a hundred percent reliable to provide me value. Fine. But if we build a system with the level of reliability Claude has, and we give it causal, physical access to reality, plus superintelligence, that's not fine. That's a much, much higher-risk scenario.
01:17:01
Speaker
You can have a bit of a stupid, funny, goofy thing if it's an entertainment product or something like that. You can't have that level of reliability when we're talking about nuclear control, government control, the military, real-world things like that, which is what ASI will be; ASI will be systems like that. The level of reliability you need when dealing with an ASI scenario is just exponentially higher than the reliability you need for current systems. It's like the rocket problem: to get to the moon, you don't build bigger and bigger ladders. That will never get you to the moon. Structurally, a ladder cannot get you to the moon; it can't be done. You need to invent a rocket.
01:17:40
Speaker
And a rocket is a way more complicated and way more confusing thing. If you went back to the Middle Ages and told people that in the future we went to the moon, and they asked how, and you said we used a special ladder, they'd say: all right, yep, seems reasonable, I guess. Maybe there's some special metal in the future that's indestructible, so yeah, okay, that seems possible. If you said we did it with a rocket, they'd say: what are you talking about? That word doesn't even exist in our language.
01:18:03
Speaker
I don't even know what that means. Okay, some people in the Middle Ages would have known gunpowder, but for most people that word just doesn't translate; they wouldn't know what you're talking about. The same thing applies to AI: most of what we're doing right now is building ladders. We're hitting things with sticks,
01:18:18
Speaker
bigger and bigger sticks, to be clear, and big sticks have economic value. But this will not structurally get you to a point where you actually understand the internal computation of these systems, and where they're actually safe at a level where you would entrust them with, say, ASI-level control over reality.

Reliability and Control of ASI

01:18:38
Speaker
So the difference, the hurdle we can't get over, is that we can't provide good enough feedback to more and more advanced models. Is that the main issue? That is another issue. This is where the thing defeats itself in its own theory. Though I think it fails even before then, to be clear. I think it's structurally kind of nonsensical.
01:19:00
Speaker
The most common form of these iterative alignment proposals is something like: if we give the machines the right feedback about which things are good or bad, we can steer them towards the directions that we want. I already reject this premise. It's already not true. Have you heard of the concept of lying? I'm not sure if these people have just never met another human. Oh, if I just give a human the right feedback, they will do an arbitrary thing I ask for and never deceive me? Have you met another intelligence? Even a chimp, a dog?
01:19:35
Speaker
Dogs know how to lie, you know, if they can get more treats out of you, and you can just give them the right feedback. They'll just do what gets them more treats. Maybe your wife got up earlier and fed the dog, and then he's going to ask you for even more food when you wake up.
01:19:49
Speaker
Is the dog unaligned? There's a deep thing here where I already reject the premise: feedback by itself is not a sufficient mechanism to guarantee the kind of high level of reliability that you require. I think it is enough for, you know, dog-level alignment.
01:20:07
Speaker
Well, probably not even that, but for ChatGPT-level alignment, where it says the nice thing most of the time and that's mostly fine, whatever, yeah, for that it's fine. But it is just not sufficient. So the second premise is that this would be sufficient for an ASI,
01:20:30
Speaker
that if we have an ASI aligned the way ChatGPT is aligned, that's fine. And this is just obviously, blatantly not true to me. What are you talking about? This is a much more complex scenario, where much more is on the line, with way more edge cases. It's like the thing I was saying earlier: you're saying you want to solve all of the problems that all of our institutions throughout all of history have tried to solve. You want to solve all of moral philosophy, all of economics, all of science,
01:20:58
Speaker
and you want to solve all these problems using software with a vast number of bugs. And I'm like, no, you're not. That's not how reality works. If you are trying to solve a problem that is that hard, you need an unbelievably higher level of reliability. And this is one of my core points: I think people just think ASI is going to be like ChatGPT but a bit cleverer, and that is just not what we're talking about. We're talking more like: imagine the US government was sentient and smarter than everyone on the planet. That's what we're talking about. That's what an ASI would look like. Yeah, it's not a more advanced chatbot. It's a much more advanced kind of system that can solve tasks in many domains, many more domains than our chatbots today.
01:21:49
Speaker
Imagine if the entire US government, every single person in it, was out to get you. For our schizophrenic listeners: they're not out to get you, don't worry, this is just a hypothetical, you're safe. But imagine for a second that the entire US government was out to get you and screw you over in every way possible. Could you defend against this?
01:22:08
Speaker
No, you couldn't. You'd be completely overwhelmed. They would come from left, right, all possible angles. You would be outmatched, outclassed. You would be tricked, deceived, brutalized. There would be no possible defense against a thing like that. And even now, we have the US government, which is sort of aligned, sometimes, and it still does heinously evil things for no good reason. Recently there was that story about the squirrel that got killed. I don't think anyone really wanted to kill the squirrel. I don't think any individual person was going, haha, I sure love killing lovely little squirrels. But it happened anyway, because the system's a fucking mess. Because it has bugs. Because it's misaligned. The US government does stupid things that are not in its interest and that hurt its citizens, constantly. Not because they're evil. Sure, there are some evil people involved, but it's mostly just
01:23:04
Speaker
bugs. It's mostly just that the system is buggy. There are many things in it that are stupid and poorly designed and don't do what they're supposed to do. Now imagine that, but sentient and a billion times smarter. That's a problem.
01:23:20
Speaker
When we're thinking about AGI or superintelligence, we shouldn't imagine a ChatGPT that outputs genius-level text for us. So what should we imagine? What's your mainline scenario, what do you envision happening here? The next step might be agents, but after that, what comes next?
01:23:42
Speaker
There are at least two questions here. The two questions I see are: what will happen, or how will an AGI work? And second: what will we see? I think those are very different questions, because I don't think what we see will be what actually happens. I expect that AGI takeoff will be so complex, so distributed, so confusing, that no individual person will actually see everything that is happening and understand what they're seeing.
01:24:12
Speaker
I expect that for us, when AGI takeoff happens, it will mostly be very confusing. A lot of weird shit happens that we can't quite explain: weird behaviors, social media gets confusing, strange political things seem to be happening, the market does some crazy things or doesn't, new technology starts popping up in weird areas and we don't really know what made it. And eventually, most people won't even realize anything is wrong until one day you just stop existing, or something.
01:24:49
Speaker
This is the mainline of what I expect will happen. Things will just get confusing. They will get more complex. Things will move so quickly and change so quickly. Like, do you know what is happening in Ukraine right now?
01:25:02
Speaker
Not to any level of detail, no. And that's the thing. There is no method by which you could acquire actual knowledge of what's really happening in Ukraine. There's no way. It's too complex. There's too much going on. The signals are too conflicting. There is too much propaganda. There's no method by which you could reliably gain true information about what is truly happening there.
01:25:25
Speaker
Actually, my best guess at how to acquire true information is to read the Wikipedia article. That is actually what I do. That's my attempt, and I'm certain the comments will tell us all about how terrible an idea that is and all the problems it has. But I agree. I think Wikipedia is genuinely one of our greatest epistemological victories. I also trust Wikipedia more than I trust most sources, not that Wikipedia is perfect; I'm well aware of many shortcomings. But that also shows us what our level of epistemic defense is. If you're actually dealing with a coordinated adversary that is extremely smart, smarter than you, and distributed, it can just make you believe anything it wants. So I expect AGI takeoff to mostly be confusing.
01:26:12
Speaker
I don't think it's going to be epic, or even necessarily scary. I think it will mostly be just very confusing and really weird. People will be upset and there will be shit happening, but no one will be sure what's happening or who's behind it. Maybe there will be a couple of people who get it right by coincidence, who will by coincidence be screaming into the void about what's happening, and they will be drowned out
01:26:36
Speaker
by all the other bullshit happening. And then no one will be able to coordinate, no one will be able to figure out what's happening, until it's too late. This is my mainline prediction of what will happen. Yeah, that's quite depressing. I mean, if we're all confused, then we won't even be able to agree on what's going on or how to respond to it. There isn't really anything

Information, Truth, and Epistemic Improvement

01:26:57
Speaker
to respond to, because we don't have a shared understanding of what's happening.
01:27:03
Speaker
This is exactly what's happening. I mean, right now, the US is a day before the election. I have people saying things on my Twitter feed, good, normal, sane people, where I'm just like: these words are just not connected to reality. This is mental illness. And not just because I disagree with them politically, to be clear. Whether or not it's true, it's an insane thing to say.
01:27:27
Speaker
This is not a thing a normal person would say; this is what a person having a mental health crisis would be saying. The emotions being expressed, the words being said, how they're being said, the viciousness, the fighting: this is how people in a mental health crisis act. This is how schizophrenic people act. And this is pre-AGI.
01:27:46
Speaker
Yeah. Just to push back on this point: we can talk about our epistemic crisis and being overwhelmed by information and so on. On the other hand, we also have much, much more information than we had 50, 100, or 200 years ago, and people back then were somehow able to navigate the world. I think more information has helped us navigate the world better, even today, I would say.
01:28:11
Speaker
Governments, businesses, and ordinary citizens have much more information. I can't really tell you what's happening in Ukraine at any interesting level of detail, but I can get you the price of Bitcoin or the price of Apple stock quite reliably. Don't you think that having access to more information is also leading to better decision-making?
01:28:37
Speaker
I'm so glad you brought this up, because I think this is a great opportunity to address some common misconceptions about reality. So Yuval Harari, an author I really like, recently wrote a book called Nexus. I love the first half; it's some of my favorite writing I've read in a long time. The second half I found a bit weaker, but the first half is extremely good, because he makes a point I already knew and believed, but he lays it out so much more nicely than I've heard anyone else lay it out.
01:29:07
Speaker
Which is that, fundamentally, information is not truth. These are two different things. They are just not the same thing. More information does not mean more truth. It is often the opposite. Historically speaking, more access to information usually did not mean more true things. The common example that people always bring up is the printing press.
01:29:31
Speaker
People always say: what about the printing press? The printing press brought science, it brought the scientific revolution, and so on. And this is just not true. I don't know if people have just never read a history book, but this is historically false. The printing press came 200 years before the scientific revolution. The direct result of the printing press was the witch burnings.
01:29:51
Speaker
After the printing press was invented, Copernicus's revolutionary theory about the heliocentric model didn't even sell out its first edition of a few hundred copies. Meanwhile, the Malleus Maleficarum, the Hexenhammer, sold gangbusters. It was a huge tome describing a conspiracy theory about a satanic, pedophilic cabal that controls the whole world and is trying to take away your children and eat them. It was the prototype for every modern conspiracy theory, and it sold thousands, tens of thousands of copies.
01:30:26
Speaker
The main effects of the printing press were the witch burnings and the Thirty Years' War. Those were the direct outcomes of the printing press, not the scientific revolution. That story is just historically inaccurate. Now, was the printing press an ingredient of the scientific revolution? Yes, of course. Being able to reproduce scientific information was an important, useful logistical component of the scientific revolution. But if you want to pick one proximate cause of the scientific revolution, it was probably the founding of the Royal Society in London. That's a much closer proximate cause, and it was a social innovation, an institutional innovation. It was an innovation where you were allowed to criticize others, you were allowed to question truth, and you were expected to.
01:31:15
Speaker
This wasn't the only culture in history that had this norm, but the Royal Society is a very clear example of an institution that had this peer review concept, this idea of criticizing ideas, that nothing is taken on faith, and so on. This is a much more proximate cause of the increase in truth versus noise than more information. More information made people less correct. It made people more wrong.
01:31:42
Speaker
And to a large degree, can you say social media is different? Can you say that the proximate result of social media is more social harmony, cohesion, and democracy? This is not obviously true. Remember the Arab Spring. But that's a slightly different point, right? Social media hasn't resulted in more social cohesion, for example; I don't think that's the claim. The point is that maybe it has resulted in more information being available,
01:32:07
Speaker
and then the question is whether people can use that information productively to develop accurate models of the world. But this is like me bringing you a bunch of toxic sludge and saying, look, here's more biomass. Look, I brought you food, right? Food is made of biomass. Here's some biomass. Are you happy?
01:32:28
Speaker
What are some systems we could set up today, then, to help us navigate the world, specifically as it relates to navigating this dangerous period of developing advanced AI?
01:32:40
Speaker
Well, there's the specific question of how we address AI risk, and the more general question of how we build better epistemic norms, better institutions, and so on. They're closely related. The main difference is that for AI specifically, we're running out of time. So the first thing we need to do is buy more time. If we ever get into the scenario where an ASI exists that is not deeply controlled and aligned with humanity, it's game over.
01:33:07
Speaker
So the first and primary policy objective must always be to never get into that situation, by whatever means are most effective at preventing it from coming about. My colleagues over at Control AI wrote a very nice document called The Narrow Path, which is a set of policy principles describing exactly this. Listeners can scroll back in the feed to hear my interview with Andrea about this document.
01:33:33
Speaker
Exactly. I'm sure Andrea did a fantastic job of explaining these better than I could. But basically, these are the proximate, direct policies, or types of policies, that would need to be implemented to actually, you know, not die from AGI in the next couple of years. This is the kind of thing we need to do. The Narrow Path also talks about flourishing: okay, once we're not dying, what do we do then?
01:33:58
Speaker
I think there's a lot to be said there as well. How do we build better science? How do we build more just systems? How do we causally connect what people want, and what makes people happy, with the future? This is the fundamental problem of statecraft. The fundamental question of statecraft is: how do we build superhuman systems, systems that are larger than humans,
01:34:18
Speaker
that causally connect what the citizens and the people actually want with the outcomes that the state, or the larger system, produces? This is a very hard problem, but it is something we have learned a lot about. We know a lot more about how to build good states. We know a lot more about epistemology. We know a lot more about many of these things. We didn't have game theory until the 1950s.
01:34:41
Speaker
It's crazy to me that we are still running states based on constitutions and philosophies from Montesquieu and so on, invented before game theory existed. That's crazy. We haven't even updated this, so I think there are many low-hanging fruits
01:35:01
Speaker
that are mostly blocked by coordination, as always. And building institutions is painful because you have to deal with people, God forbid; you have to actually work with people and talk to them and build alliances and so on. There are many things that can be done, but they are hard. There are no easy solutions, but they can be done.
01:35:20
Speaker
Previously, you seemed to connect this kind of epistemic crisis to our inability to solve problems around AI risk. But those are not necessarily connected, and perhaps the first step here isn't to rebuild our sense-making tools from the ground up; instead, we try to address AI risk directly, simply because we don't have time. Am I understanding you correctly? Yeah, look, if we had infinite time, I think it would be fantastic if we could spend three generations of our greatest mathematicians solving the fundamental epistemological problems of mathematics. That's what we would do if we were not a stupid civilization.
01:36:01
Speaker
If we were an actually wise civilization, what we would do is solve the deep problems of mathematics, of philosophy, of spirituality, of religion. There are deep things, things that are always taboo to talk about among technical people, but there are deep things about human psyches that religion and spirituality do talk about, that are pre-formalized, that are not necessarily physically correct or whatever, but there is something there that is very, very important to people, that is a deep part of being human, and it's just completely neglected and not solved. How to make people happy and feel fulfilled and spiritually connected is a problem that needs to be solved. This is a deep thing. You can't build a good society without addressing
01:36:55
Speaker
problems like this, but also questions like: how do you get different religions to coexist peacefully? How can you create a net-positive environment where different spiritual traditions or groups can coexist in a mutually beneficial world? These are deep, deep questions, and they are far from being solved. They are very, very hard problems that I think are solvable to some degree. Whether moral philosophy can be solved is a bit up for debate, but I think there are at least a lot of improvements that can be made here. And yes, we should take the time; at least we should spend our Sundays thinking about these kinds of problems, and I do. But I don't expect we will be able to solve all of them before the deadline, since the deadline is currently in the next couple of years.
01:37:42
Speaker
If you look at

Open-source AI and Innovation

01:37:43
Speaker
OpenAI, Anthropic, and DeepMind, those are the companies that are currently the front-runners in the AGI race. And if we are very close to AGI, do you expect those companies, or one of them, to be the one that actually gets there?
01:38:01
Speaker
Seems quite plausible. It's kind of the default case. I wouldn't be surprised, honestly, if it was an open-source thing. I think there's been some work in the open-source world which surprised me. Which work specifically? Well, wouldn't you like to know? Okay, you don't want to say what it is publicly, to avoid drawing more attention to it. Makes sense to me. Yeah, I hadn't actually been thinking of open-source AI as competitive at the cutting edge.
01:38:31
Speaker
The main thing with open source is that it is way more high-variance. Because of the way it's distributed, it takes way more crazy bets. So if you need a crazy thing to work to get to AGI, it seems more likely to happen in open source at the moment than in one of the labs, because the labs are de-risking. As you gain scale, you also have to de-risk. You have to do fewer crazy things because there's more on the line.
01:38:58
Speaker
Now, my mainline, to be clear, is that the de-risking strategy is the thing that gets us to AGI: we just scale further, we make more data, we do normal kinds of engineering R&D, and it doesn't take anything crazy. But if I'm wrong, and it does take a crazy thing:
01:39:14
Speaker
I do think there are crazy algorithms that have not yet been discovered, ones that are 10, a hundred, a thousand, or a million times more efficient than deep learning. I'm pretty sure that's the case. And if those get discovered, they're going to get discovered by some malnourished grad student or the open-source movement, probably not by one of the big companies.
01:39:32
Speaker
Yeah, all right. But my main reason for not expecting open source to be at the cutting edge is just that I don't think Meta will fund increasingly, insanely expensive training runs. I think those training runs will be done by DeepMind, OpenAI, and Anthropic, supported by each of their big tech funders. I'm not sure Meta's investors will let them spend the money. I don't know. Do you think that's true?
01:40:00
Speaker
I don't know. I don't really know. I mean, so far it seems to be working out really well for Meta; their stock price is doing fantastic, so the bet seems to be paying off for them. But that being said, intelligence isn't magic. I think you could probably get AGI with the same amount of compute as GPT-3, if you knew how to do it; you could probably do it with even less than that. I think the lower bound on human or superhuman intelligence is something like
01:40:23
Speaker
a 4090 or less. If you had the perfect algorithm, you could probably even do it on a CPU. You'd probably need a decent amount of memory, but little enough that you could fit it in a modern laptop. There's no law or theorem that I know of, again because we don't have a science of intelligence, that says you can't get human-level AGI on a single M1 MacBook.
01:40:56
Speaker
Maybe there's some limitation, but it's not obvious to me. I don't think we're going to get there before we get AGI; I think the first AGI we make is going to be super clunky and take a billion times more resources than optimally necessary. But if we were to stumble upon some of these big breakthroughs, it's not obvious to me that you need Llama 4, 5, 6, 7 to make it work. I think it makes it easier.
01:41:20
Speaker
I think having Llama 3 makes it easier to build AGI than having Llama 2. I think we're already past the event horizon in this regard. I think it's already possible to build AGI with just the current tools that exist, if you knew how to do it. I'm not saying I know how to do it, but I expect that a dedicated group of hackers with a couple of 4090s, a couple of MacBooks, and Llama 3, if they got very lucky, could already do it.
01:41:50
Speaker
But that's not the most likely scenario. The most likely is that one of the AGI corporations develops this by running a gigantic training run. Yeah, running a gigantic training run, and also building scaffolding around the models: agent scaffolding, et cetera. I often joke that AGI is going to be GPT plus a thousand lines of Python.
01:42:13
Speaker
Yeah. All right. How effective are these techniques, do you think? When we talk about unhobbling or scaffolding, all the things we do after we have finished the training run, what does that give us in terms of capabilities?
01:42:28
Speaker
I think that's the difference between AGI and a chatbot, basically. I think a lot of scaffolding at the moment is done in stupid and predictable ways that will get solved, so there's lots of low-hanging fruit. People are just doing it quite poorly, in dumb ways, and it's a large design space, right? We've only been doing it for a couple of years. We've only had non-stupid models for a couple of years. I don't think GPT-2 could get to AGI, no matter how hard you poke it, probably.
01:42:58
Speaker
But GPT-3? Probably not. GPT-4? Yeah, I think if you had a GPT-4-level model and you knew how to do scaffolding correctly, you could get to AGI. There's a lot of latent intelligence in the models that is not being used right now. I don't know if you've ever seen someone who's really good at prompting. I have. I have observed that, and it is insane. They can make models do things you just would not believe, things that seem like they shouldn't be possible. So again, there's this thing where intelligence is composed of smaller components. And if you actually zoom down on these smaller, dissolved components, they're not that big. They're not that complicated.
01:43:36
Speaker
GPT-3 can already do formal reasoning, right? You can already teach it to do math. That's already a lot of the way there. I'm not saying it's all the way there, but it's a lot of the way there. And then they can learn new facts, they can pattern match, they can do metaphor and logic and whatever. I just don't think intelligence is made out of that many things.
01:44:03
Speaker
There are things, and you need to get them right, but it's more of an engineering problem. It's like if I showed you a lump of uranium and you said, wow, this is not dangerous at all, look how far this is from fission. And I'm like, yeah, but now it's just an engineering problem.
01:44:19
Speaker
Do you think using inference-time compute will play a larger role in future capabilities? This is what's happening whenever you type into ChatGPT and it says "thinking": that's inference-time compute. What do you think? I'm not explaining that to you, I'm explaining it to listeners.
01:44:37
Speaker
Yeah, I mean, it seems obvious. Humans do a lot of inference. So it seems logical that if you don't do it at inference time, you have to do it at training time. And then you have to pre-bake it, which is just more expensive. Why is it more expensive? Because you have to account for every possibility. You have to lay out exactly what could be asked. Yeah, imagine the difference between writing down every possible sorted array versus writing a sorting algorithm.
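A tiny illustration of that analogy (purely illustrative): pre-baking answers at training time means enumerating every case up front, while inference-time compute means running a short procedure when the question actually arrives.

```python
from itertools import permutations

# "Pre-baked" approach: tabulate the sorted answer for every possible input.
# Even over 6 distinct values and inputs of length <= 3, the table has 156 entries,
# and it grows combinatorially from there.
def build_lookup_table(values, max_len=3):
    return {perm: tuple(sorted(perm))
            for n in range(1, max_len + 1)
            for perm in permutations(values, n)}

# "Inference-time" approach: a small computation on demand covers every case.
def sort_on_demand(xs):
    return sorted(xs)

table = build_lookup_table(range(6))
print(len(table))                 # 156
print(table[(5, 2, 4)])           # (2, 4, 5), looked up from the pre-baked table
print(sort_on_demand([5, 2, 4]))  # [2, 4, 5], computed when asked
```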
01:45:06
Speaker
Okay, Connor, is there anything we haven't touched upon that you want to say to our listeners? That's a good question.

Engagement and Discourse for AI Development

01:45:12
Speaker
I mean, we could talk a lot about politics and institution design and all these kinds of things. I guess the thing I would really like to do is speak directly to listeners a little bit about, well, what do we do? What do we do about this? I haven't justified a lot of this extra stuff in this talk. I hope, dear listener, that you either buy the hypothesis or you go read the Compendium and read why I think this is a problem.
01:45:41
Speaker
I will link that, and listeners can also scroll back to some of our previous episodes in which you do spend a bunch of time justifying AI risk, or many of the other episodes we've been doing over the last couple of years.
01:45:53
Speaker
So I'm not going to justify that further. I'm going to assume you're bought into this, you're bought into the things I've been saying. What do we do? What are things that you, the listener, might be able to do? I think I have a slightly different opinion here, or at least a different way of thinking about it, than a lot of other people do.
01:46:12
Speaker
When you're confronted with a problem as massive as AI extinction, it feels very natural that the response must be massive. There's something so huge, you must do something big. You must drop everything and work on this full time, yell at all your friends about it, make it your identity, sacrifice everything, whatever, right? And I'm here to tell you that that is not the right thing to do. For one, this is usually very counterproductive, because the main thing that will happen if you attempt this in most circumstances is that you burn out, and that is not going to help anybody.
01:46:57
Speaker
And now you can say, oh, but I should do it, it's the right thing to do, I won't burn out. I'm like, bro, listen to yourself. This is just not how the real world works. The actual things that need to be done to address AI risk are mostly extremely boring. This is very important to understand. Most of the actual work that needs to happen if we want the future to go well is very, very boring. It is not epic or exciting.
01:47:26
Speaker
Crusading on social media, solving this cool math problem, building this big machine, all of that is nice and fun and seductive, but it's mostly not what we need to do. Most of reality is very boring. Most of power is very boring.
01:47:47
Speaker
Maybe the most powerful entity in the world, the US government, is mostly extremely boring. It's just unbelievably boring. It's bureaucracy and petty politics and writing things down and having long meetings, et cetera, et cetera. And when people see this, they might assume there is some exciting part they're not seeing, the part they see on TV, because TV tries to present these things as exciting and meme-able or whatever.
01:48:18
Speaker
Or you might think it's defective, that if it were working properly, it wouldn't be this way. The truth is that most complex things are made of simple parts. Most big actions are made of simple things. Most of coding is not coming up with some brilliant new algorithm that will be named after you and win you an award. Most of it is trying to fucking connect to the stupid web server and copying code from Stack Overflow. 99% of your coding work is going to be rehashing things that people much smarter than you have already solved better in the past. That's just how it is. The reason I'm harping on this so strongly, and I see this as an aesthetic point, is that
01:49:02
Speaker
there's a lot of boring work that needs to happen. When I talk to policymakers, to the general public, including extremely powerful people, billionaires and so on, it is utterly shocking how many of them have just never heard the arguments. Literally no one told them. It's not that they heard the arguments, thought about them carefully, and rejected them. That happens sometimes. But most of the time,
01:49:25
Speaker
literally no one sat them down and just politely and patiently explained things to them. I've had meetings with high-level politicians where I'll sit down for an hour or something, they'll ask me a bunch of questions about AI or whatever, and I'll answer all the questions. I remember, in particular,
01:49:42
Speaker
one politician I talked to, where at the end of the meeting he asked me some complicated question, I answered it, and he looked at me, kind of like, wow, you're really answering my questions. He was shocked, because I was not trying to sell him something, I was not trying to make him do anything, I was just taking the time to explain things to him. And he was so thankful for this, so genuinely thankful, that someone just took the time out of their day to give him information to help him. That's how it is. Not everyone, of course, there are corrupt politicians, evil people, blah, blah, blah. But most people in most institutions, in most bureaucracies, are just normal people. They're just trying to get by. They're overworked, they're overstressed, they don't know what to do. There are all these things pulling them in different directions. And we need a lot of patience, a lot of the work of just explaining things very carefully, over and over, to different people, of helping them do the things they need to do, and so on.
01:50:41
Speaker
So what do we do? The kind of thing that I and you and all of us need to do is have patience and actually explain things to people. We need to actually reach out to politicians and be like, hey,
01:50:57
Speaker
what are you doing about this problem? I'm very concerned. Here's this Compendium thing you can read about it. Or I'd be happy to come over and tell you more about this. Or here's my friend who works at an AI startup who would be happy to explain this to you.
01:51:12
Speaker
I know a guy who just emails all of his local senators and House members. Every week, he sends them a cold email with a summary of what's happened in AI that week. He's gotten three meetings out of this, because they were genuinely thankful. They're like, oh, thanks for sending this. Hey, can I ask you a question about this? That's great. This is awesome.
01:51:39
Speaker
If we could get two people in every US state to do this, to just send their politicians a very polite, helpful email of, hey, I'm concerned about this, I thought you might want to know about it. It sounds like nothing, right? It seems so small.
01:51:58
Speaker
But these are the kinds of small things that large coordination, that large institutions, are built of. These are the kinds of small actions. And I need people to help do stuff like this. Read the Compendium, talk to policymakers, talk to your friends, talk about this on social media, because this is a civilizational problem. This is not a problem you or I can solve. This is a problem we have to solve as a group,
01:52:25
Speaker
as a species, as a civilization. And the way we do that is we need to talk. Gus, you were saying earlier that your ancestors wouldn't consider your job to be a real job. I disagree.
01:52:38
Speaker
I think they would have understood. If I told them, oh, he's a messenger, or he's a diplomat, they would have been like, oh, of course. Yeah, of course. That's very important. The kingdoms must know the information. That's great of him that he's doing that. I'm glad he's extracting all the information and bringing it to all the people. What a great job.
01:52:56
Speaker
Our ancestors would have actually understood the concept of a podcast. They wouldn't understand microphones or the internet, sure, but the idea of someone who finds interesting people who know interesting things, extracts the information, and brings it to people, that's labor. That's obviously labor. Maybe the information is not useful, maybe it's just amusing, whatever, but that's labor. Moving information around civilization, around society,
01:53:21
Speaker
is work, the same way moving earth or moving materials is labor. Moving information around the graph is labor, and this is labor that needs to happen. This is a very, very important thing. Yeah, I think this is very important: we need to move this information. It needs to be replicated, it needs to be spoken, it needs to be talked about, because then people can reason about it.
01:53:42
Speaker
If something is uncool and weird, you can't think about it. You can't talk about it. If you're a politician and you keep talking about something weird, you lose your job. If something is not weird, if it's an important issue that everyone keeps emailing you about, then you can talk about it. Then you're allowed to think about it. There's another thing you said earlier, which was how more information allows you to know more things. You could know the Bitcoin price, right? This is true. But will you decide to know the Bitcoin price?
01:54:12
Speaker
The scarce resource right now is not information, it's attention. You could know many things. You could know so many things. Will you? You could read every book in history. Will you? No, you can't. You have a limited budget of attention for what you can know. So the difficulty is no longer that there are few things to know. The problem is now that you must choose what to know. You must choose what to think about, what to process in your mind, what to put into your brain and actually think about.
01:54:45
Speaker
Politicians and all other busy people have this huge problem where there is so much vying for their attention. So a lot of what you have to do as a citizen is help these connector nodes, your politicians and your influencers and even just your local friend group, draw their attention to the thing they need to pay attention to. You need to help them do this. This is a process. And then once they can reason about it, once it's less weird and more normal, they can spend more attention on it.
01:55:20
Speaker
They can form better opinions. They can ask better questions. They can find other people to work with. They can go ask questions and build coalitions. This is the kind of stuff that we need to do.
01:55:31
Speaker
We need large coalitions of people across the spectrum, not just tech people. I assume a lot of tech people listen to this podcast, but if you're a non-tech person, you're the kind of person I want helping with this. Whether you come from civil society, from an NGO background, from a union, or you're just a day-to-day normal person interested in this topic, whether it's academics or faith groups,
01:55:59
Speaker
all of this affects everybody. AGI, AI, is a thing that affects everybody. It affects your job, it affects your family, it affects your church, it affects everything. You should think about this. I am trying to make the case to you that you should give this a little bit of your attention. Your attention is extremely valuable. You have your family to attend to, you have your life to attend to. There are many things that are valuable for you to attend to.
01:56:24
Speaker
I'm making the case: please attend to this as well. Put a little bit of your attention on this. Not a lot. I think it's important, but don't put your whole life on this. Put 10%, 5%, 1% of your attention, a couple of hours a week, one hour a week, two hours a week
01:56:39
Speaker
of your attention into thinking about this. What do you think about this? How does it make you feel? Who do you trust here, or not? What questions do you have? How do you get those questions answered? The thing I really recommend to people is to literally open a Google Doc and start writing. What is my plan? What do I think about AI? What are my questions? Where am I uncertain? How do I get to the thing I want? Who should I ask? Who should I work with? This sounds stupid, I'm sorry, it sounds so stupid. It doesn't sound stupid. It sounds like valuable advice, I think. Maybe it doesn't sound prestigious, but it doesn't sound stupid to me. And I think that's an important distinction that you kind of sketched out yourself.
01:57:21
Speaker
I think you're right about this. If it works, it ain't stupid. And this works. This works. If you, dear listener, actually want to have a causal effect on the future going well, no joke, if you actually want to have a causal effect, the thing I think you should do
01:57:37
Speaker
is you go to your computer, you open an empty Google Doc, you start a bullet point, and you start writing out what you want the future to be. What do you want it to look like? And you start asking, well, what do I need to do to get there? What's currently broken? What do I need to do? And then you start to notice: what questions do you have? Where are you uncertain? What resources do you currently not have access to? Who do you need to reach out to? What actions can you take?
01:57:58
Speaker
And don't break yourself over this. Put an hour of work into it, don't do more. Maybe once a week. Have a call with someone. Send me or someone else an email and just ask questions. A lot of people are nice, including a lot of famous people, who are very nice. You can just ask them questions. Go on social media.
01:58:17
Speaker
Ask questions. Try to figure it out. Find other people in your area who might be interested in this. This is civics. This is the job of a civilian: to build a greater civilization. You know, civilians. Our job is to build a greater civilization for ourselves and for everyone else. And this is how we do it. This is the process
01:58:36
Speaker
of how we do it. And so I just ask anyone listening to this: join me, do the small things. I think that's valuable advice. Connor, thanks for chatting with me. It's been great. Thanks. It's been a real pleasure.