
Special: Flo Crivello on AI as a New Form of Life

Future of Life Institute Podcast
On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risk regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years.

Timestamps:
00:00 Technological progress
07:59 Regulatory capture and AI
11:53 AI as a new form of life
15:44 Can AI development be paused?
20:12 Biden's executive order on AI
22:54 How would a GPU kill switch work?
27:00 Regulating models or applications?
32:13 AGI in 2-8 years
42:00 China and US collaboration on AI
Transcript

Introduction to the Podcast and Guests

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker. This is a special episode of the podcast featuring Nathan Labenz interviewing Flo Crivello. Flo is an AI entrepreneur and the founder of Lindy AI. Nathan is the co-host of the Cognitive Revolution podcast, which I recommend for staying up to date on AI. Here are Flo and Nathan.

Challenges in Keeping Up with AI Developments

00:00:26
Speaker
Man, you know, it's a tough time for somebody that tries to keep up with everything going on in AI. It's gone from, you know, in 2022, I felt like I could largely keep up and, you know, it wasn't like missing whole major arcs of, you know, important stories. And now I'm like, yeah, I'm like totally let go of,
00:00:52
Speaker
like AI art generation, for example. And this policy stuff is really hard to keep up with, especially this week. Of course, it's like hitting a fever pitch all at once. But yeah, I love it. So I can't really complain at all. It's just, at some point, got to admit that I have to maybe narrow scope somehow or just let some things fall off. I'm just kind of wrestling with that a little bit.
00:01:16
Speaker
Which I think is just like a natural, yeah, I mean, you know, I hear you. I think it's just a natural part of the industry evolving. It's like, imagine talking about keeping up with computers, right? In like the 80s or something. I'm sure at some point it was possible to keep up with computers at large. And now keeping up with tech is just like, okay, dude, it's like half the GDP over there, right?

Citizenship and Immigration Policies

00:01:38
Speaker
You're doing all this in your second language, right? This is, I assume English is your second at least.
00:01:44
Speaker
I have an excuse, yes. I actually got my American citizenship just yesterday. Wow, congratulations. That's great. I know it's not an easy process, although maybe it's about to get streamlined. I haven't even read that part of the executive order yet, but I understand that there is kind of an accelerated path for AI expertise.

Accelerationism and Regulatory Critiques

00:02:05
Speaker
Have you seen what that is?
00:02:07
Speaker
No, but generally there's good stuff being done on immigration: they're relaxing a lot of requirements, they're closing a lot of loopholes, and so on. Yeah, I've been thinking of, just as kind of a general communication strategy, if nothing else,
00:02:24
Speaker
calling out the domains in which I am accelerationist, which are in fact many. I think you and I are pretty similar in this respect, where it's like: on perhaps the singular question of the day, I am not an accelerationist, but on so many other things, I very much am an accelerationist, and like,
00:02:44
Speaker
streamlining immigration would be one of those. You know, I would sooner sign up for the One Billion Americans plan than the build-the-wall plan, certainly. And right before this, I was doing an episode on autonomy and self-driving. And that's another one where I'm like, holy moly, I don't know if you have a take on this, but the recent Cruise episode, I find to be
00:03:13
Speaker
you know, kind of bringing my inner Marc Andreessen very much to the fore, where I'm like,
00:03:18
Speaker
we're gonna let one incident, A, like shut down this whole thing in California, that seems crazy enough, but then the fact that they go out and like do this whole sort of performative self, I mean, whether it's performative or not, maybe it's sincere, but do this whole self flagellation thing and shut the whole thing down nationwide, I'm like, where is our inner Travis on this, people? Somebody has to stand up for something here at some point. Totally, I agree.

AI Regulations: Past and Present Comparisons

00:03:45
Speaker
I think it's just the natural order of things, right? It's like, I don't know if you know that piece of history about when the automobile came about. There was this insane law that said you needed to have someone walking with a flag in front of the automobile at no more than like four miles an hour.
00:04:01
Speaker
It's part of the process, man. It's infuriating, I hate it, but in some way, and maybe it's cope, but I make peace with it. I'm like, it's part of the process. You can't really stop it; it's going to do its thing. It doesn't really matter anyway, because the self-driving cars are not really deploying at a very large scale. It's not a bottleneck anyway. I don't think it is. I guess I have two reactions to that. One is, if nobody fights through this moment,
00:04:29
Speaker
then there is this potential for the nuclear outcome where we just get stuck and it's like, sorry, the standards are so insane. We do have a little bit of a chicken and egg problem where if you had a perfect self-driving car, they'd let you deploy it, but you're not going to get to perfect unless you can deploy it. To me, this technology is just an incredible example of where
00:04:54
Speaker
you know, the relative risk is already pretty high. As far as I can tell, they already do seem to be as safe or marginally safer, maybe as much as an order of magnitude safer already, depending on exactly what stats you look at. And I would just hate to see us, as we're kind of close to maybe some sort of tipping point threshold, get stuck in a bad equilibrium.
00:05:18
Speaker
And then maybe get stuck and never get out of that chicken and egg thing would just be so frustrating. I drive a 2002 Trailblazer that

Positive Experiences with Tesla's Full Self-Driving

00:05:27
Speaker
I have sworn never to replace unless it's with a self-driving car. And it's becoming increasingly difficult to keep this thing going. So I'm like, how long do I have to wait?
00:05:41
Speaker
My other take on this is I think Tesla is actually really good. I borrowed a neighbor's; I don't know if you've done the FSD mode recently. My grandmother came up for a visit. It was fun. I actually took my 90-year-old grandmother on a trip back to her home, which is like a four-hour drive there, and then I did four hours back, all in one kind of big
00:06:02
Speaker
FSD experiment. I set up my laptop in the back, put a seatbelt on my laptop so it was recording me and recording us driving, so I could kind of look at the tape later. And I was like, man, this is really good. I had no doubt in my mind coming out of that experience that
00:06:22
Speaker
it's a better driver than like other people I have been in the car with, you know, for starters. So I'm like thinking through my like personal life, like, yeah, I'd rather be in the car with an FSD than this person and that person and this other person, you know, and I'd be definitely more likely to let it drive my kids than this other person. So I felt like it was really good. And then the other thing that was really striking to me was the things where it messed up.
00:06:47
Speaker
I mean, there weren't many mess ups for one thing, but like the few mess ups that we had, there were, there were a couple in an eight hour thing. It was like.
00:06:55
Speaker
If we actually had any mojo and we went around kind of cleaning up the environment, we could solve a lot of this stuff. Like there was one that my neighbor who lent me the car said, you know, you're gonna get to this intersection right there on the way to the highway and it's gonna miss the stop sign cause there's a tree in the way. And I was like, you know, for one thing, probably people miss that too. Like let's trim the trees, you know? And then there's another one where you're getting off the highway and there's a stop sign that's kind of ambiguous. Like it's meant for the people on the service road, but it,
00:07:24
Speaker
appears to be facing you as you're coming off the highway. And so the car saw that and stopped there. And that was probably the most dangerous thing that it did was, you know, stopping where people, you know, coming up the off ramp, like do not want you or expect you to be stopped there. But that's another one where you could just go like, put up a little blinder, you know, to just very easily solve that problem. And I imagine people must have that problem too. And we just have no
00:07:49
Speaker
no will when it comes to that. Again, I feel like I'm turning into Marc Andreessen the more I think about self-driving over the last few days. No, I hear you on that.

Impact of Extractive Institutions and AI Regulation

00:07:59
Speaker
Where else are you accelerationist that may not be obvious as we think about this AI safety and regulation moment that we're in?
00:08:12
Speaker
You know, honestly, pretty much everywhere, man. I'm a libertarian. I used to work at Uber, where I saw regulatory capture and I saw cartels, and I do believe at the deepest level that cartels and regulatory capture, and generally, I think it's Mancur Olson who calls them extractive institutions,
00:08:35
Speaker
who are just in the business of grabbing a little bit more of the pie for themselves rather than growing it. Even if it actually shrinks the pie, they don't care, as long as they get a bigger chunk. And I think that the world is just rotten with thousands and thousands of these institutions, whether private, or unions, or governmental, it doesn't matter. We just have so many of these cartels floating around, and it's killing everything,
00:09:01
Speaker
right? It's a tragedy. And I totally understand how folks like Marc Andreessen would react. They have built such a deep and justified hatred of, and reaction to, this nonsense that is destroying everything, that
00:09:22
Speaker
their pattern recognition immediately triggers when they see what's happening with AI. They're like, ah, it's happening again, they're doing it again. And it's like: chill, I totally get it. But this time is really different. This is really something special that's happening. Not just in the markets, not just in the economy, not just in the country: in the universe. There is a new form of life that's being built. We're in new territory and we need to be careful right now, right? And so that's where I'm coming from. It's like,
00:09:50
Speaker
I totally see that point of view, and I'm like: regulation, for sure, there are going to be cartels, for sure, we're going to screw up 90% of it, politics is going to get messy, and all sorts of interests are going to come into play. And it's all worth it, because what may very well be on the line, it sounds alarmist, but I'm sorry, we need to say the words, may be literally human extinction, right? And this is not some tin foil hat
00:10:18
Speaker
theory; there are more and more experts who are coming around and saying that. It's actually funny, if you dig it up, I'm sure you could find it: I think it was an interview of his, I want to say between 2017 and 2020. That doesn't narrow it down much, because he gives so many of those. But I think he said something like, at the time, he was actually appealing to an argument from authority.
00:10:38
Speaker
He was like, look, he was saying the same thing he's saying today: AI is good, it's just a tool. And by the way, the experts say there's nothing to worry about. So, you guys don't know anything about AI, I don't know anything about AI, they do, and they're telling us there's nothing to worry about. That argument is not true anymore. The experts are telling us there is something to worry about. And now it's just like, oh, regulatory capture. No, it's not regulatory capture. Like, OpenAI
00:11:05
Speaker
was founded on that premise from day one. So if it was regulatory capture, that's one hell of a plan. It's like, oh my God, we're going to create this industry and we're going to start regulatory capturing right now, right? That makes no sense. It was literally the premise from day one. Yeah, that's where I'm coming from.

Is AI a New Form of Life?

00:11:22
Speaker
I'm largely in the e/acc camp. I am team technology, team progress, team anti-regulation, but here, something very special and potentially very dangerous is happening.
00:11:33
Speaker
So let's go back to your use of the phrase, a new form of life. I, as you may recall, am very anti-analogy as a way to understand AI. Cause I think it's so often misleading and like, you know, I often kind of say,
00:11:50
Speaker
AI, artificial intelligence, alien intelligence. It may be sort of natural for people to hear you say a new form of life and understand that as an analogy. But do you mean it as an analogy? Or, I guess, we might start to think about whether that is actually just literally true, and what conditions would need to exist for it to be literally true. And you might think about things like, can AI systems
00:12:18
Speaker
reproduce themselves? Are they subject to the laws of evolution? But for starters, how literally do you mean it when you say that there's this new form of life in AI? I mean it pretty literally. I think if you zoom all the way out, literally from the birth of the universe, the evolution of the universe has been towards greater and greater degrees of self-organization of matter.
00:12:43
Speaker
And there's actually a case to be made that this is just a natural consequence of the second law of thermodynamics. There's this amazing book that e/acc people love to quote. Yeah, I was going to say, you're sounding very e/acc all of a sudden. It's a good point. It's called Every Life Is on Fire. And so if you look at the Big Bang: a few fractions of a second after the Big Bang, it was just subatomic particles. And then they ganged up together and formed atoms. And then the stage after that was the atoms ganged up together and formed
00:13:13
Speaker
molecules. And then, the stage after that, the molecules became bigger and bigger, because they formed into stars and exploded and caused all sorts of reactions, and so a few generations of stars later we have pretty big molecules, and pretty heavy ones. And then these molecules form into sort of proteins and RNA and forms of proto-life; there's a chain here that we don't totally understand, but there's a form of proto-life that formed, and then life. And so you can think of it like, I guess,
00:13:42
Speaker
I don't know exactly what's going on in the leap to the cell, someone may correct me on that, but then, okay, we have a cell, and then the cells started ganging up together and we have multicellular organisms. And then we have brains at some point. This is a big leap, but brains again amount to greater degrees of self-organization, and at some point we have us,
00:14:06
Speaker
which, with a little bit of hubris, perhaps, I am considering the apex of that chain for now. It just seems crazy to me that everybody is saying, one, this is totally normal. Like, oh, this is normal, we're normal. It's like, dude, this is quintillions of atoms that are organized in this weird, super coherent fashion, that are pursuing a goal in the universe. What's happening right now on Earth is already wild to begin with.
00:14:29
Speaker
Right. So people are all just assuming that this is normal, that this is just the way it is, and that this march is going to stop there. And they're like, well, maybe we're going to get slightly smarter, or maybe we're going to get augmented. And I'm like,
00:14:41
Speaker
You are such a leap compared to an atom, or compared to a bacterium, that there is no reason to expect that there wouldn't be another thing above you that is as much more complex or bigger than you as you are relative to the bacterium. There's nothing in the universe that forbids that from happening, for a being to exist that is as big as a planet or a galaxy. There's nothing forbidding that.
00:15:04
Speaker
And for the first time now, if we squint, we can sort of see how that happens. And silicon-based intelligence certainly seems to have a lot of strengths up its sleeve versus carbon-based intelligence. So no, I actually mean that pretty literally. It is sort of in line with the march of the universe, and this is the next step, perhaps. It's significant. And so I am hopeful that we can manage this transition without us being destroyed.
00:15:34
Speaker
That's what I want to happen. Does that imply an inevitability to advanced AI? I guess, you know, a lot of people out there would say, hey, let's pause it, slow the whole thing down. And then you get the response from, like, an OpenAI, where they're sort of saying, yeah, we do take these risks very seriously, and we want to do everything we can to avoid them. But
00:15:58
Speaker
We can't really pause or we don't think that would be wise because then the compute overhang is just going to grow and then things might even be more sudden and disruptive in the future. Where are you on the inevitability of this increasingly capable AI coming online? I don't think it's totally inevitable. I am generally

Pausing AI Development: Feasibility and Risks

00:16:21
Speaker
a huge believer in human agency. I think we can do pretty much anything we set our minds to. I see a contradiction, by the way, in the e/acc argument: on the one hand, it's inevitable, don't try to stop it; on the other hand, oh my God, if you do this, you're going to stop it. It's like, you've got to decide here. So, unfortunately, it's not necessarily inevitable. I am actually worried, as much as the next guy, I agree, that there is a risk that we overregulate and miss out on the upside.
00:16:49
Speaker
And the upside is significant. But if you look at the Middle Ages, we successfully, as a civilization, stopped progress. And in a lot of countries, if you look at North Korea, they did it. They successfully stopped progress. So you can stop progress. Progress is not inevitable. And arguably, that could be quite tragic. So no, I don't think it's inevitable. And I'm hopeful that, again, we can get the upside without experiencing the downside. I mean, the North Korea example is an interesting one.
00:17:18
Speaker
you know, if I was gonna kind of dig in there a little bit more, I might say.
00:17:21
Speaker
Okay, I can understand how like if things go totally off the track, then we could maybe enter into like a low or no, or even negative progress trajectory. Yeah, if there were a nuclear war, you know, then like, we may not, you know, come back from that for a long time. Or if, whatever, an asteroid hit the earth, or you know, a pandemic wiped out 99%. Like there's extreme scenarios where it's pretty intuitive for me to imagine how
00:17:49
Speaker
progress might stop or just be whatever greatly reversed or whatever. If I'm imagining kind of a continuation-ish of where we are, then it's harder for me to imagine how we don't kind of keep on this track. Because it just seems like everything is, we're in this, I would call it, I don't know if it's going to be a long-term exponential, but if not, we seem to be entering a steep part of an S-curve where
00:18:17
Speaker
Hardware is coming online by the order of magnitude and at the same time, algorithmic improvements are taking out a lot of the compute requirements and we're just seeing all these existence proofs of what's possible and all sorts of little clever things and scaffolding along the lines of some of the stuff that you're building is getting better and better. Do you think it is realistic to think we could
00:18:43
Speaker
meaningfully pause or even like stop without like a total derailment of civilization?
00:18:49
Speaker
The derailment of civilization thing: you could imagine the most extreme scenario, which I am not proposing, but you could imagine the most extreme scenario, which is no more Moore's Law, right? You do not exponentially improve your semiconductors anymore. That'd be crazy, right? But that wouldn't derail civilization, right? Civilization is not predicated upon Moore's Law. We would do just fine with the chips we've got today. And if anything, I think we have a lot of overhang from the chips we have today. Huge, huge, huge overhang, right?
00:19:17
Speaker
So I actually think it is possible to do that if we wanted to, and I don't think that even this, which I think is the most extreme scenario, would actually derail civilization. Well, we are actually lucky in that there are a few choke points in the industry, actually more than a few. There is ASML, there's TSMC, there's NVIDIA; all three of those are individually choke points.
00:19:43
Speaker
A regulator could, at any point, grab one of them and be like, no more, you just stop, right? Or: you add this chip to all of your GPUs moving forward, so we have a kill switch. At the very least, we have that. So if shit really hits the fan, we have an automatic thing in place that shuts down every GPU. Now that would be disruptive, but potentially less disruptive than a rogue ASI. So I actually think it is very much possible, these things are on the table, and I don't think they would be all that disruptive.
00:20:13
Speaker
Maybe that's a good transition to where we are right now. We just had this executive order put out this week. I think everybody's still absorbing the 100 plus pages and trying to figure out exactly what it means. What's your high-level reaction to it? Then I'll get into some of the specifics.

Analysis of AI Executive Orders

00:20:31
Speaker
First of all, it's an executive order for now. It is not law. It's very early.
00:20:37
Speaker
Overall, I am pleasantly surprised, not by the specifics, but by the fact that we're reacting quickly, and by the fact that the measures that are proposed are not insane.
00:20:52
Speaker
Like, I was afraid because there's a really good case to be made that's like, look, we have a gerontocracy in place: a bunch of 70-, 80-year-olds running things who don't know anything; when they were born, there was no mobile phone, right? Can't really blame them for not really understanding anything. And so I was afraid that the regulation would go something like: if you install Microsoft Office in your AI, then you have to make a report. But no, the regulation actually sort of makes sense. It's talking about FLOPs, it's talking about the magnitude of training runs.
00:21:18
Speaker
So I think it's a step in the right direction. I'm actually happy about what's happening with this executive order. Now, the specifics: look, the problem is that it's almost impossible to regulate AI in a way that doesn't have any loopholes.
00:21:34
Speaker
So they are regulating it according to the number of FLOPs, and that's okay, but then at the end of the day you get stuck on: okay, what happens when you have algorithmic improvements? What happens when you do RL instead of fine-tuning? There are just a lot of different loopholes that researchers are going to find. And so I think overall it's an encouraging first step. It's funny, there have been proposals around even a FLOP threshold that would drop progressively over time, in anticipation of the algorithmic improvements. That's
00:22:03
Speaker
probably an even more challenging one to put out into the world, especially given people are not in general great at extrapolating technology trends, or don't want to accept regulation in advance of stuff actually being invented.
00:22:20
Speaker
So we've got this flop threshold thing where basically, as I understand it so far, it's like if you're going to do something this big, you have to tell the government that you're going to do it and you have to bring your test results to the government. I would agree with them. That seems like a pretty good start. And also the threshold seems like pretty reasonably chosen at 10 to the 26.
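For a rough feel for where that threshold sits, here is a back-of-the-envelope sketch (not from the conversation) using the common approximation that dense-transformer training compute is roughly 6 x parameters x training tokens; the model sizes and token counts below are illustrative guesses, not actual lab figures.

```python
# Back-of-the-envelope check against the executive order's 10^26 FLOP reporting threshold.
# Assumes the common ~6 * parameters * tokens approximation for dense-transformer training
# compute; the configurations below are illustrative only.

THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Rough total training compute for a dense transformer."""
    return 6.0 * params * tokens

for name, params, tokens in [
    ("~70B params, 2T tokens", 70e9, 2e12),
    ("~1T params, 20T tokens", 1e12, 20e12),
]:
    flops = training_flops(params, tokens)
    status = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{name}: {flops:.2e} FLOPs -> {status} the 1e26 threshold")
```

Under these assumptions, a roughly GPT-4-class run sits below the line, while a substantially larger run crosses it, which is the intent of the reporting trigger.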
00:22:43
Speaker
Any refinements on that, or quibbles that you would put forward, that you think maybe the next evolution of this should take into account? I think we're tiptoeing around the issue, but ultimately, we need to come to an actual technical, blanket solution. We will not solve ASI alignment by asking for reports from AI companies.
00:23:12
Speaker
That's not how it's going to happen. So again, I think it's a step in the right direction. I'm happy we're taking action.

Managing AI Risks: 'Kill Switch' Proposals

00:23:17
Speaker
I'm happy the action is not totally nonsensical. But at the end of the day, we're going to have to talk about the kill switch. The proposal I just made is one that I see talked about more and more, and that's the one that I would feel best about: you've got to put this chip into your H100s, and the government, or some centralized entity, can shut down all GPUs all at once.
00:23:41
Speaker
And by the way, it wouldn't necessarily shut down every computer, because your laptop doesn't have an H100, your phone doesn't have an H100, and that's fine. Over the long term, Moore's Law makes it so that your laptop and your phone actually end up with an H100's worth of compute, but at least that buys us a few years to make progress on AI safety and alignment. Ideally, we would then automate it, just like reportedly the Russians did during the Cold War.
00:24:04
Speaker
We would automate it. We would set up some detection systems, God knows how we would do that, that say: hey, there's an ASI going rogue, the world is really changing rapidly. Assuming it's not too late, which it may be, because at that point, God knows. Basically, that would give us the best weapon against the ASI. We would have a gun to the ASI's head: boom, kill all the GPUs, you cannot operate anymore.
00:24:30
Speaker
God knows how effective that would be, because at that point all bets are off if you have an ASI that knows what it's doing and how to protect itself. But that would be what I would feel best about. Do you have any sense for how that would be implemented technically? It seems like you would almost want it to be something that you could kind of broadcast. You know, you almost want like
00:24:51
Speaker
a receiver on the chip that would react to a particular broadcast signal, because you would not want to have an elaborate chain of command, or to rely on the dude who happens to be on the night shift at the individual data centers to go through and pull some lever, right? So do you know of anybody who's done kind of advanced thinking on that? You hear a lot of these kill switch proposals, but in terms of how that actually
00:25:17
Speaker
happens so that it's not dependent on a lot of people coming through in a key moment.

Focus of AI Regulation: Development vs. Application

00:25:23
Speaker
I haven't heard too much, to be honest.
00:25:25
Speaker
No, I haven't seen too much research done on that. But you know, there's nothing in principle that makes the technical challenge unsolvable. We already have a chip that can be broadcast to from space for like a dollar: the GPS chip. A lot of chips have one, and you have one on your phone. And so why not, maybe we could literally piggyback on the GPS protocol, I don't know. But why not put a chip like that in every GPU?
00:25:52
Speaker
Again, if you have an ASI, God knows, maybe it hacks the chips before you get a chance, it hacks the satellites, the broadcast, I have no idea. But again, pointing in this direction is where I would like things to go.
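To make the broadcast kill-switch idea concrete, here is a purely hypothetical toy sketch. The shared key, message format, and function names are invented for illustration, and a real mechanism would have to live in GPU hardware or firmware with proper key management, not in application-level Python.

```python
# Purely hypothetical sketch of the broadcast kill-switch idea discussed above.
# The key, message format, and function names are invented for illustration;
# a real scheme would be implemented in GPU firmware/hardware, not like this.
import hashlib
import hmac
import time

REGULATOR_KEY = b"shared-secret-provisioned-at-fab"  # placeholder, not a real key scheme

def verify_shutdown_broadcast(message: bytes, signature: bytes) -> bool:
    """Accept a shutdown order only if it carries a valid MAC from the regulator."""
    expected = hmac.new(REGULATOR_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

def on_broadcast_received(message: bytes, signature: bytes) -> None:
    """Called whenever the on-chip receiver picks up a broadcast frame."""
    if verify_shutdown_broadcast(message, signature):
        print(f"{time.ctime()}: authenticated shutdown order received, halting GPU")
        # firmware would disable the chip here
    else:
        print("Ignoring unauthenticated broadcast")

# Usage example with a forged and a valid signature.
msg = b"SHUTDOWN all H100-class devices"
good_sig = hmac.new(REGULATOR_KEY, msg, hashlib.sha256).digest()
on_broadcast_received(msg, good_sig)      # accepted
on_broadcast_received(msg, b"not-a-sig")  # rejected
```

The point of the sketch is only the shape of the mechanism: a cheap receiver, an authenticated one-way message, and no reliance on a human chain of command at each data center.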
00:26:09
Speaker
I think basically the most extreme version of this proposal is the Yudkowsky airstrike proposal: you cannot accumulate billions and billions of dollars' worth of H100s and build this thing, or else we will go as far as airstriking you. That's the most extreme version of this, but I actually think it is directionally correct.
00:26:32
Speaker
This is going to be the most powerful force in human history, maybe even in the universe. You cannot accumulate that stuff any more than you can accumulate enriched plutonium. We've got to forbid that at the lowest level possible. And that level cannot be the application layer, because the application layer is just too diffuse. There are like 1,000 startups everywhere; any kid in their garage can build one. It's got to be at a choke point. And the choke point today is the silicon.
00:27:00
Speaker
Yeah, let's unpack that a little bit more, because I think that has been an interesting debate recently. You'll hear this kind of call for: let's not regulate model development, let's regulate applications. And then we can have medical regulation for medical applications, and everything can be more appropriate and fit for purpose. And, you know, maybe there's something to be said for that.
00:27:27
Speaker
But yeah, I mean, if you're really worried about tail risk, it's probably not going to be medical-device-style regulation of diagnostic models or whatever that is going to keep things under control.
00:27:45
Speaker
Maybe you could even do a better job of steel-manning the case for application-level regulation, but I guess, why do you think that's not viable? Give your account of that in a little bit more detail. Yeah, I think the steel man here is like,

Power and Risk of AI Technologies

00:28:00
Speaker
Look, people are going to use forks to poke each other in the eye. That's not a reason to forbid the fork. Forks are awesome. We love forks; just forbid people from poking each other in the eye with them. Right? The problem is that as the fork in this analogy becomes more and more powerful, the argument loses more and more of its force, because ultimately it's just a risk-benefit analysis, right? And so the risk becomes greater and greater as the artifact becomes more and more powerful. So, more powerful than a fork: an AR-15.
00:28:30
Speaker
And so, you know, opinions vary about that. But look, at this point, if you look at the data, you actually save lives by heavily regulating the sale of AR-15s. You can't just be like, oh, sell them to everyone and just forbid people from shooting each other with them. It's an AR-15; what do you expect people to do with it, right? Now, in the most extreme scenario: enriched uranium.
00:28:51
Speaker
You can't be like: you can buy all the enriched uranium you want, you don't even need to fill out a form. Which, by the way, is all the executive order says right now: at least fill out a form, can you please at least tell us what you're doing? So, hey, you can build all the enriched uranium you want, just don't bomb us with it, please, we'll just write down that you're not supposed to. Oh, no, that's not how it works. So that is why I think it's important to regulate the silicon layer.
00:29:17
Speaker
Do you have an intuition for how likely things are to get crazy at either various timescales or potentially various compute thresholds? I was realizing, I did an episode with Jaan Tallinn a couple months back, just in the wake of the GPT-4 deployment. And he said, we dodged a bullet with GPT-4, or something like that. Like, in his mind, we didn't know if, you know, even at the GPT-4 scale,
00:29:46
Speaker
that might've already been it; there was no real principled reason to believe with super high confidence that the GPT-4 scale was not going to cross some critical threshold or whatever.

GPT-4's Evolution into AGI

00:29:59
Speaker
I guess I don't really have a great sense for this. I just kind of feel like, and this was purely like gut level intuition that yeah, we could probably do like GPT-5 and it'll probably be fine. And then kind of beyond that, I'm like, I have no idea. Do you have anything more,
00:30:16
Speaker
specific that you are working with in terms of a framework of how when you hear, for example, Mustafa from Inflection say, oh yeah, we're definitely going to train orders of magnitude bigger than GPT-4 over the next couple of years. Are you like, well, as long as you stay to two to three orders of magnitude more, we'll be okay.
00:30:35
Speaker
I just have no, you know, we're just flying so blind, but I wonder if maybe you're flying slightly less blind than I am. I am of the opinion that GPT-4 is the most critical component for AGI, and that the gap from GPT-4 to proper AGI is not research, it's engineering. It sits outside the model. So I think we have a capabilities overhang here that can turn GPT-4, as it is today, into AGI, into proper AGI.
00:31:04
Speaker
I think generally that's the case for any technology. If you look, for example, at Bitcoin, what changed from a technological standpoint that allowed Bitcoin to happen? It was the same technology we'd had for a while, and yet Bitcoin took a while to happen. So there was this overhang, and Bitcoin, whatever your opinion about crypto, changed a lot of games. I think there's this huge overhang with GPT-4. I think we basically have the reasoning module of AGI.
00:31:34
Speaker
I don't know if you saw, people found that literally just asking it, hey, take a deep breath and take a step back, apparently makes a huge difference. So I think there are a lot of tricks like that that will make a difference. And also the sort of cognitive architecture layers around it,
00:31:48
Speaker
I think can bring it to AGI. That is also why, you asked me about what sort of regulation I wish was put into place: we need to stop open sourcing these models. We don't know what kind of overhang exists out there. I don't think Llama 2 is there, but like I said, I think GPT-4 is there. So Llama 3, if it's GPT-4 level, boom, it's too late; the weights are out there, and maybe you can put scaffolding on it. So we need to stop open sourcing these things. My timeline for proper AGI to emerge is
00:32:19
Speaker
two to eight years; I think there's a more than even chance of AGI emerging in two to eight years. I think the base scenario is that things are going to go well, just for the record. I don't think there's like a 99% chance of doom, but even if it's 10%, I think it's worth being very, very worried about. That's enough for me, 10% of all of us dying; let's talk about it, please. So: two to eight years, a more than even chance of AGI, and things probably will go well.
00:32:46
Speaker
Except for, you know, civilizational destruction, that kind of stuff, that kind of crazy shit happening, but two to eight years. And after that, all bets are off. I have no idea what the bootstrapping to ASI looks like, but I don't expect ASI to take more than 30 years. So I expect that you and I, in our lifetimes, are going to see ASI. So that's a pretty striking claim. I think it probably puts you in a pretty small minority. And I don't think I'm really there with you when you say that you think GPT-4
00:33:16
Speaker
It kind of already contains the, you know, the kind of necessary core element for an AGI. So I'd like to understand that a little bit better. I mean, you'll have a lot of people who will say, you know, look, it can't play tic-tac-toe.
00:33:32
Speaker
I think on some level those kind of, oh, look at these simple failure objections are kind of lame and sort of miss the point, because of all the things it obviously can do. But I do, you know, if I'm thinking, does this system seem like it has a sufficiently well-developed world model, or, you know, I'm not even sure exactly how you're conceiving of the core thing, but
00:33:56
Speaker
for a question like that, I would say those failures maybe are kind of illuminating. On the other hand, I'm sure you've seen this Eureka paper out of Nvidia recently where they use GPT-4 as a superhuman reward model author to teach robot hands to do stuff. And I thought that one was pretty striking because as far as I know, and I actually use the term Eureka moment, I've many times said,
00:34:22
Speaker
We don't see yet Eureka moments coming from...
00:34:29
Speaker
highly general systems. We see eureka moments from like an alpha go, but we haven't really seen like eureka moments from a GPT-4 until maybe this, this seems like maybe one of the first things where it's like, wow, GPT-4 at a task that requires a lot of expertise, that is designing reward functions for robot reinforcement learning, GPT-4 is meaningfully outperforming human experts. And so I think it's very appropriate that they call it eureka.
00:34:57
Speaker
What do you think is the core thing? Is it this ability to have Eureka moments? Is it something else? Why do you feel like it's there and does it not trouble you that it can't play tic-tac-toe?
00:35:07
Speaker
For the sake of this conversation, I'm going to define AGI as a seed AI: an AI that can recursively self-improve. That's a much more narrow definition of AGI than most people use, but that's actually what I care about. Can we enter this recursive loop of self-improvement that bootstraps us to ASI? In order to get there, you don't need to play tic-tac-toe. You need to be a good enough, and the word good enough here is important, a good enough either software engineer,
00:35:36
Speaker
or chip designer or AI and ML researcher. One of these things, something that can get you to bootstrap. And so good enough does not mean better than the best human. It doesn't even mean better than the average human. It just means good enough that you can make a difference, a positive difference, in your own ability to get better.
00:35:58
Speaker
So if you can enter that recursive loop of self-improvement, then mathematically it's over. And yeah, when I see the NVIDIA paper, I see that. When I see our own experience with the model, so today we are using Lindy to write her own integrations, and Lindy is writing more and more of her own code, I see that. Even as it pertains to AI researchers and ML researchers,
00:36:21
Speaker
my hypothesis is that OpenAI is using GPT-4 more and more internally to perform AI research. What's not a hypothesis, what's a fact, is that NVIDIA is releasing papers saying, well, not only can we use it for AI research through this Eureka paper, but we can also use it for chip design. It works super well; we trained an AI model that does chip design super well. So we are starting to see the glimpses of that kind of recursive loop of self-improvement.
00:36:46
Speaker
Basically, the world model question I kind of want to sidestep, because I feel like at this point the debate about whether it does or doesn't have a world model becomes silly. What matters is: is it good enough? And so even if it just overfits
00:37:02
Speaker
its training set, even if it's just predicting the next token and not actually understanding anything, and I actually really do believe it understands a lot, but even if it's not: you can imagine there's this many-dimensional space with a ton of data points in there, and it's good at interpolating between the data points, and it needs many more data points than a human to understand anything.
00:37:21
Speaker
And so there's that envelope in that space where the data points are dense enough that it can perform. That's called the convex hull. And then there are data points outside that convex hull, and it does really poorly outside the convex hull, much more poorly than humans. Its convex hull requires a lot more density than a human's to exist.
00:37:39
Speaker
There are multiple questions. One: all the data points inside. The convex hull is the sum of all human knowledge. GPT-4 today knows more than you. I don't know that it can reason better than you, that's the expanding-the-convex-hull thing, but it knows more than you inside that convex hull. Inside that convex hull, an AI researcher that's read every paper ever, not just in AI but in math, biology, every paper ever, and the entirety of the internet: is it better than a human?
00:38:05
Speaker
Better than a human AI researcher? I think the answer is yes. Two: even if it's not better, what about outside that convex hull? And this is my point about the capabilities overhang. Can we get this AI model, through prompting, through cognitive architecture, to do better outside this convex hull? And we're seeing that all the time. We're seeing papers come out about, hey,
00:38:28
Speaker
we have found an automatic way to rewrite a prompt that makes it a lot better. There was a paper that came out a few days ago that's like: hey, if you ask the model to take a step back and rephrase the problem you're giving it in terms of a more universal problem, it performs a lot better.
00:38:45
Speaker
And that makes total sense, because the specifics of the problem are probably not seen as that specific problem in its data set. But if you ask it to reframe it, it's basically translating the problem into a form in which it's comfortable. And so we're actually getting it to grow its convex hull like that. That's my take: I think the convex hull is good enough to get to that good enough point, and I think we can grow that convex hull. And so I think that basically, if GPT-4 is not a seed AI, for sure GPT-5 is.
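As a toy illustration of this convex hull framing (an editorial aside, not something from the conversation), the sketch below builds a hull over random 2-D "training" points and checks whether a query point falls inside it, i.e., whether answering it would be interpolation or extrapolation. The data is synthetic and purely illustrative.

```python
# Toy illustration of the "convex hull of the training data" framing:
# a model interpolates well among points it has seen densely, and degrades outside them.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
train_points = rng.uniform(0, 1, size=(200, 2))  # stand-in for densely covered knowledge
hull = Delaunay(train_points)                    # triangulation used for point-in-hull tests

def inside_hull(x: np.ndarray) -> bool:
    """True if x falls inside the convex hull of the training points (interpolation regime)."""
    return bool(hull.find_simplex(x) >= 0)

print(inside_hull(np.array([0.5, 0.5])))  # well inside  -> True (interpolation)
print(inside_hull(np.array([2.0, 2.0])))  # far outside -> False (extrapolation)
```

The prompting tricks discussed above can then be read as ways of mapping an out-of-hull question back into a region the model covers densely.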
00:39:15
Speaker
Yeah, it's an interesting framing. I find your analysis there pretty compelling. The idea that, you know, given what we have seen from like a Eureka, you know, with this robot training, or there was another interesting one recently, I think it was out of Microsoft. I covered this in one of the research rundown episodes on recursive or iterative improvement on
00:39:39
Speaker
on a software improver. So basically take a real simple software improver, you know, that can improve a piece of software. And then they feed that software improver to itself and just run that on itself over and over again. And, you know, it kind of tops out because it's not, it doesn't, you know, in this framework, it doesn't have access to like tinkering with, you know, possible methods for training itself, but it
00:40:03
Speaker
makes significant improvement and gets us some pretty advanced algorithms where it starts to do like genetic search and, you know, a variety of things where I'm like, I don't even really know what that is, you know, like simulated annealing algorithm. Like what, you know, but it comes up with that and, you know, uses that to improve the improver. And, you know, this is all measured by how effectively it can do the downstream task. It does seem like it's not a huge stretch to say,
00:40:29
Speaker
that could you take the architecture of GPT-4 and start to do parameter sweeps and start to mutate the architecture itself?
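A highly simplified sketch of the improver-improving-the-improver loop described above follows. The "improver" here is just a numeric search width tuning a toy optimizer, not the language-model-driven system from the paper Nathan mentions, so it only illustrates the recursive structure; all numbers and names are invented for illustration.

```python
# Highly simplified sketch of the "improver improving the improver" idea described above.
# An 'improver' is just a search width w: applying it to a value tries a few perturbations
# and keeps the best. Feeding the improver to itself means tuning w by how well the
# resulting improver tunes a downstream step size. Toy numbers only.

def downstream_score(step: float) -> float:
    """Stand-in task: minimize x^2 with fixed-step gradient descent from x = 10."""
    x = 10.0
    for _ in range(50):
        x -= step * 2 * x
    return -abs(x)  # higher is better

def apply_improver(width: float, value: float, score) -> float:
    """Use an improver of the given search width to improve `value` under `score`."""
    candidates = [value * (1 - width), value, value * (1 + width)]
    return max(candidates, key=score)

def improver_quality(width: float) -> float:
    """How good is this improver? Measure the downstream result it produces."""
    step = apply_improver(width, 0.01, downstream_score)
    return downstream_score(step)

width = 0.1
for round_ in range(5):
    # feed the improver to itself: use the current width to search for a better width
    width = apply_improver(width, width, improver_quality)
    print(f"round {round_}: width={width:.3f}, quality={improver_quality(width):.4f}")
```

Each round, the improver's own parameter is improved by the improver, and the downstream score climbs; the real systems discussed operate on code and model scaffolding rather than a single number, but the loop has the same shape.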
00:40:42
Speaker
It seems like it probably can do that. And I would agree, it probably does; certainly, just based on what I do with GPT-4 for coding, I would have to imagine that it is in heavy use as they perform all that kind of exploratory work within OpenAI.
00:41:00
Speaker
I think we are seeing enough signs of life across the board, in a lot of different areas and different institutions, a little bit here, a little bit there. It's not very hard to imagine it getting to escape velocity, to imagine it going supercritical and passing some threshold where it's like, okay, now, boom, it really takes off.
00:41:20
Speaker
So, and I've actually heard multiple people from OpenAI say that they believe, and I agree with that conclusion. And they actually told me that before I agreed with them, they told me that at the very beginning of the year, so before GPT-4 was widely available.
00:41:35
Speaker
And they told me, you know, I think we have AGI waiting to take off. And that's, like, crazy. Well, they didn't say they had achieved it internally; they basically were talking about GPT-4. Now, I am not representing that this is the universal position of OpenAI, but I've heard multiple people from OpenAI and other labs tell me that we have AGI waiting to take off.
00:42:02
Speaker
Given that, OK, we've got this compute threshold. We maybe need a kill switch.

International Cooperation on AI Safety

00:42:08
Speaker
Now, we started this conversation with my e/acc side coming out and being like, why can't we get my self-driving car on the road and tolerate some reasonable amount of risk to do that?
00:42:24
Speaker
Now my other side is coming out and I'm like, okay, what else might we do, right? We've got the AI safety summit going on right now in the UK. I thought it was cool to see today that there's some kind of joint statements between Chinese and Western academics and thought leaders in the space where they're kind of saying, yeah, we need to work together on this, like human extinction is
00:42:44
Speaker
is something that we think could happen if we're not careful. Do you have a point of view on kind of collaborating with China or coordinating with China? I mean, that's a tough question, obviously. Nobody really knows China, I don't think, super well. But what do you think about that? I mean, are we naive to hope? I guess I kind of feel like what else are we going to do except give it a shot?
00:43:07
Speaker
Yeah, 100%. And there is ample precedent. You know, everybody is always talking about these coordination problems. They've taken the 101 course of game theory and are like, look, we can't coordinate. Well, if you take game theory 102, it's all about solutions to the coordination problem, right? And the solution to the coordination problem is few players in a very iterated game, as the toy example below illustrates.
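As a small illustration of that point about iterated games (a textbook example, not something from the conversation), the sketch below plays a repeated prisoner's dilemma with standard payoffs and shows that a reciprocal strategy like tit-for-tat sustains cooperation while leaving a defector little to gain.

```python
# Toy illustration of the iterated-game point above: in a one-shot prisoner's dilemma
# defection dominates, but over many rounds with few players, reciprocity (tit-for-tat)
# sustains cooperation. Payoffs are the standard textbook values.

PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):   # copy the opponent's last move, start by cooperating
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("TFT vs TFT:      ", play(tit_for_tat, tit_for_tat))      # (300, 300): cooperation pays
print("Defect vs Defect:", play(always_defect, always_defect))  # (100, 100): mutual defection is worse
print("TFT vs Defect:   ", play(tit_for_tat, always_defect))    # (99, 104): the defector gains little
```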
00:43:29
Speaker
And that is the game right now. There are very few players, and they're all in a very iterated game. They're not the best of buddies, but they are actually able to agree on a lot of things. And so we can coordinate with China. And again, to your point, what choice do we have anyway, right? And even if we do not coordinate with them, again, there are a few choke points, enough of which are American.
00:43:50
Speaker
NVIDIA is an American company last time I checked, and so we could actually very much not give them a choice: hey, the GPUs now have this chip right here, and whether they like it or not, we have a satellite and we can tell the GPUs to shut down.
00:44:09
Speaker
We could even just downright forbid GPUs, by the way, from being sold in China. We've done stuff like that before. So no, I think coordination is definitely possible, and I actually think it's going to happen. I'm actually very much encouraged by how things are going. I think the safety side is making really good progress. There is rising public awareness; I think Geoff Hinton is doing amazing work here.
00:44:32
Speaker
The regulation is coming, and it's mostly sensical. There's this sort of progress that's happening across the board. AI labs are investing more and more in safety and alignment; even from a technical standpoint, the work that Anthropic is doing, I think, is absolutely brilliant. So we're making really good progress across the board here. I don't want to represent that it's all good. Yeah, I totally agree. I would say my kind of high-level narrative on this recently has been, it feels like we're at the beginning of chapter two of the overall
00:45:01
Speaker
story, and chapter one was largely characterized by a lot of speculation about what might happen. Amazingly, at the end of chapter one and the beginning of chapter two, a large share of the key players seem to be really serious-minded and, you know, well aware of the risks. And it's easy for me to imagine a very different scenario where
00:45:26
Speaker
Everybody, you know, all the leading developers are like highly dismissive of the potential problems. But it's hard for me to imagine a scenario that would be like all that much better than, you know, the current dynamic. So I do feel, you know, like overall, you know, pretty, pretty lucky or pretty grateful that, you know, things are shaping up at least, you know, to give us a good chance to try to get a handle on all this sort of stuff.

Does AI Have Subjective Experiences?

00:45:52
Speaker
One last question. This is super philosophical. I know you got to go.
00:45:56
Speaker
How much depends, in your mind, on whether or not silicon-based intelligence or AI systems might become, or maybe already are, or I'm not sure how we would ever tell, the kinds of things that have subjective experience? Does it matter to you if it feels like something to be GPT-4? Have you heard of the word mu?
00:46:21
Speaker
I think it's in Zen philosophy, in Buddhism, there's this story where someone asks someone else: hey, does a dog have the essence of a Buddha? If the Buddha is everywhere and in every being, does a dog have the essence of a Buddha? And the answer to that is mu. And mu means neither yes nor no. It's a way to unask the question. It's a way to reject the premise of the question.
00:46:47
Speaker
And basically, in this sense, it means that there is no such thing as the essence of the Buddha, right? It's like the same question as: hey, what happened before the universe existed? Mu! There was no before, because the birth of the universe was the birth of time. So the word before only makes sense in the context of the universe. And so anyway, that's where I land. Whenever someone asks me a question about subjective experience and consciousness, I'm like: mu!
00:47:12
Speaker
It doesn't exist. It doesn't matter. It's immeasurable. It's not a scientific thing. And so, mu. All righty. Well, some questions are bound to remain unanswered, and I appreciate your time today. This is always super lively. Next time, I want to get the Lindy update, and at some point I want to get access. But for now, I'll just say: Flo Crivello, thank you for being part of the Cognitive Revolution. Thanks, Nathan.