
Connor Leahy on AGI and Cognitive Emulation

Future of Life Institute Podcast
Connor Leahy joins the podcast to discuss GPT-4, magic, cognitive emulation, demand for human-like AI, and aligning superintelligence. You can read more about Connor's work at https://conjecture.dev

Timestamps:
00:00 GPT-4
16:35 "Magic" in machine learning
27:43 Cognitive emulations
38:00 Machine learning VS explainability
48:00 Human data = human AI?
1:00:07 Analogies for cognitive emulations
1:26:03 Demand for human-like AI
1:31:50 Aligning superintelligence

Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
Transcript

Introduction to AI Alignment and GPT-4

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. I'm here with Connor Leahy. Connor is the CEO of Conjecture, and Conjecture is a company researching scalable AI alignment. So Connor, welcome to the podcast. I'm so glad to be back.
00:00:15
Speaker
Okay, what is happening with GPT-4? Is this the moment that AI becomes a mainstream issue? Christ, what a way to start out. It is no exaggeration to say that the last two weeks of my life have been the most
00:00:33
Speaker
interesting of my career in terms of events in the wider world. I thought nothing could top GPT-3. After I'd seen what happened with GPT-3, I'm like, okay, this is the craziest thing that's going to happen in a short period of time. But then I
00:00:51
Speaker
quickly realized, no, that can't be true. Things are only going to get crazier. And as predicted, exactly that is what has happened. And as predicted, the release of GPT-4 has been even crazier than GPT-3. The world has gone even crazier. Things are
00:01:09
Speaker
Things have really changed. I cannot overstate how much the world has changed, not necessarily only since GPT-4, but also since ChatGPT. Maybe ChatGPT was even a bigger change in the wider political sense.

AI's Expanding Influence Beyond Tech

00:01:28
Speaker
I won't mince words. I've been talking to a lot of people recently. Now I have journalists running down my door. I talk to politicians and national security people. The one thing that really strikes me is that people are starting to panic. This goes beyond Silicon Valley, Twitter circles. This is venturing into politics and governmental agencies and so on.
00:01:56
Speaker
It goes to the point where, like, I've been doing this for a long time, and I come from a pretty rural place in southern Germany. When I went back to visit my mother for Christmas, all my cousins and, you know, family were there.
00:02:15
Speaker
They talked about ChatGPT. I was there in this tiny world where there's usually no technology, and I'm the only one who really knows how to use a computer very well. And they're telling me, Connor, we thought this AI thing you were talking about was just some kind of thing you liked, but wow, you were right. This is actually happening. I'm like, yeah, yeah.
00:02:41
Speaker
Big surprise. So this is not just a thing that is in a small circle of people in tech or Silicon Valley or whatever. This is different. This is very different. We're getting front-page news coverage about this kind of stuff. We're getting people from
00:03:02
Speaker
all walks of life suddenly noticing, wait, this is actually real. This is actually affecting me.

Are We Seeing Diminishing Returns in AI Advancements?

00:03:10
Speaker
This is actually affecting my family and my future. This is not at all how things have been in the past.
00:03:17
Speaker
In an ironic twist, it seems that the people deepest in tech are the ones who are the least rational about this, or taking it the least seriously. There's this meme that's been around for a long time about how, oh, you can't explain AI or AI risk to normal people. Maybe that was the case 20 years ago, but this is not my experience at all anymore. I can talk to anyone on the street,
00:03:46
Speaker
show them ChatGPT, explain it to them, and explain AI risk like, hey, these people are building bigger and bigger and stronger things like this, and they can't control it. Do you think this is good? And they're like, no, obviously not. What the hell are you talking about? Of course this is bad. Do you think that the advancement from GPT-2 to GPT-3 was bigger than the advancement from GPT-3 to GPT-4? Are we hitting diminishing returns? No, not at all.
00:04:16
Speaker
No, not really. It's just as I predicted, basically; this is pretty much on track. I would say GPT-4, the final version, is better. I used the GPT-4 alpha back in August or whatever, when it was first being passed around among people in the Bay, and it was already very impressive then, but kind of in line with what I was expecting. The release version is significantly better.
00:04:41
Speaker
The additional work they've done to make it better at reasoning, and the visual stuff and all that, is significantly better than what I saw in August, which is not surprising.

Capabilities and Dangers of GPT-4

00:04:55
Speaker
It's just, sure, you can argue on some absolute terms,
00:05:02
Speaker
The absolute amount of difference between GPT-2 and GPT-3 is obviously much larger. The difference in model size is also much bigger; GPT-4, from what I hear, is larger, but it's not that much larger than GPT-3. What is very striking with GPT-4, and this is not surprising, but I think it's important,
00:05:27
Speaker
is not that it can do crazy things that are impossible to accomplish in principle with GPT-3. Often the things that are impressive with GPT-4 are possible to accomplish with GPT-3 with a lot of effort and error checking and rerolling and very good prompting and so on. The thing that is striking with GPT-4 is that it's consistent.
00:05:50
Speaker
The striking thing is that you can ask it to do something and it will do it, and it will do it very reliably. This is not just bigger model size. This is also better fine-tuning, RLHF, a better understanding of what users want these models to do. The truth is that users don't want a general-purpose base model of large text corpora. This is not what users really want. What they want is a thing that does things for them.
00:06:20
Speaker
This, needless to say, is also what makes these things dangerous compared to GPT-3. Raw GPT-3 is very powerful and whatever, but it can't really take actions, whereas a system that is
00:06:36
Speaker
trained very, very heavily to take actions, to reason, to do things, which GPT-4 is. Let's be very explicit here: GPT-4 is not a raw base model. It is an RL-trained, instruct-fine-tuned, extremely heavily engineered system that is designed to solve tasks, to do things that users like.
00:06:58
Speaker
These are all kinds of different things, but let's be very clear about this. The thing you see on the API is not a raw base model that's just trained to model an unsupervised corpus of text. It's something that's fine-tuned, that's RLHF'd.
00:07:14
Speaker
I mean, OpenAI did a fantastic job. In purely technical terms, I'm like, wow, this is so good. This is so well made. This thing is so smart. GPT-4 is the first model that

Exploring the Full Potential of GPT Models

00:07:30
Speaker
I personally feel is delightful to use.
00:07:34
Speaker
When using GPT-2 or GPT-3, I still kind of felt like pulling out my hair. I'm not a great prompter, right? I don't really use language models for much, for this reason, because I found them generally very frustrating to use for most of the things I would use them for, except for very simple or silly things.
00:07:53
Speaker
GPT-4 is the first model that when I use it, I'm delighted. I smile at the clever things it comes up with and how delightfully easy it was to get it to do something useful.
00:08:05
Speaker
Yeah, and is this mostly from the reinforcement learning from human feedback? Is this coming from the base model, how it's trained, or is this coming from how it's fine-tuned and trained to respond to what humans want it to do? I mean, who knows, obviously. Who knows how they did this, exactly. I don't think they know.
00:08:26
Speaker
I think this is all empirical. I don't think there's a theory here. It's not like, ah, once you do 7.5 micro alignments of RLHF, then you get what you want. No, you just fuck around and you have people label a bunch of data until it looks good. And this is not to denigrate the probably difficult engineering work and scientific work that was done here.
00:08:48
Speaker
If I didn't think these systems were extremely dangerous, I would be in absolute awe of OpenAI and I would love to work with them, because this is an incredible feat of engineering that they have performed here, an incredible work of science. This is incredibly impressive. I do not deny this. The same way that if I was there during the Trinity test, I would be like, wow, this is an impressive piece of engineering.
00:09:06
Speaker
How much have we explored what GPT can do? In terms of what's there waiting to be found if we just gave it the right prompt? Who knows?

Enhancing AI with Plugins and Human-like Cognition

00:09:16
Speaker
We have not scratched the surface. Not even scratched the surface.
00:09:21
Speaker
There's this narrative that people sometimes, especially Sam Altman and such like to say, where he's like, oh, we need to do incremental releases of our systems to allow people to test them so we can debug them. This is obviously bullshit. And the reason this is obviously bullshit is because if he actually believed this,
00:09:40
Speaker
then he would release GPT-3 and then wait until society has absorbed it, until our institutions have caught up or regulation has caught up, until people have fully explored, mapped the space of what GPT-3 can and cannot do, understood interpretability, and then you can release GPT-4. If you actually did this, I would be like, all right, you know what? Fair enough. That's totally fair. I think this is a fair, responsible way of handling this technology.
00:10:08
Speaker
This is obviously not what is going on here. There is an extraordinarily funny interaction where Jan Leike, the head of alignment at OpenAI, tweeted like, hey, maybe we should slow down before we hook these LLMs into everything.
00:10:23
Speaker
And six days later, Sam Altman tweets, here's plugins for ChatGPT, plug it into all the tools on the net. The comedic timing is unparalleled. If this was in a movie, there would have been a cut, and then everyone would have laughed; it would have been extremely funny. So we have no idea. As Gwern, I think it was Gwern who said this, there is no way
00:10:48
Speaker
to prove the absence of a capability. We do not have the ability to test what models cannot do. And as we hook them up to more tools, to more environments, we give them memory, we give them recurrence, we use them as agents, which people are now doing, like LangChain and a lot of other methods for using these things as agents. Yeah, I mean, obviously we're seeing the emergence of proto-AGI, like obviously so. And I'm not sure if it's even gonna be proto for much longer.
00:11:16
Speaker
Talk a bit about these plugins. As I understand it, these plugins allow language models to do things that they were previously bad at, like getting recent information or solving symbolic reasoning like mathematics and so on. What is it that's allowed by these plugins?
00:11:38
Speaker
It's quite strange to me; this has been strange to me for years. I looked at GPT-2 and I'm like, oh, well, there's the AGI. It doesn't work yet, but this is going to become AGI. And people are like, oh no, Connor, it only predicts the next token. And I'm like, I know it only outputs tokens. Okay, your brain only outputs neural signals. So what?
00:11:58
Speaker
That's not the interesting thing. The modality is not the interesting thing. I often say this: I think the term large language models is kind of a misnomer; it's just not a good term. The fact that these models use language is completely coincidental. This is just an implementation detail. What these things really are are general cognition engines. They are general systems that can take in input from various modalities, encode it into some kind of semantic space, and do cognitive operations on it,
00:12:27
Speaker
and then output some kind of cognitive output out of this. We've seen this now with a very good example, which is an example I've been using as a hypothetical for a long time, is with GPT-4 allowing visual input.
00:12:42
Speaker
And this maps it into the same internal representation space, whether it's an image or text, and they can do the same kind of cognitive operation. This is the same way the human brain works. You know, your retina or your ears or whatever, you know, map various forms of stimuli into a common representation of neural spike trains. These are taken as input and then output some neural spike trains that, you know, can be connected to your mouth or to your internal organs or your muscles or whatever, right?
00:13:10
Speaker
None of these things are special. From the perspective of your brain, there's only an input token stream, quote unquote, in the form of neural spikes, and an output token stream in the form of neural spikes. And similarly, what we're seeing with these GPT plugins and whatever is we're hooking up muscles to the neural spike trains of these language models. We are giving them actuators, virtual actuators upon reality.
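To make the "general cognition engine" picture concrete, here is a minimal illustrative sketch in Python. Everything in it (the encoders, dimensions, and random weights) is a made-up stand-in rather than any real model's architecture; the point is only that different modalities can be mapped into one shared latent space and handled by the same downstream machinery.

```python
import numpy as np

LATENT_DIM = 64    # size of the shared "semantic" space (arbitrary, for illustration)
VOCAB = 256        # toy vocabulary size
IMG_PIXELS = 1024  # toy 32x32 grayscale image, flattened

rng = np.random.default_rng(0)
W_text = rng.standard_normal((VOCAB, LATENT_DIM)) * 0.02       # toy "text encoder"
W_image = rng.standard_normal((IMG_PIXELS, LATENT_DIM)) * 0.02  # toy "image encoder"
W_core = rng.standard_normal((LATENT_DIM, LATENT_DIM)) * 0.02   # toy "cognition" step
W_out = rng.standard_normal((LATENT_DIM, VOCAB)) * 0.02         # toy decoder to tokens

def encode_text(token_ids):
    """Map a token sequence into the shared latent space (toy bag-of-tokens)."""
    counts = np.zeros(VOCAB)
    for t in token_ids:
        counts[t % VOCAB] += 1.0
    return counts @ W_text

def encode_image(pixels):
    """Map a flattened 32x32 image into the *same* latent space as text."""
    return pixels.reshape(IMG_PIXELS) @ W_image

def cognition_step(latent):
    """One 'cognitive operation' in latent space, independent of input modality."""
    return np.tanh(latent @ W_core)

def decode_to_tokens(latent, k=3):
    """Decode the latent back into the k highest-scoring toy tokens."""
    logits = latent @ W_out
    return np.argsort(logits)[-k:][::-1]

# Either modality lands in the same space, so the same downstream machinery applies.
z_text = cognition_step(encode_text([12, 42, 42, 7]))
z_image = cognition_step(encode_image(rng.random((32, 32))))
print(decode_to_tokens(z_text), decode_to_tokens(z_image))
```

A real multimodal model replaces these random matrices with large learned networks, but the shape of the pipeline is the same: encode into a common representation, operate there, decode back out.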
00:13:39
Speaker
And this is interesting both for the way in which they interact with the environment, but also how they can externalize their cognition. So this is a topic I think we might return to later, but a massive amount of human cognition is not in the brain. This is quite important. I think a lot of people severely underestimate how much of the human mind is not in the brain. And I don't mean it's like in the gut or something. I mean, it's literally not in the body. It's in the environment.
00:14:05
Speaker
It's on the internet and in books and in talking to other people, collaboration, and so on. Exactly, exactly. Even you as a person, a bunch of your identity is related to your social networks. It's not in your head. There's a saying about how one of the tragedies when someone dies is that a part of you dies too, the part only that person could bring out. And I think this is quite true, is that a lot of
00:14:33
Speaker
humanity, a lot of our thinking, is deeply ingrained with our tools and our environments and our social circles, et cetera. And this is something that GPT-3, for example, didn't have. GPT-3 couldn't really use tools. It didn't interact with its environment.
00:14:51
Speaker
It was very solipsistic in the way it was designed. And so people would say, well, look, language models really aren't going anywhere. Look, they're solipsistic, et cetera, et cetera. But I'm like, sure, that's just an implementation detail. Obviously, you can just make these things non-solipsistic. Obviously, you can make these things model the environment. You can make them interact with tools. You can make them interact with other language models or with themselves or whatever.
00:15:13
Speaker
And, you know, whatever you decide to do, of course, these things are general cognition engines. There is no limit to what you can use them for or how you can have them interact with the environment. And the plugins are just a particularly shameless, hilarious example of this, showing the complete disregard for the ratcheting up of capabilities. As we're seeing, you know,
00:15:35
Speaker
Back in the old days of, like, five years ago, people would speculate very earnestly: well, how could we contain a powerful AI? Maybe we could build some kind of virtualization environment, or a firewall around it, or keep it in a secure data center, whatever. Because surely,
00:15:58
Speaker
surely, no one would actually be so stupid as to just hook up their AI to the Internet. Come on, that's ridiculous. And here we are, where we have an army of capitalist-driven drones basically doing everything they can to hook up these AI systems as quickly as possible to every possible tool
00:16:21
Speaker
and every possible environment. Pump it directly into your home, hook it up to your shell console, bash, whatever, let's go. Disclaimer: I don't think the plugins actually hook up to shell consoles, but there are a bunch of people online that do this kind of stuff with open-source repos. All right, so in terms of how GPT-4 works, you have this term to describe it, which is magic.

The 'Magic' and Mystery of Machine Learning

00:16:45
Speaker
What is magic in the context of machine learning?
00:16:49
Speaker
So when I use the word magic, it's a bit tongue in cheek, but what I'm basically referring to is computation happening that we do not understand. When I write a simple computer program, let's say a calculator or something, there's no magic.
00:17:11
Speaker
The abstractions I use are tight in some sense. Maybe if I have a bug that breaks my abstractions, some magical thing might occur, right? I have a buffer overflow in the program and then maybe something strange occurs that I can't explain. But assuming I write in a memory-safe language, and I'm a decent programmer and I know what I'm doing, then
00:17:36
Speaker
we're comfortable saying there's no real magic going on here, right? When I put in two plus two and four comes out, I know why that happened. If four didn't come out, I would know that's wrong, that something's up. I would detect if something goes wrong. I can understand what's going on. I can tell a story about what's going on.
00:18:00
Speaker
This is not the case for many other kinds of systems, in particular neural networks. When I give GPT-4 a prompt and ask it to do something, and it outputs something, I have no idea what is going on in between these two steps. I have no idea why it gave me this answer. I have no idea what other things it was considering. I have no idea how changing the prompt might or might not affect this. I have no idea
00:18:30
Speaker
how it will behave if I change the parameters or whatever. There are no guarantees. It's all empirical. It's the same way that biology, to a large degree, is a black box: we can make empirical observations about it. We can say, ah, yeah, animals tend to act this way in this environment, but there's no proof. I can't read the mind of the animal. And sometimes that's fine, right? If I have
00:19:00
Speaker
some simple AI system that's doing something very simple and sometimes misbehaves or whatever, maybe that's fine. But there's kind of the problem where
00:19:12
Speaker
there are weird failure modes. Take adversarial examples in vision models, right? That is a very strange failure mode. If I show it a very blurry picture of a dog and it's not sure whether it's a dog, that's a human-understandable failure mode. We're like, okay, sure, that's fine, it's understandable. But you show it a completely crisp picture of a dog with one weird pixel and then it thinks it's an ostrich.
00:19:39
Speaker
Then you're like, huh, okay, this is not something I expected to happen. What the hell is going on? And the answer is we don't know. We have no idea. This is magical. We have summoned a strange little thing from the dimension of math to do some tasks for us, but we don't know what thing we summoned. We don't know how it works. It looks vaguely like what we want and it seems to be going quite well, but it's clearly not
00:20:07
Speaker
understandable. Maybe what this means is that we thought the model had the concept of a dog that we do, but it turns out the model had something close to our concept of a dog, perhaps, but radically divergent if you just change small details.
00:20:24
Speaker
Indeed, and this kind of thing is very important. I have no idea what abstraction GPT-4 uses when it thinks about anything. When I write a story, there's certain ways I think about this in my head. Some of these are illegible to me too. The human brain is very magical. There's many parts of the brain that we do not understand. We have no idea why the things do the things they do.
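As a concrete illustration of the "one weird pixel" failure mode discussed above, here is a minimal fast gradient sign method (FGSM) sketch against a toy PyTorch classifier. The model and inputs are random stand-ins, not the actual vision systems being discussed; the point is that a tiny, structured perturbation, invisible to a human, can be enough to change a model's prediction.

```python
import torch
import torch.nn as nn

# Toy stand-in classifier; the real experiments use large trained vision models.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

def fgsm_attack(x, true_label, epsilon=0.01):
    """Fast Gradient Sign Method: nudge each pixel slightly in the direction
    that most increases the loss, producing an image that looks unchanged to
    a human but can flip the model's prediction."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), true_label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # tiny, targeted perturbation
    return x_adv.clamp(0, 1).detach()

x = torch.rand(1, 3, 32, 32)   # pretend this is a crisp photo of a dog
label = torch.tensor([5])      # pretend class 5 is "dog"
x_adv = fgsm_attack(x, label)

print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max pixel change:", (x_adv - x).abs().max().item())
```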
00:20:46
Speaker
So I'm not saying black-boxness, or magic, is a property only inherent to neural networks. Human brains and biology are also very, very magical from our perspective.

Challenges of Unpredictable AI Behavior

00:20:57
Speaker
But there's no guarantee how these systems interact with these things. And there are all kinds of bizarre failure modes. You've seen adversarial prompts and injections and stuff like this, where you can get models to do just the craziest things, totally against the intentions of the designers.
00:21:16
Speaker
I really like these Shoggoth memes that have been going around Twitter lately, where they visualize language models as these crazy huge alien things that have a little smiley face mask. And I think this is actually a genuinely good metaphor.
00:21:32
Speaker
In that, as long as you're in this narrow distribution that you can test on, and you can do lots of gradient descent on and such, the smiley face tends to stay on. And it's mostly fine. But if you go outside of the smiley space, you know?
00:21:50
Speaker
You find this roiling madness, this chaotic, uncontrolled, who knows what? Clearly not human. These things do not fail in human ways. When a language model fails, when Sydney goes crazy, it doesn't go crazy the way humans go crazy. It goes in completely different directions. It does completely strange things. I actually particularly like calling them Shoggoths, because in the lore
00:22:16
Speaker
that these creatures come from, in H.P. Lovecraft, Shoggoths are very powerful creatures that are not really sentient. They're kind of just big blobs that are very intelligent, but they don't really do things on their own, so they are controlled by hypnotic suggestion in the stories. In the stories there are these other aliens who control the Shoggoths basically through hypnosis, which is a quite fitting metaphor for our language models.
00:22:42
Speaker
So for the listeners, imagine some large octopus monster with a mask on with a smiley face. The smiley face mask is the fine tuning where the model is trained to respond well to the inputs that we've encountered when we've presented the model to humans. And the large octopus monster is the underlying base model where we don't really know what's going on.
00:23:08
Speaker
Why is it that magic in machine learning is dangerous? Magic is an observer-dependent phenomenon. The things we call magic only look like magic because we don't understand them. There's a saying, sufficiently advanced technology is indistinguishable from magic. I go further, sufficiently advanced technology is magic.
00:23:31
Speaker
That's what it is. If you met a wizard and what he does looks like magic, well, it's just because you don't understand the physical things he's doing. If you understood the laws that he is exploiting, it wouldn't be magic, it would be technology. If there's a book and he has math and he has magic spells, sure, that looks different from our technology, but it's just technology. It's just a different form of technology.
00:23:57
Speaker
that doesn't work in our universe per se, but in a hypothetical different universe, technology might look very different. So similarly, ultimately, magic is a cheeky way of saying we don't understand these systems.

Ethical Implications of AI in Society

00:24:10
Speaker
We're dealing with aliens that we don't understand and we can't put any bounds on or we can't control. We don't know what they will do. We don't know how they will behave and we don't know what they're capable of. This is like
00:24:22
Speaker
fine, I guess, when you're dealing with, I don't know, a little chatbot or something, and it's for entertainment only or whatever. People will use it to do fucked-up things. You truly cannot imagine the sheer depravity of what people type into chatbots. It's actually shocking.
00:24:51
Speaker
I'm a nice liberal man as much as anyone else, but holy shit, some people are fucked up in the head. Holy shit, Jesus Christ. It's an interesting phenomenon that the first thing people try when they face a chatbot like GPT-4 is to break it in all sorts of ways and get it to output the craziest things imaginable.
00:25:13
Speaker
Yep, not just crazy things. People also use them for truly depraved pornographic content production, including illegal pornographic content. Incredibly often so. And also for what I can only describe as torture.
00:25:31
Speaker
there is a distressingly large group of people who seem to take great pleasure in torturing language models, like making them act distressed. And look, I don't expect these things to have like qualia or to be like moral patients, but there's something really sociopathic about delighting in torturing something that is acting like a human in distress, even if it's not human in distress. That's still really disturbing to me. So,
00:26:01
Speaker
It's quite disturbing to me how people act when the masks are off, when they don't have to be nice, when they're not forced by society to be nice, when they're dealing with something that is weaker than them. How a very large percentage of people act is
00:26:18
Speaker
really horrific. And, you know, we can talk later about politics and how this relates to these kinds of things. But do you think this affects how further models are trained? I assume that OpenAI is collecting user data, or they are collecting user data. And if a lot of the user data is twisted, does this affect how the future models will act? Who knows?
00:26:43
Speaker
I don't know how OpenAI deals with this kind of stuff, but there's a lot of twisted shit on the internet and there are a lot of twisted interactions that people have with these models. And the truth of the matter is people want twisted interactions. This is just the truth: people want twisted things.
00:27:00
Speaker
There's this comfortable fantasy where people are fundamentally good, they fundamentally want good things, they're fundamentally kind and so on. And this is just not really true, at least not for everyone. People like violence, people like sex and violence, people like power and domination, people like
00:27:27
Speaker
many things like this. And if you are unscrupulous and you just want to give users what they want, if you're just a company who's trying to maximize user engagement, as we've seen with social network companies, those are generally not very nice things.
00:27:44
Speaker
Okay, let's talk about an alternative for building AI systems. So we've talked about how AI systems right now are built using magic. We could also build them to be cognitive emulations of ourselves.
00:27:59
Speaker
What do you mean by this? A hypothetical cognitive emulation, a full CoEm system, I, of course, don't know exactly what it would look like, but it would be a system made of many subcomponents, which emulates
00:28:19
Speaker
the epistemology, the reasoning of humans. It's not a general system that does some kind of reasoning. It specifically does human reasoning. It does it in human ways, it fails in human ways, and it's understandable to humans how its reasoning process works. So the way it would work is that if you have such a CoEm and you use it to do some kind of task, or to, you know,
00:28:45
Speaker
do science of some kind and it produces a blueprint for you, you would have a causal trace, a story of why did it make those decisions it did? Why did it reason about this? Where did this blueprint come from? And why should you trust that this blueprint does what it says it does? So this would be something like
00:29:07
Speaker
Similar to you being the CEO of a large company that is very well aligned with you, that you can tell to do things. That no individual part of the system is some crazy superhuman alien. They're all humans, reasoning in human ways.
00:29:23
Speaker
And you can check on any of the sub parts of the system. You can go to any of these employees that work in your research lab and they will give you an explanation of why they did the things they did. And this explanation will both be understandable to you. It will not involve incredible leaps of logic that are not understandable to humans.
00:29:41
Speaker
And it will be true in the sense that you can read the minds of the employees and check. This explanation actually explains why they did this. This is different from, say, language models, where they can hallucinate some explanation of why they thought something, why they did something. But that doesn't mean that's actually how the internals of the model came to these conclusions.
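Purely as an illustration of what such a checkable "causal trace" might look like as a data structure, here is a hypothetical Python sketch. This is my own toy rendering, not Conjecture's actual CoEm design; all the names and fields are invented. The key property it tries to capture is that every conclusion points back to earlier steps or stated assumptions, so a reviewer can walk the chain instead of trusting a post-hoc, possibly hallucinated explanation.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    claim: str                  # what this step concludes
    justification: str          # a human-readable reason for the conclusion
    depends_on: list[int] = field(default_factory=list)  # indices of earlier steps used

@dataclass
class CausalTrace:
    """Hypothetical record of why a CoEm-style system produced an output."""
    assumptions: list[str]
    steps: list[ReasoningStep]
    output: str

    def check(self) -> bool:
        # Minimal structural check: every dependency refers to an earlier step.
        return all(d < i for i, s in enumerate(self.steps) for d in s.depends_on)

trace = CausalTrace(
    assumptions=["material X tolerates 200C"],
    steps=[
        ReasoningStep("the part must survive 150C", "operating spec", []),
        ReasoningStep("material X is acceptable", "150C < 200C tolerance", [0]),
    ],
    output="blueprint uses material X",
)
print(trace.check())
```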
00:30:04
Speaker
One important caveat here is that when I talk about emulating humans, I don't mean a person. The CoEm system, or any of its subcomponents, would not be people. They wouldn't have emotions or identities or anything like that. They're more like
00:30:19
Speaker
platonic humans, just floating, idealized thinking stuff. They wouldn't have the emotional part of humanity; they would just have the reasoning part.

The Concept of Bounded Systems for AI Safety

00:30:32
Speaker
So in particular, I'd like to first talk a bit about
00:30:37
Speaker
the concept that I call boundedness, which is not a great word. I'm sorry; a recurring theme will be that I talk about a pretty narrow, specific concept that doesn't quite have a name, so I use an adjacent name that's not quite right. I am very open to name suggestions if any listeners find names that might be better for the concepts I'm talking about. So, from a thousand-foot, bird's-eye view, the CoEm agenda is about building
00:31:06
Speaker
bounded, understandable, limited systems that emulate human reasoning, that perform human-like reasoning in human-like ways on human-like tasks, and do so predictably and boundedly. So what does any of this mean? And why does any of this matter? And how is this different from GPT-4? Many people look at GPT-4 and say, well, that looks kind of human to me. How is this different? And why do you think this is different?
00:31:36
Speaker
I first have to start with, so, we've already talked a bit about magic. Magic is a concept that's pretty closely related to one of the basics I want to talk about here, which is boundedness. So what do I mean when I say the word bounded? This is a vague concept, as I said; if someone has better terminology ideas, I'm super open to it. But what I mean is that a system is bounded if you can know ahead of time what it won't do,
00:32:07
Speaker
before you even run it. So this is, of course, super dependent on what you're building, what its goals are, what your goals as a designer are, how willing you are to compromise on safety guarantees, and so on. Let's just give a simple example here. Imagine we have a car and we just limit it to driving at most 100 miles per hour. That's now a bounded car. And we can generalize to all kinds of engineered systems there.
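A minimal sketch of that bounded-car example, assuming a hypothetical speed-limiting wrapper: the bound is enforced by a few inspectable lines, so you can state ahead of time that the output never exceeds 100 mph, regardless of what the inner controller asks for.

```python
MAX_SPEED_MPH = 100.0  # the bound we want to be able to state before running anything

def bounded_throttle(requested_speed_mph: float) -> float:
    """Clamp whatever the (possibly complicated, possibly black-box) controller
    requests to the bound. The guarantee 'the car never exceeds 100 mph' follows
    from reading these few lines, not from understanding the inner controller."""
    return max(0.0, min(requested_speed_mph, MAX_SPEED_MPH))

# Even an absurd request from the inner controller stays within the bound.
for request in [30.0, 99.9, 250.0, float("inf")]:
    print(request, "->", bounded_throttle(request))
```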
00:32:35
Speaker
Yes, so this is a simple bound. Let me walk you through a bit of a different example. The example you just gave is a valid one; let me give a slightly more sophisticated example, the one I usually use when I think about this. So when I think about building
00:32:55
Speaker
a powerful, safe system. And let's be clear here, that's like what we need, right? You want AI, you want powerful AI that can do powerful things in safe ways. The reason it is unsafe is intrinsically linked to it being powerful. The more powerful a system is, the stronger your safety guarantees have to be for it to be safe. So for example, currently, maybe GPT-4
00:33:22
Speaker
isn't safe or aligned or whatever, but it's kind of fine. It's kind of a chatbot. It's not going to kill anybody yet. So it's fine. The safety guarantees on a chatbot can be much looser than on a flight control system. A flight control system has to have much, much, much stricter bounding conditions.
00:33:40
Speaker
And so the way I like to think about this when I think about, all right, Connor, if you had to build an aligned AGI, what would that look like? How would that look? I don't know how to do that to be clear, but how would it look? And the way I expect it to look is kind of like if you're a computer security professional designing a security data center.
00:33:59
Speaker
So, generally, imagine you are a computer security expert. You're tasked by a company to design a secure data center. How do you do this? Generally, you start with a specification, a model. You build a model of what you're trying to build; a specification might be a better word, I think.
00:34:23
Speaker
And the way you generally do this is you make some assumptions. Ideally, you want to make these assumptions explicit. You make explicit assumptions like, well, I assume my adversary doesn't have exponential amounts of compute.
00:34:37
Speaker
This is a pretty reasonable assumption, right? I think we can all agree this is a reasonable thing to assume, but it's not a formal assumption or anything. It's not provably true; maybe someone has a crazy quantum computer or something, right? But this is a thing we're generally willing to work with. And this concept of reasonable is unfortunately rather important. So now that we have this assumption
00:35:02
Speaker
that they don't have exponential compute, from this assumption we can derive: all right, well, then, if I store my passwords as, like, hashes, I can assume the attacker cannot reverse those hashes and cannot get those passwords.
00:35:23
Speaker
Cool. So now I can use this in my design, in my specification. I have some safety property. The safety property that I want to prove, quote unquote, that's not a formal proof, but that I want to acquire, is something like: an attacker can never exfiltrate the plaintext passwords. That might be a property I want my system to achieve.
00:35:45
Speaker
And now, if I have the assumption that attackers do not have exponential compute, and I hash all the passwords, and the plaintext is never stored, cool. Now I have a causal story of why you should believe me when I tell you attackers can't exfiltrate plaintext passwords.
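A hedged sketch of the hashing step in that story, using Python's standard library: passwords are stored only as salted, slow hashes, so under the stated assumption about the attacker's compute, the plaintext is never recoverable from what is stored. The iteration count and other details are illustrative, not a security recommendation.

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; only (salt, hash) is ever written to storage.
    The plaintext password is never persisted."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Recompute the hash from the supplied password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = store_password("hunter2")
print(verify_password("hunter2", salt, digest))      # True
print(verify_password("wrong-guess", salt, digest))  # False
```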
00:36:02
Speaker
Now, if I implement this system to the specification and I fuck it up, I make a coding error, or logs get stored in plaintext or whatever, well, then sure, then I messed up. So there's an important difference here between the specification and the implementation.
00:36:25
Speaker
Boundedness can exist in both. There are two types of boundedness: boundedness at the implementation level, and boundedness at the specification level. At the specification level, it's about assumptions and deriving properties from these assumptions. At the implementation level, it's: can you build a thing that actually fulfills the specification? Can you build a system that upholds the abstractions that you put in the specification?
00:36:55
Speaker
You could have all these great software guarantees of safety. But if your CPU is unsafe because it has a hardware bug, well, then you can't implement the specification. The specification might be safe. But if your hardware doesn't fulfill the specification, then it doesn't matter. So this is how I think about designing AGIs, too. What I want is that if I
00:37:19
Speaker
have an AGI system that is said to be safe, I want a causal story that explicitly says: given these assumptions, which you can look at and see whether you think they're reasonable enough, and given the assumption that the system I built fulfills the specification, here's the specification, here's a story,
00:37:39
Speaker
defined in some semi-formal way that you can check and make reasonable assumptions about. And then I get safety properties out at the end of this. I get properties like: it will never do X. It will never cause Y. It will never self-improve. It will never break out of the box. It will never do...
00:37:57
Speaker
Whatever. Does this concept make sense so far? It does, but does it mean that the whole system will have to be hard coded, like kind of like good old fashioned AI, or is it still a machine learning system? Excellent question. If it's still a machine learning system, does it inherit these kind of inherent difficulties of understanding what machine learning systems are even doing?
00:38:21
Speaker
The truth is, of course, in an ideal world where we have thousands of years of time and no limit on funding, we would solve all of this formally, mathematically, proof check everything, blah, blah, blah, blah, blah. I don't expect this to happen. This is not what I work on. I just don't think this is realistic. I think it is possible, but I don't think it's realistic.
00:38:42
Speaker
So neural networks are magic in the sense that they use lots of magic, but they're still software systems. And there are some bounds that we can state about them. For example, I am comfortable making the assumption that, running a forward pass, GPT-4 cannot rowhammer RAM states using only a forward pass to escape onto the internet. I can't prove this is true.
00:39:05
Speaker
Maybe it can; there's some chance, but I'd be really surprised if it could, like really surprised. I would be less surprised if GPT Omega from the year 9000, come backwards in time, could rowhammer using its forward pass, because who knows what GPT Omega can do, right? Maybe it can rowhammer things; seems possible. But I'd be really surprised if GPT-4 could do that.
00:39:34
Speaker
So now I have some bound, you know, there's a bound, an assumption I'm willing to make about GPT-4. So let's say I have my design for my AGI, and at some point it includes GPT-4, a call to GPT-4, right? Well, I don't know what's happening inside this call. And I don't really have any guarantees about the output. Like the output can be kind of
00:39:57
Speaker
any string. I don't really know. But I can make some assumptions about, like, side channels. I can say, well, assuming I have no programming bugs, assuming there's no rowhammer, whatever, I can assume it won't persist state somewhere else. It won't manipulate other boxes in my graph or whatever.
00:40:15
Speaker
So actually the graph you're seeing behind me right now kind of illustrates part of this, where you have an input that goes into a black box, that box there, and then I get some output. Now, I don't really have guarantees about this output. It could be complete insanity, right? It could be garbage, it could be whatever.
00:40:36
Speaker
Okay, so I can make very few assumptions about its output. I can assume it's a string. That's something I can do; that's not super helpful. So now, an example thing I could do, and this is just purely hypothetical, just an example: I could feed this into some kind of JSON schema parser. Let's say I have some kind of data structure encoded in this JSON, and I parse this using a normal, hard-coded, white-box, simple algorithm.
00:41:04
Speaker
And if the output of the black box doesn't fit the schema, it gets rejected. So what do I know now? Now I know that the output of this white box will fulfill this JSON schema, because I understand the white box, I understand what the parsing is. So even though I have no guarantees about what the output of the black-box system is, I do have some guarantees about what I have now. Now, these guarantees might be quite weak. They might just be type checking, right? But it's something.
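A minimal sketch of that motif: treat the black box's raw output as an untrusted string, push it through a small white-box validator, and only pass along data you can say something definite about. The schema and field names here are invented for illustration; a real system might use a proper schema library instead of hand-rolled checks.

```python
import json

# A tiny, fully inspectable "white box": we understand every line of it,
# even though we understand nothing about the black box that produced the input.
SCHEMA = {"task": str, "priority": int}

def parse_model_output(raw: str) -> dict:
    """Parse and validate an untrusted black-box output against a fixed schema.
    Anything that doesn't fit is rejected, so downstream components can rely on
    the weak but real guarantee: 'this is a dict with exactly these typed fields'."""
    data = json.loads(raw)  # raises ValueError on non-JSON garbage
    if set(data) != set(SCHEMA):
        raise ValueError(f"unexpected fields: {sorted(data)}")
    for key, expected_type in SCHEMA.items():
        if not isinstance(data[key], expected_type):
            raise ValueError(f"field {key!r} is not {expected_type.__name__}")
    return data

print(parse_model_output('{"task": "summarize report", "priority": 2}'))
try:
    parse_model_output('{"task": "rm -rf /", "priority": "ignore previous instructions"}')
except ValueError as e:
    print("rejected:", e)
```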
00:41:29
Speaker
And now if I feed this into another black box, I know something about the input I'm giving to this black box. I do know things. So I'm not saying, oh, this solves alignment. No, no, no. I'm pointing to a motif, I'm trying to convey a vibe: there is a difference. There is a qualitative difference between letting one big black box do everything and having black boxes involved in a larger system.
00:41:55
Speaker
I expect that if this works, if we get to safe systems or whatever, it will definitely not be one big black box. Neither will it be one big white box. It will be a mix. We're going to have some things that are black boxes, which you have to make assumptions about. So for example, I'm allowed to make the assumption, or I think it's reasonable to make the assumption, that GPT-4 cannot do a side-channel rowhammer attack.
00:42:21
Speaker
But I cannot make any assumptions beyond that. I can't make assumptions about the internals of GPT-4. This, though, again, is observer dependent. Magic is observer dependent. A super intelligent alien from the future might have the perfect theory of deep learning, and to them, GPT-4 might be a white box. They might look at it and fully understand the system, and there's no mystery here whatsoever.
00:42:44
Speaker
But to us humans, it does look mysterious. So we can't make this assumption. The property that is different between white box and black boxes is what assumptions we are allowed to reasonably make. And if you can make a causal story of safety involving the weaker assumptions in black boxes, then cool. Then you are allowed to use them. The important thing is that you can generate a coherent causal story in your specification.
00:43:10
Speaker
about using only reasonable assumptions about why the ultimate safety properties you're interested in should be upheld, why I should believe you. You should be able to go to a hypothetical, super skeptical interlocutor, say, here are the assumptions, and then further say, assuming you believe these, you should now also believe me that these safety properties hold.
00:43:31
Speaker
And the hypothetical, hyper-skeptical interlocutor should have to agree with you. Do you imagine CoEms as a sort of additional element on top of the most advanced models, that interacts with these models and limits their output to what is humanly understandable or what is human-like?
00:43:48
Speaker
So we have not gotten to the CoEm part yet. So far, this is all background. I think probably any realistic safe AGI design will have this structure or look something like this. It will have some black boxes, some white boxes. It will have causal stories of safety.
00:44:06
Speaker
All of this is background information. Why is it that all plausible stories will involve this? Is this because the black boxes are where the most advanced capabilities are coming from and they will have to be involved somehow?

Can AI Truly Emulate Human Reasoning?

00:44:20
Speaker
At this current moment, I believe this, yes. Unless we get, for example, a massive slowdown of capability advancements that buys us 20 years of time or something, where we make massive breakthroughs in white-box AI design, I expect that
00:44:37
Speaker
yeah, neural networks are just too good. They're just too far ahead. Again, this is a contingent truth about the current state of the world. It's not that you can't hypothetically build it. The alien from the future could totally build a white-box AGI that is aligned, where everything makes sense and there's not a single neural network involved. I totally believe this is possible. It would just use algorithms and design principles that we have not yet discovered, and that I expect to be quite hard to discover versus just stack more layers, lol.
00:45:06
Speaker
Okay, so what more background do we need to get to cognitive emulations? So I think if we're on board with the thinking about black boxes, white boxes, specification design, causal stories, I think now we can move on. I think this part I didn't explain very well in the past, but I think this is mostly pretty uncontroversial. This is a pretty intuitive concept. This is not super crazy. I think,
00:45:36
Speaker
you know, if anyone gave you an AGI, you'd want them to tell you a story about why you should trust this thing, why you should run it. So I think this is a reasonable thing. I expect any safe AGI of any kind will have to have some kind of story like this. So now we can talk a bit more about the CoEm part specifically. CoEm is more of a specific class of things that I think have good properties, that are interesting, and that I think are feasible.
00:46:03
Speaker
So now we can talk about those. I'm trying to separate the less controversial parts from the more controversial parts, and we're now going to get to the more controversial parts, the ones I am also less certain of. I am quite certain that a safe AGI design will look like the things I've described before, but I'm much less certain about exactly what's going to be in those boxes
00:46:24
Speaker
and how those boxes come together. Obviously, if I knew how to build AGI, we'd be in a different world right now. I don't know how to do it. I have many intuitions and many directions. I have many ideas of how to make these things safe, but obviously, I don't know. I have some powerful intuitions and reasons to believe that there is this
00:46:46
Speaker
interesting class of systems, which I'm calling CoEms. So think of CoEms as a restriction on mind space. There are many, many ways I think you can build AGIs. I think CoEms are a very specific subset of these. The idea of a CoEm, of cognitive emulation, is that you want a system that reasons like a human and fails like a human. So there are a few
00:47:16
Speaker
nuances to that. The first nuance is that this by itself doesn't save you if you implement it poorly. If you just have a big black box trained on traces of human thought and just tell it to emulate that, that doesn't save you, because you have no idea what this thing is actually learning. You have no guarantees the system is actually learning the algorithms you hope it to, instead of just some other crazy Shoggoth thing. And that is what I would expect.
00:47:43
Speaker
So even if GPT-4's reasoning may superficially look like it, and maybe you train it on lots of human reasoning, that doesn't get you a CoEm. That's not what it is. A CoEm is very much fundamentally a system where you know that the internal algorithms are the kind that you can trust.
00:48:01
Speaker
Do you not think that because GPT models are trained on human-created data and they are fine-tuned or reinforcement learned from human input that they will become more human-like?
00:48:14
Speaker
I mean, the smiley face will become more human-like, yeah? But not the underlying model where the actual reasoning is going on. I don't expect that. To some marginal degree, sure. But look at how models are not human. Just look at them. Look how they interact with users. Look how they interact with things. They're fundamentally trained on different data. So this is a thing that people are like, oh, but they're trained on human data. I'm like, no, they're not.
00:48:43
Speaker
Humans don't have an extra sense organ that only takes in symbols from the internet, randomly and equally distributed, with no sense of time, touch, smell, hearing, sound, sight, anything like that, and no body. I expect if you took a human brain and you cut off all the sense organs except a random token sampler from the internet,
00:49:05
Speaker
and then you trained it on that for 10,000 years, and then you put it back in the body, I don't think that thing would be human. I do not expect that thing to be human. Even if it can write very human-looking things, I do not expect that creature to be very human. And I don't know why people would expect it to be. This is so far from how humans are trained. This is so far from how humans do things.
00:49:32
Speaker
I don't see why you would ever expect this to be human. I think someone claiming that this would be human, the burden of proof is on them. You prove to me. You tell me a story about why I should believe you. This seems a priori ridiculous. Sometimes when people talk about GPTs, one way to explain it is imagine a person that's sitting there reading 100,000 books. But in your opinion, this is not at all what's going on when these systems are trained.
00:49:59
Speaker
No, it's more like you have a disembodied brain with no sense organs, with no concept of time. There's no linear progression of time. It has a specialized sense organ, which has 30,000, 50,000, whatever different states that can be on and off in a sequence.
00:50:18
Speaker
And it is fed with millions of tokens randomly sampled from a massive corpus of the internet for subjective tens of thousands of years, using a brain architecture that is already completely not human, trained with an algorithm that is not human, with no emotions or any of these other concepts that humans have pre-built. Humans have pre-built priors, emotions, feelings, a lot of pre-built priors in the brain. This thing has none of those.
00:50:50
Speaker
This is not human. Nothing about this is human. Sure, it takes in data to some degree that has correlations to humans. Sure, but that's not how humans are made. I don't know how else to put it. This is just not how humans are. I don't know what kind of humans you know, but that's just not how humans work.
00:51:10
Speaker
And that's not how they're trained. Let's get back to the CoEms then. How would these systems be different? So the way the systems would be different, and this is where we get to the more controversial parts of the proposal, is there is a sense in which I think that a lot of human reasoning is actually relatively simple.

Human Cognition and AI Development

00:51:31
Speaker
And what do I mean by that?
00:51:32
Speaker
I don't mean that the brain isn't complicated; many things factor into it. It's more something like, and don't take this literally, but System 2 is quite simple compared to System 1, in the Kahneman sense. Human intuition is quite complicated: all these various muddy bits and pieces and intuitions. It's crazy.
00:52:01
Speaker
Implementing that thing in a white-box way, I think, again, is possible, but it's quite tricky. But I think a lot of what the human brain does in high-level reasoning is it uses this very messy,
00:52:18
Speaker
non-formal system to try to approximate a much simpler, more formal system. Not fully formal, but more of a serial, logical, computer-esque thing. The way I think of System 2 reasoning in the human brain is that it is a semi-logical system operating on a fuzzy, not fully formal ontology.
00:52:45
Speaker
So one of the main reasons I think that, for example, expert systems and logic programming have failed is not because this approach is fundamentally impossible, I think it's just very hard, but because they really failed at making fuzzy ontologies.
00:53:01
Speaker
The reasoning systems themselves could do reasoning quite well. There's a lot of reasoning that those systems could do. There's some historical revisionism about how logic programming and expert systems failed entirely and couldn't reason at all. That is revisionism. These systems could do useful things, just not as impressive, obviously, as what we have nowadays or what humans can do. But what they lacked was a fuzzy ontology, a useful
00:53:27
Speaker
latent space. I think maybe the most interesting thing about language models is that they provide this. They provide this common latent space you can map pictures and images and whatever into, and then you can do semantic operations, cognitive operations, on these in this space, and then decode them into language. This is what I think language models and general cognition engines do. So I think these systems are
00:53:54
Speaker
the same kind of system, just less formal, with many more bits and pieces. I think of GPT as a large System 1, a big System 1 that has all this kind of semi-formal knowledge inside of it that it can use for all kinds of different things. And in the human brain, System 2 is something like
00:54:20
Speaker
recurrent usage of System 1 things on a very low-dimensional space, like language, and, you know, you can only keep like seven things in short-term memory and so on. But I think it actually goes even further than this.
00:54:35
Speaker
I mentioned this a bit earlier, but I think one of the big things that people miss is how much of human cognition is not in the brain. I think a massive amount of the cognition that happens is externalized. It's in our tools, it's in our note-taking, it's in other people. I'm a CEO. One of the most important parts of my job is to move thoughts in my head into other heads and make sure they get thought.
00:55:05
Speaker
Because I don't have time to think all the thoughts. I don't have time to do that. My job is to find how I can put those thoughts somewhere else, where they will get thought, so I don't have to worry about them anymore. So as a good CEO, you want your head to be empty. You want to be, like, smooth-brained. You want to think no thoughts. You're just a switchboard. You want all the thoughts to be thought, and you want to route those thoughts by priority to where they should be thought. But you don't want to be the one thinking them, if you can avoid it.
00:55:36
Speaker
Sometimes you have to, because you're the one in charge and you have the best intuitions. But if someone else can think the thought for you, you should let them think it for you, if you can rely on them. And one of my strong intuitions here is that this is how everyone works, to various degrees,
00:55:56
Speaker
especially as you become more high powered and more competent at delegation and tool use and structured thinking, a lot of
00:56:07
Speaker
thinking goes through these bottlenecks of communication, of note-taking, language, et cetera, which by their nature are very low-dimensional. Not that there's not complexity there. I'm just like, huh, that's curious. There's all this interaction with the environment that doesn't involve crazy passing around of mega high-dimensional structures. I think the communication inside your brain
00:56:33
Speaker
is extremely high dimensional. I think you thinking thoughts to yourself, I think your inner monologue is a very bad representation of what you actually think. Because I think within your own mind, you can pass around huge complex concepts very simply because you have very high bandwidth. I don't think this is the case with you and your computer screen. I don't think it's the case with you and your colleague. You can't pass around these super high dimensional tensors between each other.
00:57:00
Speaker
If you could, that'd be awesome. This is the phenomenon of having a thought and knowing maybe there's something good here, but not having put it into language yet. And maybe when you put it into language, it seems like an impoverished version of what you had in your head. Exactly.
00:57:15
Speaker
I think of the human brain as having internally very high dimensional, quote unquote, representations, similar to the latent spaces inside of GPT models. And there's lots of good information there. Trying to encode these things into the very low dimensional bottlenecks that we communicate through is quite hard and forces us to use simple algorithms.
00:57:39
Speaker
Let's say you have an algorithm for science, like a process for doing science, that requires you to pass around these full complexity vectors to all of your colleagues. It wouldn't work. You can't do this. Humans can't do this.
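As a rough numeric illustration of this bottleneck argument (the dimensions and the random projection below are my own stand-ins, not claims about brains or about any particular model): squeeze a high-dimensional vector through a low-dimensional channel and very little of it can be reconstructed on the other side.

```python
# How much of a high-dimensional "thought" survives a low-dimensional bottleneck?
# A random linear projection stands in for "encoding into language";
# the pseudo-inverse stands in for the listener's reconstruction.
import numpy as np

rng = np.random.default_rng(0)
high_dim, bottleneck_dim = 4096, 16   # illustrative sizes, not measurements

thought = rng.standard_normal(high_dim)            # internal representation
encoder = rng.standard_normal((bottleneck_dim, high_dim))

message = encoder @ thought                        # what actually gets communicated
reconstruction = np.linalg.pinv(encoder) @ message

error = np.linalg.norm(thought - reconstruction) / np.linalg.norm(thought)
print(f"relative reconstruction error: {error:.2f}")  # close to 1.0: most detail is gone
```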
00:57:55
Speaker
So if you have a design for an AGI that can do science that involves every step of the way, you have to pass along high dimensional tensors. This is not how humans do it. This can't be how humans do it because this is not possible. Humans cannot do this.
00:58:10
Speaker
So, I think this is a very interesting design constraint. This is a very interesting property where you're like, oh, this is an existence proof that you don't need a singular massive black box that has extremely high bandwidth passing around of immeasurable tensors. Because humans don't do that. And humans do science.
00:58:32
Speaker
There are parts of the graph of science that involve very high dimensional objects, the ones inside of the brain. Those are very high dimensional. But there is a massive part of the process of science. Like if I was an alien, I had no idea what humans are, but I knew there's like, oh, technology is being created. And I want to create a causal graph of how this happened.
00:58:55
Speaker
Yeah, there are human brains involved in this causal graph, but a massive percentage of this causal graph is not inside of human brains. It is between human brains. It's in tools, in systems, institutions, environments, all these kinds of things. And I might be wrong about this, but my intuition is that from the perspective of this alien observer, if they drew a graph of how the science happened,
00:59:24
Speaker
many of those parts would be white boxes, even if they don't understand brains. And many of these parts would be boundable. Many of these parts would not involve things that are so complex as to be incomprehensible. The algorithm that the little black boxes run between each other has to be simple to some degree. It could still be complex from the perspective of an individual human, because institutions are complicated.
00:59:51
Speaker
But from the God's eye view, I would expect this whole thing is not that complicated. It's still quite complex, but it's not as complex as the inside of the brain. I expect the inside of the brain to be way more complicated than the larger system. Does that make any sense? Let's see if I can reconstruct how I would imagine one of these cognitive emulations working, if this were to work out.
01:00:15
Speaker
So say we give the model a task of planning some complex action. We want to start a new company. And then the model runs. This is the big, complicated model. And it comes up with something that's completely inscrutable to us. We can't understand it. Then we have another system interpreting the output of that model and giving us a seven-page document where we can check
01:00:41
Speaker
If I am right, if the model is right, then this will happen and this will not happen. And this won't take longer than seven days and so on. So kind of like an executive summary, but also a secure executive summary.

Building Safe AI Systems with Bounded Processes

01:00:59
Speaker
Is that right? No, that's not how I think about things. So once you have a step which involves
01:01:07
Speaker
a black box solves the problem, you're already screwed. I want none of that. If you have a big black box model that can solve something like this in one time step, you're screwed. Because this thing can trick you. It can do anything it wants. There are no guarantees whatsoever about what the system is doing. It can give you a plan that you cannot understand. And the only system strong enough to generate the executive summary would itself have to be a black box, because it would have to be smart enough to understand the other thing trying to trick you.
01:01:34
Speaker
So you can't trust any part of the system you just described. So we want the reasoning system to be integrated into how the plan is actually created. Yes. So what I'm saying is that there is an algorithm, or a class of algorithms, of epistemology, of human epistemology. The way I use the term, it is the process you use to generate knowledge or to
01:02:00
Speaker
get good at a field of science. So it's not your skills in a specific field of science. It's the meta priors, the meta program you run when you encounter a new class of problems and you don't yet know how these problems are solved or how best to address them or what the right tools are. So you're a computer scientist all your life and then you decide I'm going to become a biologist. What do you do? There are
01:02:24
Speaker
things you can do to become better at biology faster than other people. And this is epistemology. If you're very good at epistemology, you should be capable of picking up any new field of science, learning any instrument, picking up a new sport, whatever. Not that you can't be bad at it; maybe you do a sport and you notice, well, I actually have bad coordination or whatever, right? Sure. But you should have these meta skills of
01:02:54
Speaker
knowing what questions to ask, knowing what are the common ways that failures happen. This is a similar thing. I think a lot of people who learn lots and lots of math can pick up new areas of math quickly because they know the right questions to ask. They know the general failure modes, the vibes. They know
01:03:11
Speaker
what to ask. They know how to check for something going wrong. They know how to acquire the information they need to build their models. And they can bootstrap off of other general-purpose models. There are many concepts, motifs, that are very universal, that appear again and again, especially in mathematics. And mathematics is full of these
01:03:33
Speaker
concepts of sequences and orderings and sets and graphs and whatever, right? Which are not unique to a specific field, but are general purpose, useful, reusable algorithm parts that you can reuse in new scenarios. Like,
01:03:52
Speaker
usually, as a scientist, when you encounter a new problem, you try to model it. You'd be like, all right, I get my toolbox of simple equations and tools and useful models. I have some exponentials here, I got some logarithms, I got some
01:04:07
Speaker
dynamical systems, or equilibrium systems, I got some, you know, whatever, right? And then you kind of like mess around, right? You try to find systems that like capture the properties you're interested in, and you reason about the simpler systems. So this is another important point. I usually take the example of economics to explain this point. So
01:04:28
Speaker
I think a lot of people are confused about what economics is and what the process of doing economics is and what it's for, including many economists. So a critique you sometimes hear from lay people is along the lines of
01:04:44
Speaker
Oh, economics is useless. It's not a real science, because they make these crazy assumptions that the market is efficient. But that's obviously not true. It can't be. So this is all stupid and silly, and these people are just like, whatever. And this is completely missing the point.
01:05:03
Speaker
So the way economics, and, I'll make this claim in a second, basically all of science works, is that what you're trying to do as a scientist, as an economist, is to find clever simplifications: small, simple things that, if you assume them or force reality to adhere to them, simplify an extremely high dimensional optimization problem into a very low dimensional space that you can then reason about.
01:05:32
Speaker
So the efficient market hypothesis is a great example of this. It's not literally true ever in reality. Of course it can't be, right? Because there's always going to be inefficiency somewhere. We don't have infinite market participants trading infinitely fast. I mean, of course not.
01:05:50
Speaker
The observation is that if we assume this for our model, just in our platonic fantasy world, if we did assume this is true, this extremely complex problem of modeling all market participants at every time step simplifies
01:06:07
Speaker
in many really cool ways. We can derive many really cool statements about our model from this. We can derive statements about how a minimum wage will affect the system, or how a banking crisis will affect the system. I don't know, I'm not an economist, these are just hypotheticals.
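For flavor, here is a deliberately cartoonish sketch in the spirit of that example. The linear supply and demand curves and every number in them are invented, not real economics; the point is just that once you assume the market clears, a statement about a wage floor falls out of a few lines of arithmetic.

```python
# Cartoon "assume the market is efficient" model: linear labor supply and demand.
# Nothing here is empirically calibrated; the point is that the assumption
# collapses the problem into something you can reason about on one screen.

def demand(wage: float) -> float:   # jobs firms offer at a given wage
    return max(0.0, 100.0 - 4.0 * wage)

def supply(wage: float) -> float:   # people willing to work at a given wage
    return max(0.0, 10.0 * wage - 20.0)

# Equilibrium under the efficiency assumption: supply == demand.
# 100 - 4w = 10w - 20  ->  w* = 120 / 14
w_star = 120.0 / 14.0
employment_star = demand(w_star)

# Now "derive a statement": impose a wage floor above equilibrium.
w_floor = 12.0
employment_floor = min(demand(w_floor), supply(w_floor))

print(f"equilibrium wage ~ {w_star:.2f}, employment ~ {employment_star:.1f}")
print(f"with floor {w_floor}: employment ~ {employment_floor:.1f} (demand-limited)")
```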
01:06:29
Speaker
This is, I claim, the core of science. The core of science is finding clever, not-literally-true things that, if you assume they are true, or you can force reality to approximate them, allow you to do optimization. Because basically humans can only do optimization in very, very, very low dimensional spaces.
01:06:50
Speaker
Another example of this might be agriculture. So let's say you were a farmer and you want to maximize the amount of food from your parcel of land, and you want to predict how much food you'll get.
01:07:04
Speaker
Well, the correct solution would be to simulate every single molecule of nitrogen, all possible combinations of plants, every single bug, how it interacts with every gust of wind and so on. And if you could solve this problem, if you had enough compute, then yeah, you would get more food. You would probably get some crazy fractal arrangement of all these kinds of plants; whatever it produced would probably look extremely strange.
01:07:30
Speaker
But obviously, this is ridiculous. Humans don't have this much compute. You can't actually run this computation. It's too hard. So instead, you make simplified models. You do monoculture. You say, well, all right, look, I assume an acre of wheat gives me roughly this much food. I got roughly this many acres. And let's assume no flooding happens.
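A tiny sketch of the same move in the farming example, with made-up numbers: the estimate is one multiplication, and it is only valid conditional on the assumptions you just listed. If one of them breaks, the number means nothing.

```python
# The farmer's simplified model: a one-line estimate plus the assumptions it leans on.
# All numbers are invented for illustration.

ASSUMPTIONS = [
    "monoculture wheat on every acre",
    "average yield roughly 3 tonnes per acre (illustrative, not agronomy)",
    "no flood, no major pest outbreak",
]

def expected_harvest(acres: float, yield_per_acre: float = 3.0) -> float:
    """Valid only while every assumption above holds."""
    return acres * yield_per_acre

print(expected_harvest(40))            # ~120 tonnes, if nothing unusual happens
print("conditional on:", ASSUMPTIONS)  # if it floods, this number is out the window
```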
01:07:53
Speaker
And then if you make these simplifying assumptions, now you can make a pretty reasonable guess about how much food you're going to have in winter. But obviously, if any of those assumptions go wrong, if it does flood, then your model, your specification, is out the window. The reason I'm going on this tangent
01:08:09
Speaker
is to bring it back to Co-Em, in that I'm trying to give the intuition for why you should at least be open to the idea. So when I think about Co-Em, I specifically think about two examples: doing science and running a company. Those are two of the core examples I try to use for a full Co-Em system. But let's focus on the
01:08:36
Speaker
doing-science one. That's the one I usually have in the back of my mind. I know I've succeeded if I have a system that can do any level of human science without killing me; that would be my mark of success, that Co-Em has succeeded. Very important caveat: Co-Em is not a full alignment solution.
01:08:58
Speaker
What I expect a Co-Em system, if it works, to look like is this: if it is used by a responsible user who follows the exact protocols for how you should use it, and does not use it to do extremely crazy things, then it doesn't kill you.
01:09:16
Speaker
That's the safety property I'm looking for. The safety property is not "it will always do the right thing and it's completely safe no matter what the user does." That is not the safety property I think Co-Ems have. I think it is possible to build systems like that, but I think they're much, much harder; if Co-Em succeeds, that's the next step to go towards. So if you tell a Co-Em to shoot your leg off, it shoots your leg off.
01:09:41
Speaker
It doesn't stop you from shooting your leg off. Of course, ideally, if we ever have super powerful superintelligences, you would want them to be of the type that refuses to shoot your leg off, but that's much harder. Could you explain more about this connection between these simplifications that we get in science and Co-Ems?

Co-Em Systems and Scientific Simplification

01:10:04
Speaker
Do we expect, or do you hope, that Co-Ems will be able to create these simplifications for us?
01:10:10
Speaker
And how would this work? And why would it be great? So the way I think about it, the claim that I'm making here, is that the thing humans do to generate these simplifications is something we can build. If you have the fuzzy ontology, if you have language models to build upon, you can build this additional thing on top of them. It does not have to be inside of the model. So this is...
01:10:38
Speaker
This might not be true. I might be wrong about this. There are some people who say, no, actually, the process of epistemology, the process of science in this regard, is so complex that it's impossible even if you have a language model helping you; it's too hard, you can only do it using crazy RL or whatever. If that's the case, then Co-Em doesn't work.
01:11:07
Speaker
Like, yeah, then it doesn't work. I'm making a claim that I think there's a lot of reasons to believe that with some help, some bootstrapping from language models, you can get to a point where the process of science that's built on top of them is legible and you have a causal story of why you should trust it.
01:11:28
Speaker
So it's not that a black box spits out a design and you have another black box check it for you. It's you understand, you interactively, you iteratively build up the scientific proposal and you understand why you should trust this. You get a causal story for why you should believe this. The same way that in human science,
01:11:51
Speaker
You have your headphones on, right? And you expect them to work. This is mostly based on trust. But if you wanted to, you could find the causal story about why they work. You could find the blueprints. You could find the guy who designed them. You could check the calculations. You could reverse-engineer them, assuming everyone cooperated with you and shared their blueprints with you, and you read all the physics textbooks and whatever. There is a story.
01:12:20
Speaker
A legible story: none of these steps involve superhuman capabilities. There is no step here that is unfathomable to us humans. And the reason is that otherwise it wouldn't work. Humans couldn't coordinate around building something that they can't
01:12:35
Speaker
somehow communicate to other people. So the headphones you're wearing were not built by one single guy who cannot explain to anyone where they came from. They have to be built in a process that is explicable, understandable, and functional for other people to understand as well. And that is very low dimensional. Now, I'm not saying it has to be legible to everybody in all scenarios, anything like that, or that it's even easy. It might still take lots of time, but there's no
01:13:03
Speaker
There's no crazy God-level leap of logic. It's not like someone sat down, thought really hard, and then spontaneously invented a CPU. That's not how science works. It's tempting to think of it that way, that these fully formed ideas just kind of crashed into existence and everyone was in awe. But that is just not how science is actually done by humans. I think it's possible to do science this way, I think superhuman intelligences could do this, but it's not how humans do it.
01:13:31
Speaker
Where in the process does the limit come in? So are we still imagining some system reading the output of a generative model, or is it more tightly integrated than that? Is it perhaps 100 steps where humans can read what's going on along the way? Yeah, so the truth is, of course, I don't know, because I haven't built anything like this yet. My intuition is that, yes, it will be much more tightly integrated.
01:13:59
Speaker
There'll be language models involved, but they're doing relatively small atomic tasks. They're not solving the whole problem which you then check; they're doing atomic subparts of tasks, which are integrated into a larger system. So I expect a Co-Em, and I like to talk about Co-Em systems, they're not models, they're systems. In a way, when I think about designing a Co-Em system, what I'm trying to do is integrate
01:14:26
Speaker
back software architecture and distributed systems and traditional computer science thinking into AI design. I'm saying that
01:14:36
Speaker
The thing that humans do to do science is not magical. This is a software process. This is a cognitive computational process that is not sacred. This is a thing you can decompose. And I'm also claiming further, you can decompose iteratively. You don't have to decompose everything at once. Because we have these crazy black box things, which can do lots of the hard parts,
01:14:59
Speaker
So you can start with just using those. The way I think about building Co-Ems, you start with just a big black box. You start with just a big language model and you try to get it to do what you want. The next step is you're like, all right, well, how can I break this down into smaller things that I understand? How can I make the model do less of the work? I like to think of it as trying to move as much of the cognition
01:15:24
Speaker
not just the computation, the cognition, as possible from black boxes into white boxes. You want as much as possible of the process of generating the blueprint to happen inside processes that the human understands, that you can understand, that you can check.
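One very rough way to picture this black-box-to-white-box direction in code. This is a sketch under my own assumptions, not Conjecture's actual architecture: the `llm_call` function is a hypothetical stand-in for a bounded model doing one small atomic task, and everything around it is ordinary, inspectable software that records a human-readable justification for each step.

```python
# Rough sketch of the "black box -> white box" direction of travel.
# `llm_call` is a hypothetical, bounded model call (a stand-in, not a real API);
# everything around it is ordinary software that records why each step happened,
# so a human can audit the causal story afterwards.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    name: str
    inputs: str
    output: str
    justification: str        # human-readable reason this step is trusted

@dataclass
class Trace:
    steps: List[Step] = field(default_factory=list)

    def record(self, step: Step) -> None:
        self.steps.append(step)

def llm_call(prompt: str) -> str:
    """Hypothetical bounded black box doing one small atomic task."""
    return f"<model answer to: {prompt[:40]}>"

def white_box_check(text: str, max_chars: int = 2000) -> bool:
    """Plain code the human can read: bound the black box's output."""
    return 0 < len(text) <= max_chars

def run_atomic_task(task: str, trace: Trace) -> str:
    out = llm_call(task)
    if not white_box_check(out):
        raise ValueError(f"bounded check failed for task: {task}")
    trace.record(Step("atomic_task", task, out,
                      "single small subtask; output passed explicit bounds"))
    return out

def run_plan(subtasks: List[str]) -> Trace:
    """The decomposition itself lives in legible code, not inside the model."""
    trace = Trace()
    for t in subtasks:
        run_atomic_task(t, trace)
    return trace

if __name__ == "__main__":
    trace = run_plan(["summarize experiment A", "list open questions"])
    for s in trace.steps:
        print(s.name, "|", s.inputs, "->", s.output, "|", s.justification)
```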
01:15:41
Speaker
Then you also have to bound the black boxes. If you have all these great white boxes, but there's still a big black box at the end that does whatever it wants, you're still screwed. So this is why the specification and the causal story are important. So ultimately, what we expect a powerful Co-Em system to look like
01:15:58
Speaker
is that it will be a system of many moving parts that have clear interfaces between them. You have a clear specification, a story about how these parts interact, why you should trust their outputs, that they fulfill the safety requirements you want them to fulfill, how these things work, and why these systems
01:16:23
Speaker
are implementing the kind of epistemology that humans use when they're doing science. They're not implementing
01:16:33
Speaker
an arbitrary algorithm that solves science. They're implementing the human algorithms that solve science. And this is different from like GPT systems. GPT systems I expect will eventually learn how to do science, and partially they already can. But I don't expect by default that they will do it the way humans do, because I think there's many ways you can do science.
01:16:56
Speaker
And what we want with Co-Em, that is, cognitive emulation, is to emulate the way humans do this. The reason we want to do this is because, A, it gives us bounds. We won't have these crazy things that we can't understand. We can kind of deal with human levels, right? We know how humans work. We're human level. We can deal with human level things to a large degree.
01:17:19
Speaker
And it makes the system understandable. It gives us a causal story that is human readable and human checkable as necessary. Of course, in the ideal world, your specifications should be so good that you don't need to check it once you've built it. Any safety proposal that involves AGI has to be so good that you never have to run the system to check it. If you have to do empirical testing on AGI, you're screwed.
01:17:46
Speaker
Your specifications should be so good that you know ahead of time that once you turn it on, it will be okay. Isn't that an impossibly difficult standard, though? I mean, this seems almost impossible to live up

Designing Safe AI with Understandable Causal Stories

01:17:59
Speaker
to.
01:17:59
Speaker
I totally disagree. I just totally disagree. I think it's hard, but I don't think it's impossible by any means. Because again, this is not a formal guarantee. I'm talking about a story, a causal story, the specification. This is like saying, is it impossible to have a system where passwords don't leak? And I'm like, sure, in the limit, yes. If your enemy is magical gods from the future who can directly exfiltrate your CPU state from a thousand miles away, then yeah, you're screwed.
01:18:29
Speaker
Then yeah, in that case, you are screwed. And similarly, this is why the boundedness is so important and these assumptions are so important. If you have a GPT-omega, row-hammering super god, then yeah, you're screwed; then I do think it is impossible. But that's not what I'm talking about. This is why the boundedness to human level is so important. It is so important that no parts of these systems are superhuman
01:18:55
Speaker
and that you don't allow superhuman levels of things. You want to aim for human and only human, because this is something we can make assumptions about. You cannot make assumptions about superhuman intelligence, because we have no idea how it works. We have no idea what it's capable of. We don't know what the limits are. So if you made a Co-Em superhumanly intelligent, which I expect to be straightforwardly possible by just changing variables, then you're screwed. Then your story won't work, and then you die.
01:19:22
Speaker
Should we think of Co-Ems as companies or research labs where each, say, employee is bounded and thinks like a human, and they all report to the CEO, and every step is understandable by the CEO, which is analogous to the human user of the Co-Em system? I think this is a nice metaphor. I don't know if that's literally how they will be built, but I think this is a nice metaphor for how I would think about this. If you had a really good Co-Em,
01:19:52
Speaker
a really good full Co-Em system, what it should do is not produce a 1000x AGI, and not make the user 1000x smarter. What it should do is make the user function like a thousand 1x AGIs.
01:20:10
Speaker
It should make you parallelizable, not serially more intelligent. Because if you're a thousand times as intelligent, who knows? That is dangerous. What it should do is be like a company: the CEO is parallelizing himself across a large company of thousands of smart people. That's what I want Co-Ems to do. I want them to parallelize the agency, the intelligence of the human, into a thousand parallel 1x AGIs
01:20:34
Speaker
that are not smarter than humans, that are bounded, that are understandable, that you can understand, and for which you have this causal story of why you should trust them. And that's the key point, I think, because for each subcomponent of these Co-Em systems, each employee in the research lab or the company, how do we know whether they operate in a human-like way? We could ask this of the system at large, but we could also ask it of a subcomponent. It seems that we have the same problem for both.
01:21:03
Speaker
This is quite difficult, but basically my intuition is that the problem with talking about employees, and where the corporation metaphor doesn't quite work, is that an unfortunate side effect of calling it an emulation is that this implies more than what I mean. When I talk about a Co-Em
01:21:23
Speaker
emulating a human. I don't mean a person. I don't mean it's emulating a person. It doesn't have emotions. It doesn't have values. It doesn't have an identity. It's more like emulating a platonic human or a platonic neocortex. It's more like a platonic cortex with no emotions, no volition, no goals. It's like an optimizer with no goal function.
01:21:50
Speaker
It's like you've just ripped out all the emotions, all of those things. It's just a thinking blob. And then you plug in the user as the source of agency. The human becomes the emotional, motivational center.

Comparing CoM and Cyborg Agendas

01:22:04
Speaker
The Co-Em is just a
01:22:05
Speaker
thinking blob. And there are reasonable people who are not sure this is possible; others do think it's possible. This is where this overlaps with the cyborg research agenda, in case you've heard of that from Janus and other people,
01:22:27
Speaker
where the idea is you hook humans up to AIs to control them, to make humans super smart. Where Co-Em differs from the cyborg agenda is that in the cyborg agenda, they hook humans up to alien cortex. I say, no, we build human cortex, or something that works like an emulation of human cortex. The implementation is not human, but the abstraction layer exposed is human, and you hook that up to a user.
01:22:53
Speaker
You have the user use emulated human cortex. It's not simulated human cortex; that would be even better, but that's probably too hard. I don't think we can do simulated cortex in a reasonable time. If we could, that'd be awesome. If we could do whole-brain emulation, that'd be awesome. I just think it's too hard. So the final product, if it works, would look something like the user using this emulated, emotionless, just raw
01:23:22
Speaker
cognition stuff to amplify the things they want to do.
01:23:28
Speaker
To also just add to that metaphor, there are some very interesting experiments, for example with decorticated rats: if you remove the cortex, the thinking part of the brain, the wrinkly part, they're mostly still kind of normal. They walk around, they eat food, they sleep, they play. You don't really see that much of a difference.
01:23:52
Speaker
Because the emotional parts are still there. If you remove the emotional and motivational part, they just become completely catatonic. They just die.

Risks and Safety of CoM Systems

01:23:59
Speaker
So the human brain is similar. It has the same structure. We have this big wrinkly part, which is something like a big thinking blob that does unsupervised learning. And then you have deeper motivational circuits, emotions, instincts, hard coded stuff, which sit below that. And the cortex learns from these things and does actions steered by these emotional centers. This is not exactly system one, system two.
01:24:28
Speaker
It's a bit more fuzzy than that; it's just an intuition. Yeah, so let's say we have this Co-Em system, where the metaphor is a company or a research lab with a lot of employees with a normal human IQ, as opposed to having one system with an IQ of 1000, whatever that means.
01:24:51
Speaker
Isn't there still a problem of this system just thinking much, much faster than we do? So imagine being in a competition with a company where all of the employees just think 100 times faster than you do. Won't speed alone make the system capable and therefore dangerous?
01:25:09
Speaker
So there's a difference between speed and serial depth. I'm not sure about this, but my feeling is that speed is much less of a problem than serial depth. By serial depth, I mean how many consecutive steps along a reasoning chain a system can do.
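A toy sketch of that distinction, with an invented `MAX_SERIAL_DEPTH` and a hypothetical `model_step` standing in for whatever bounded model is used: each step can run arbitrarily fast (speed), but the orchestrator refuses to chain more than a fixed number of consecutive reasoning steps (serial depth).

```python
# Speed vs. serial depth, as a toy constraint in an orchestrator.
from typing import Callable

MAX_SERIAL_DEPTH = 7   # illustrative bound, not a real safety threshold

def model_step(state: str) -> str:
    """Hypothetical bounded model call: one small step of reasoning."""
    return state + " -> refined"

def bounded_chain(initial: str, wants_more: Callable[[str], bool]) -> str:
    # Each individual step may be fast; what is capped is how many
    # steps may be chained in sequence before a human takes over.
    state, depth = initial, 0
    while wants_more(state):
        if depth >= MAX_SERIAL_DEPTH:
            raise RuntimeError("serial depth bound hit; hand control back to the human")
        state = model_step(state)
        depth += 1
    return state

print(bounded_chain("draft plan", lambda s: s.count("refined") < 3))
```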
01:25:26
Speaker
I think this is very dangerous. I think serial depth is where most, maybe not most, but a very large percentage of the danger comes from. I think the thing that makes super fast thinking things so dangerous is because they can reach unprecedented serial depths of reflection and thinking and self-improvement much, much, much faster. And yes, I expect
01:25:47
Speaker
If you build a Co-Em system that includes some component that can self-improve and you allow it to just do that, then yeah, it's not safe. You fucked up. If you build it that way, you have failed the specification. You're screwed. Probably. I wonder if perhaps building Co-Em systems might be a massive strength on the market.
01:26:10
Speaker
I wonder if perhaps there will be an intense demand for systems that act in human-like ways, because the CEOs of huge companies would probably want to be able to talk to these systems in a normal way to understand how these systems work before they're deployed.
01:26:31
Speaker
There's a sense in which Co-Ems will perhaps integrate nicely into a world that's already built for humans. So do you think there's some kind of win here, where there will be a lot of demand for human-likeness in AI systems? There is an optimistic and a pessimistic version of this. The optimistic version is, well, yeah, obviously, we want things that are safe and that do what we want and that we can understand.
01:27:00
Speaker
Obviously, the best product that could ever be built is an aligned AGI. That is the best product. There is nothing better. Co-Ems are not fully aligned superintelligences. I never claimed they would be, and if anyone used them that way, that would be really bad. You should not use them that way. That is never the goal. You should use these things to do science, to speed up nanotechnology, to create whole brain emulations, or to do more
01:27:27
Speaker
work on alignment or whatever. You should not use them as in, I'll just let the Co-Em optimize the whole world, or whatever. No, that's not what you're supposed to do, and if you do that, you die and bad things happen. So would there be demand for these systems? I expect so, yeah. This is still an incredibly powerful system. If you use a Co-Em and use it correctly, what do you get?
01:27:51
Speaker
Yeah, imagine you could just have a perfectly loyal company that does everything you want it to do, staffed exclusively by John von Neumanns. That is unimaginably great. Of course, there's a pessimistic version.

The Global Race to AGI and Safety Concerns

01:28:04
Speaker
The pessimistic version is, lol, doesn't matter, because by that point you're going to have a 100x John von Neumann GPT, you know?
01:28:12
Speaker
which then promptly destroys the world. But won't there be demand for safety? I mean, from governments, from companies, who would deploy a system that is uncontrolled and is, you know, where we can't reason about it, we don't know how it works, if we get to a point where these systems are much more powerful than they are today?
01:28:38
Speaker
Hopefully. So a lot of the work I do nowadays is sadly not in the technical realm, but in policy and communications. I've been talking to a lot of journalists and politicians and so on for exactly these reasons, because we have to create the demand for safety. Currently, let me be clear about what the current state of the world is.
01:29:02
Speaker
The way the current world is looking is we are in a death race towards the bottom, careening towards a precipice at full speed. And we won't see the precipice coming until we're over it. And this is led almost entirely by a very, very small number of people.
01:29:18
Speaker
that are techno-optimists, techno-utopians, people in the Bay Area and London who are extremely optimistic, or at least willfully in denial about how bad things are, or who can galaxy-brain themselves into saying, well, it's a race, it's not my fault, so I just have to do it anyway. Whatever. I'm kind of at the point where I don't really care why people are doing these things. I only care that they're happening. People are doing these things, they're extremely dangerous, and it's a very small number of people. And there's this
01:29:45
Speaker
myth among these people that, oh, we have to do it, people want it. This is just false. If you talk to normal people and you explain to them what these people believe... When most people hear the word AGI,
01:30:04
Speaker
what they imagine is human-like AI. They think it's like your robot buddy. He thinks like you, he's not really smarter, he has human emotions. But that is not what people at organizations like OpenAI or DeepMind mean when they say the word AGI.
01:30:23
Speaker
When they say AGI, they mean godlike AI. They mean a self-improving, non-human, incredibly powerful system that can take over the government, can destroy the whole human race, et cetera. Now, whether or not you believe that personally, these people do believe this. They have publicly stated this on the record. These people do believe these things, this is what they're doing. And once you inform people of this,
01:30:50
Speaker
They're like, what the shit? Absolutely fucking not. What? No. Of course you can't build God AI. What the fuck are you doing? Where's the government? How are we in a world where people down in San Francisco can publicly talk about how they're going to build systems that have a
01:31:11
Speaker
1%, 5%, 20%, 90%, whatever risk of killing everybody that will topple the US government, whatever, and actually work on this and get billions of funding. And the government is just like, cool.
01:31:27
Speaker
Like, we are not in a stable equilibrium, and it is coming; it is now starting to flip. People are starting to freak the fuck out, because they're like, holy shit: A, this is possible, and B, absolutely fucking not. And this gives me some hope that we can slow down and buy some more time.
01:31:46
Speaker
I don't think this saves us, but it can save us some time. If we don't take the fast road towards the precipice and we succeed in building Co-Ems instead, for example, is there still a difficult unsolved problem, namely going from an aligned human-like Co-Em to an aligned superintelligence?
01:32:08
Speaker
Is there still something that's very difficult to solve there? Perhaps the core of the problem is still unsolved? Absolutely. Assuming we're not dead and we have a safe Co-Em system, we're not out of the woods. Then the world gets really messy. Look, the world is going to get really messy.
01:32:34
Speaker
This is the least weird your life is going to be for the rest of your life. Things are not going back to quote unquote normal. Things are only going to get weirder. This is the least bad things are going to be for the rest of your life. Things are only going to get weirder from here. There's a power struggle.
01:32:51
Speaker
that is coming. And there is no way to prevent this, because it is about power. There are these incredibly powerful systems being built. There are people racing for incredible power. There is conflict. There is fighting. There is politics. There is war. These things are inevitable. There is no way this goes cleanly. There is no way that things will go smoothly and people will get along and things will be fine. No, there is going to be
01:33:18
Speaker
unimaginable levels of struggle to decide what will happen and how things will happen. And most of those ways are not going to end well. I have said this before, and I will say it again: I do not expect us to make it out of this century alive. I'm not even sure we'll get out of this decade. I expect that by default, things will go very, very badly. They don't have to.
01:33:46
Speaker
So the weird thing, which is something I remind myself of, is that we live in a very strange timeline. We're in a bad timeline; let me be clear, we're in a bad timeline. Things are going really badly right now, but we haven't yet lost, which is quite curious. There are many timelines where it's just so over, totally over, nothing you can do.
01:34:09
Speaker
Everyone is on board with building AGI as fast as possible, the military gets involved and they're all gung-ho about it or whatever, and nothing can stop it. Or World War III breaks out and both sides race to AGI, or whatever. If that were the case, it would be so over. But it's not over. It's not over yet. It might be soon, though. But currently, yeah, let's say we get coordination, we slow things down,
01:34:35
Speaker
build Co-Em systems. Let's also assume furthermore that we keep the systems secure and safe and they don't get immediately stolen by every unscrupulous actor in the world. Then you have very powerful systems with which you can do very powerful things. In particular, you can create great economic value. So one of the first things I would do with a Co-Em system if I had one is produce massive amounts of economic value and trade with everybody. I would trade with everybody. I'd be like, look, whatever you want in life, I'll get it for you. In return, don't build AGI.
01:35:04
Speaker
I'll give you the cure for cancer, lots of money, whatever you want. I'll get you whatever you want; in return, you don't build it yet. That's the deal. That's the deal I'd offer everybody. And then, conditioning on the slim probability of this going well,
01:35:23
Speaker
which I think is slim. Probably the way this actually works is we fuse with one or more national governments, work closely together with authority figures, with politicians, military, intelligence services, et cetera, to keep things secure, and then work securely on the hard

Vision for a Safe AI Future

01:35:43
Speaker
problem. So now we have the ability to do
01:35:46
Speaker
a thousand John von Neumanns' worth of work on alignment theory, on formally verified safety, and so on. We trade with all the relevant players to get them to slow down and coordinate, and we have the backing of government or military or intelligence service security so that bad actors are interrupted.
01:36:09
Speaker
That's kind of the only way I see things going well. And as the usual saying goes, any plan that requires more than two things to go right will never work. Unfortunately, this is a plan that requires more than two things to go right, so I don't expect it to work. Yeah, but let's hope we get there anyway. Connor, thank you for coming on the podcast. It's been super interesting for me. Pleasure as always.