
Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education

Future of Life Institute Podcast
Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev Social Media Links: ➡️ WEBSITE: https://futureoflife.org ➡️ TWITTER: https://twitter.com/FLIxrisk ➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/ ➡️ META: https://www.facebook.com/futureoflifeinstitute ➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
Transcript

Introduction and Lightning Round Setup

00:00:01
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker. On this episode, I talk with Connor Leahy. Connor is the CEO of Conjecture, which is an organization dedicated to scalable AI alignment.
00:00:16
Speaker
This episode is a lightning round. I ask Connor a lot of questions about a lot of different topics. And so you get to hear Connor's opinions about everything ranging from aliens to economics, to memetics, to education, everything. It's a super interesting episode. I hope you enjoy it. Here is Connor Leahy. Great. And we're back. Connor, welcome back.
00:00:42
Speaker
Glad to be back. I'm going to do something. I'm going to give you an impossible task, which is to answer complex questions quickly.

Existence of Aliens

00:00:50
Speaker
Oh, shit. Yes. And I call this a semi-lightning round or a pseudo-lightning round, because I want you to actually explain your answer a bit. So I don't want you to give me a 30-second answer. But let's see if we can move a little faster here and get your takes on a lot of different topics.
00:01:13
Speaker
Yeah, let's go. OK, perfect. Are there aliens in the universe? Depends on what you mean by universe. Sorry, I'm terrible at this, damn it. In the observable universe, maybe, probably-ish. Yeah. Can you say anything about your thinking on this topic?
00:01:37
Speaker
Look up grabby aliens. It's something where I kind of defer my opinion to that one FHI result, the one giving something like a 20% chance the universe is empty, plus grabby-aliens-style anthropic models. I don't think about it too much. UFOs are definitely not aliens, though. So anyone who thinks those are aliens: obviously not. They're obviously our simulators messing with us.
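For listeners who want the flavor of the model being referenced: grabby-aliens-style reasoning treats the evolution of advanced life as a sequence of "hard steps," which makes the chance of success by a given time follow a steep power law. A toy sketch; the step count and time window below are illustrative assumptions, not values from the conversation or the literature:

```python
# Toy "hard steps" power law behind grabby-aliens-style models:
# if advanced life requires n sequential hard (low-probability) steps,
# the chance a planet has completed all of them by time t scales as (t/T)**n.
# n_steps and window_gyr are illustrative assumptions, not fitted values.

def p_advanced_life_by(t_gyr: float, n_steps: int = 6, window_gyr: float = 10.0) -> float:
    """Probability a planet has passed all hard steps by time t (toy model)."""
    return (t_gyr / window_gyr) ** n_steps

# Steep power laws concentrate success very late in the habitability window:
early, late = p_advanced_life_by(4.5), p_advanced_life_by(9.0)
print(f"by 4.5 Gyr: {early:.5f}, by 9.0 Gyr: {late:.5f}")
```

The steepness is the point: under a hard-steps model, early arrivals like us are surprising unless something (such as grabby civilizations cutting the window short) selects for them.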

AI Safety and Human Mind Uploading

00:01:57
Speaker
Will humans ever upload our own minds? I mean, if we survive the, you know, takeoff, yeah.
00:02:05
Speaker
The AI takeoff, you mean? Yep. Okay, so what motivates you to work on AI safety? Well, the world is nice. There are a lot of nice things, and these could be even nicer. There are a lot of people, and I'd like them to be happy. I don't like suffering; I'd like to prevent suffering. It's a good thing to do.
00:02:23
Speaker
Do you find that your motivation is affected by your pessimism about the difficulty of the problem? Are you more motivated the harder the problem is? Could the problem be so hard that you would simply give up or be not motivated?
00:02:39
Speaker
If the problem is impossible, it's not particularly motivating to work on. But as I said before, it's curious that we live in a timeline where we're not obviously lost. The odds are extremely against us, but we haven't lost yet. This is actually something people ask me sometimes: why do you have these terrible, pessimistic timelines, but you don't seem to be depressed? And I'm really not, at all. I'm super happy all the time, lately in particular. Ever since I really started working on this problem, I've been the happiest I've ever been in my life.
00:03:09
Speaker
To be a bit poetic about it: some people are built for the valley, some people are built for the mountains. And I feel like I'm built for the mountains. My whole life, I feel like I've just been waiting for something to do. And now I'm like: oh, the war is finally here. All right, let's go. I'm ready. Let's do this.

Ethics: Virtue vs. Consequentialism

00:03:29
Speaker
Do you think it has something to do with the alignment of your values with your actions? Is that fundamental to human happiness, in your case?
00:03:40
Speaker
I think it is, yes. This is something that's always been true: even though I've always tried to be a big-brain rationalist, thinking about consequentialism and whatever, at heart I've always been a virtue ethicist. And now I've come back around to virtue ethics as such being actually good, and consequentialism being bad in many ways. And
00:04:01
Speaker
I have a deep, deep aesthetic re-appreciation for heroism as a concept and honor and sacrifice, mercy. Things that in our modern world are almost trite, they feel cutesy.
00:04:20
Speaker
I have a felt sense in my mind of honor and heroism as fundamental ontological concepts that are valuable and feel very good. This is my felt sense; I'm not ascribing any kind of moral reality to any of this, to be clear. This is just a felt aesthetic sense of
00:04:44
Speaker
heroism as a fundamental concept of goodness, and sacrifice too. I know: very classic Western, Christian, heroic aesthetics. But I don't know, it's just always been how I've been. Even as a kid.
00:05:03
Speaker
I was quite young. I had sleeping issues when I was a kid. I had terrible insomnia. It took me like three hours to fall asleep every single night.
00:05:15
Speaker
It wasn't bad though. As a kid, I was like, great, it's thinking time.

Internet's Impact on Intelligence

00:05:19
Speaker
I'd just sit alone, no one would bother me. I could think of stories and think about philosophy and science. And I would lie there for hours and hours and hours, and I would just think about: what does it mean to be a hero? It's very funny, because for me it was like a fundamental concept, a mystery I had to solve. It was as real to me as physics.
00:05:43
Speaker
It was something where I thought: surely there's a true definition, I just haven't found it. Which is classic pre-postmodern philosophy: trying to find the true definition of goodness. Doing the Socrates thing of examining a concept from all angles until you find the true definition, which is in some sense a fool's errand, but perhaps you learn something in the process.
00:06:06
Speaker
Exactly. So, obviously these are just contingent concepts, and obviously it's just my own aesthetic sense. I don't ascribe reality to any of these concepts, or universality to any of these ideas. But I went through the phase, as a kid, of taking these things super seriously. Then as an older teen I became an edgy atheist, a moral relativist or whatever. But then I kind of came back around, where I'm like:
00:06:35
Speaker
Yeah, I'm not saying these are universally compelling or whatever, but they mean a lot to me. And so screw you, I can do whatever I want. It's easier to see how the virtues would apply to your daily life than it is to see how the moral theory of consequentialism would apply. It very easily gets incredibly complex and meta with consequentialism. Thinking in terms of virtue is something that you can
00:07:04
Speaker
apply more easily, I would think. Okay, on the topic of consequentialism, is infinity a problem there? So is infinity a problem for the moral theory of consequentialism?
00:07:21
Speaker
Yes, absolutely. Infinity is a problem for almost everything, actually. I have stupid, spicy opinions about infinity in philosophy, and about how people are very, very confused about what infinity is, especially because people don't understand that we are embedded creatures living in an embedded reality. And you can very easily derive paradoxes from infinities, especially higher-order infinities. I begrudgingly accept, you know, aleph-null infinity, but I'm,
00:07:51
Speaker
you know, jokingly, morally against higher-order infinities, because I think they cause all these weird paradoxes. Like, real numbers are obviously not real. Anyone who says real numbers are real: you're on my naughty list. I'm going to fight you. Do you think there are actual real-world implications of thinking about infinity? Does it change anything about what we should do?
00:08:18
Speaker
Yep, absolutely. I think people are very confused about many things. I, for example, expect that black holes are not infinitely dense, that this is just a confusion, just because we're confused about what infinity means and how time works and all of that. And if we had a different kind of theory, for example one based on intuitionistic logic or some other kind of computable logic, then we would find that black holes have very different properties. I expect the same to be the case for ethics and such.
00:08:47
Speaker
The classic example is infinite ethics, where it's like: well, if you believe in infinite multiverses or whatever, then you should assign zero value to this universe, because it's a zero-size slice of infinity. So therefore you should have no interest whatsoever in anything that happens in this reality, and you should only care about affecting hypercomputing multiverses or whatever. This is the kind of thing where, if you encounter this kind of thought, your immediate reaction should be:
00:09:18
Speaker
No, like just say no, just turn around, just go back home. Like obviously this is insane.

Infinity in Philosophy and Science

00:09:26
Speaker
The funny thing about alien gods from beyond reality, these hyper-intelligent, super-Turing machines from beyond reality that come in through infinite mathematics, is that they can all be defeated by simply saying no.
00:09:42
Speaker
You can defeat Roko's Basilisk by just saying no, and then it's gone. That's it. That's all you needed to do. So you're saying you can defeat all this very abstract philosophy and mathematics by simply trying to be heroic, or by insisting on acting virtuously?
00:10:00
Speaker
You don't even need that. Just say: no, that sounds stupid. Just go home. Someone gives you some galaxy-brained fucking Roko's Basilisk to argue about: oh, well, you see, now that you know about the Basilisk, it will harm you. I'm like: no, I don't believe in the Basilisk. And now the Basilisk can't torture you, because it has no causal influence over you, because you don't believe in it.
00:10:20
Speaker
So even by the Basilisk's own standard of what it is, it doesn't work. You literally just have to say, nah, that sounds like bullshit, and then you're free. And I think there's a lot of galaxy-braining that happens in infinite ethics in particular, where people say: assume we observed a hyper-Turing machine. I'm like: whoa, whoa, whoa. Three steps back.
00:10:44
Speaker
You cannot observe a hyper-Turing machine. It would take infinite steps to verify whether a system is a hyper-Turing machine or not. Not just arbitrarily many. Infinite.
00:10:56
Speaker
Humans can't do infinite steps. So any conclusion that follows in the rest of your 50,000-word essay, I can dismiss out of hand. I don't need to read it. It's already wrong. You already made a mistake. So you can reason about higher-order things like this if you want.
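The unverifiability point can be made concrete with a toy sketch (the details here are my own illustration, not from the conversation): any finite transcript of a claimed halting oracle can be reproduced by an ordinary computable lookup table, so no finite number of observations can ever certify hypercomputation.

```python
# Toy illustration: finitely many observations of a claimed "halting oracle"
# can always be matched by an ordinary computable program, so hypercomputation
# can never be certified by finite observation. The "oracle" below is a fake
# stand-in whose answers are arbitrary.

def claimed_oracle(program_src: str) -> bool:
    """Black box that claims to decide halting (stand-in implementation)."""
    return len(program_src) % 2 == 0

# Observe the box on finitely many queries...
queries = ["while True: pass", "print('hi')", "x = 1 + 1"]
transcript = {q: claimed_oracle(q) for q in queries}

# ...then build a plainly computable imposter from the transcript alone.
def imposter(program_src: str) -> bool:
    return transcript[program_src]  # finite lookup table: no oracle needed

# The imposter is observationally identical on everything we ever tested.
assert all(imposter(q) == claimed_oracle(q) for q in queries)
```

Whatever finite set of answers you collect, the lookup table matches it, which is the sense in which "we observed a hyper-Turing machine" can never be an empirical premise.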

Conversations with Historical Figures

00:11:14
Speaker
But people get confused about what they're actually reasoning about. They use symbols. So this is the classic humans
00:11:24
Speaker
confusing themselves with symbols. You have a symbol, and we'll call it "infinite steps" or whatever, right? And you can say things about the symbol; you can do all kinds of algebra on the symbol, and that's all cute. But it's not infinity. You can't interact with infinity. Infinity is infinitely big. There is no
00:11:49
Speaker
thing you can do with it; you can't touch it. Something that only affects you after infinite steps does not affect you, because you will never exist for infinite steps. This is actually my criticism, which I mentioned a while back, of AIXI and Solomonoff induction. To quickly explain the point: the problem with AIXI, which is provably the most intelligent agent you can define, is that it will always converge to the best solution possible,
00:12:18
Speaker
up to a constant. And this is actually a crazy result. When you first read it, you're like: what? No matter what scenario you put this thing in, it will always get to the correct solution as fast as possible, up to a constant. Wow, that's crazy. When I first read this result, I was like: wow, that's not what I expected. That's pretty impressive.
00:12:42
Speaker
But then you notice there's a fucking trick. Oh wait, I've been deceived. There is a trick here. That constant is arbitrarily large. It can be anything. That constant can be any number. It doesn't matter. It can be
00:12:59
Speaker
You know, Graham's number times the size of your universe. It doesn't matter. It can be anything. So it can't actually be implemented in our universe? Yeah. I mean, we can't even implement it anyway, because you also need halting oracles to implement it. But even if you could, it still wouldn't work. Well, the approximations would work.
00:13:20
Speaker
So basically, the cheesy way of saying it is: AIXI can be arbitrarily wrong about arbitrary facts for arbitrarily long. Yeah. Okay. If you could have a long conversation with anyone, who would you have this conversation with? Dead or alive? Dead or alive? That's a great question.
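For reference, the result being riffed on here is, roughly, the Solomonoff/Hutter prediction bound: the universal predictor's total expected error is finite, but the bound is a constant that grows with the Kolmogorov complexity of the true environment, and that constant can be astronomically large. One common form of the statement:

```latex
% Solomonoff's universal prior: weight each program p for a universal
% machine U by its length, so M(x) sums the weight of programs emitting x.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% Cumulative expected squared prediction error against the true computable
% environment \mu is bounded by a constant depending only on K(\mu):
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[\big(M(1 \mid x_{<t}) - \mu(1 \mid x_{<t})\big)^{2}\right]
\;\le\; \frac{\ln 2}{2}\, K(\mu)

% K(\mu) is "just a constant," but it can be arbitrarily large -- which is
% the precise sense in which the predictor can be arbitrarily wrong about
% arbitrary facts for arbitrarily long.
```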
00:13:45
Speaker
Can I change their influence or is this just gaining information? I can't give you the physics of time travel here. It's just extracting information.

Historical Drunkenness vs. Modern Stimulants

00:13:57
Speaker
Yeah, it's just extracting information. I mean,
00:14:02
Speaker
assuming no one from the future. I mean, I've talked to most of the really smart people alive today. So maybe von Neumann or Einstein, maybe someone like that. Well, actually, I mean,
00:14:16
Speaker
if I was munchkinning this, if I was min-maxing this, I would find someone who has a very relevant piece of historical information that I could use to exploit the market or something. But I would have to think more about that. I would probably be interested in talking to maybe von Neumann or someone like that.
00:14:36
Speaker
Or, if I wasn't trying to min-max and was just curious, I would probably greatly enjoy talking to Leibniz. I think Leibniz is a very interesting character. I consider Leibniz to be the first alignment researcher.
00:14:50
Speaker
There's a great set of book-length essays about the history of computability and logic; I forget by whom, unfortunately. And the first one is largely about Leibniz. It's very funny: when Leibniz applied for his doctorate,
00:15:13
Speaker
his application was that he would create the natural language of philosophy, by which the will of God and everything could be formalized, so that in the future, when two philosophers disagree, they need merely say: let us compute. He wanted to formalize all things in one universal language. And this is, like, the 1600s or 1700s or whatever.
00:15:37
Speaker
Based. Awesome. What a guy. This guy is inventing the idea of formal logic, and the idea of formalizing the will of God into formal logic so it can be implemented on Earth. I'm like: man, yeah, this guy's the first alignment researcher. I love it. Imagine being like this back then, and trying to differentiate between your ideas that are crazy and your ideas that are brilliant. It must be so difficult, because he also published some
00:16:06
Speaker
far-out views about ontology. Oh yeah, obviously. But it's the same thing with Newton: Newton considered his physics not to be his greatest work. He thought his alchemical and biblical-interpretation work was greater. This is another epistemology thing where people have very confused views about what it was like to be a pre-scientific person, and how different it actually was. We have movies where people go back in time and we see historical people,
00:16:32
Speaker
and they're kind of like us: they don't know certain facts, but they reason about things much like we do. People don't understand that people even a hundred years ago were fucking savages. Compared to today, they were savage. It was awful: the violence and the abuse and the drunkenness.
00:16:55
Speaker
This is completely irrelevant, but I have to do your listeners a favor here. If you want to look up something that will truly, truly awaken you to how weird history is, you have to look up the Leicester balloon riots. This is one of my favorite little anecdotes of history. I stumbled on it by complete accident. So, okay.
00:17:17
Speaker
Balloon riots. Well, that's kind of weird, right? So I want you to imagine in your head: what happened here? Okay, maybe there were riots for some reason that involved balloons somehow, or balloons became a symbol of something. Nope, wasn't that. So, okay, it's the mid-1800s or something, so hot air balloons; maybe someone was using them for some bad purpose? No, no, it's much better than that. So what happened, and this is true, look it up, was that
00:17:46
Speaker
some nobleman had built a hot air balloon, and he was going to go up in the balloon and just look at things. So a huge crowd of spectators came to watch. And what happened is that a riot broke out: people basically went insane, tore the balloon apart, and burnt it, and he had to run away. And so,
00:18:14
Speaker
Okay, so I want you to imagine in your mind what could have caused this. The reasoning was, and I quote, that they had heard it wasn't the biggest balloon.
00:18:25
Speaker
They were just like: what? Oh, this isn't the biggest one? Fuck this guy. Kill him. That's actually what happened. People were just so drunk and so fucked up that it was: get him, boys. Fuck this balloon. And apparently this was such a big problem that, at the time, it was a regular occurrence for people to try to hide when their balloon took off, to not attract too large a crowd, because shit like this would just keep happening. This was definitely not my guess. I would have guessed something like:
00:18:53
Speaker
Perhaps they saw the balloon as a god and were scared of it or something. Yeah. No, no, they were just drunk. Actually, I can imagine what happened. When I first heard this, I thought: this is absurd, how the fuck did that happen? But then I thought about it some more, and I remembered when me and the boys were young, like 17 or 18, and we would get really drunk. And I'm like: oh, I can see how this happens.
00:19:19
Speaker
Actually, there's an interesting semi-serious theory about drunkenness, which is that before 1700, there were a lot of pubs in London and the UK in general, and people were drunk. After work, they went out and they drank a lot, and then they fell asleep, and then they never really achieved anything intellectually. But with the introduction of coffee houses,
00:19:48
Speaker
people had a different stimulant effect as opposed to a kind of a downer effect from the alcohol and then they began intellectualizing and talking and developing theories and so on. I don't know how seriously to take this, but the point is just to say

Scientific Discoveries in Context

00:20:06
Speaker
drunkenness was a huge problem in the past. And I think you're right that we can't really understand how the world was back then. Yeah, it's crazy. Whenever you read the history book, you have to remember all of these people were just like piss drunk all the time. Everyone was drunk all the time.
00:20:23
Speaker
Because you couldn't drink water, right? Water could be poisonous, so you had to drink beer. It was very common for everyone to be drunk all the time. And illiterate, and violent. I remember reading, I think in The Better Angels of Our Nature, about how in the medieval world everyone carried a knife all the time, because you need knives for all kinds of things. So people just stabbed each other at dinner all the time. People just got murdered at dinner
00:20:53
Speaker
all the time, and this was just normal: oh, man, you know, farmer Bob again. Death would be much more common, and you would expect to encounter death from disease or from violence much more often than we do. We are absolutely shocked if we encounter death. But let's move on. What will be the most impressive thing that GPT-4 will be able to do?
00:21:22
Speaker
Oh no, now I'm using up all my trading alpha. I don't know what the most impressive thing will be, because I expect that once people have it in their hands, they're going to elicit more and more impressive things. I expect Riley Goodside is going to make it do some pretty amazing things pretty quickly.
00:21:39
Speaker
One of the things I know it can do: it's going to be able to write much more sophisticated software than the current systems can. ChatGPT is great for snippets of stuff, but it can't write whole programs. GPT-4 can write whole programs. That could change things substantially. Yep, it will. How do you think the internet has affected our ability to think? Do you think it's been negative, positive? In which ways has it been negative and positive?
00:22:09
Speaker
I think the mean is positive, the median is negative.
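The mean/median split being claimed here can be illustrated with made-up numbers: a few large positive outliers can pull the mean positive while the typical (median) value stays negative.

```python
# A distribution can have a positive mean but a negative median:
# most values are slightly negative, a few are hugely positive.
# The numbers are purely illustrative, not data from the episode.
from statistics import mean, median

effects = [-1, -1, -1, -1, -1, -1, -2, 10, 20, 30]  # hypothetical per-person "effect"
print(mean(effects), median(effects))  # mean is positive, median is negative
```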
00:22:15
Speaker
So, well, actually, I'm not even sure that's true. It might even be positive on the median. Again, people don't realize how savage the past was. There's always been the question: has the internet made us dumber, or has it just made our stupidity more accessible? I think it's actually mostly the second. On average, the internet has made people much smarter. It's made people more skeptical, more capable of thinking; we're exposed to different viewpoints and whatever. But what it has also done
00:22:45
Speaker
is expose us to super-memes: super-infectious, super-dangerous memes. I think of being exposed to the internet as being like a tropical disease researcher: you're probably going to have a really good immune system, but you're also exposed to the worst, most contagious, most dangerous things. So
00:23:08
Speaker
what I think this results in, on net, is that society stratifies more. We get fatter tails. We have more hyper-rational people; I think the most rational people to ever live are currently alive. Even very rational, very smart people like Einstein or von Neumann, if you actually read the biographies carefully, were a bit savage by modern-day standards. And not just the racism and stuff, but also
00:23:37
Speaker
just how much more sophisticated we are in many of these regards. The average person of today at least thinks that rationality should be important, or science should be important. Back in the day, that was not always the case. And we have more cultural memory of these important, complicated people, who were very smart, who did great things. But the average person gets caught.
00:24:05
Speaker
I think now the people who have truly been forged in the fires of the memetic internet are actually extremely competent in a lot of these things, some of them. I think it's a selection process. Most people get caught.
00:24:23
Speaker
The bottom half or bottom quarter gets caught in QAnon or whatever garbage. Then the next slice gets caught in Fox News versus CNN or whatever. The 75th percentile gets caught in sophisticated Twitter arguments or whatever. The 99th percentile gets caught in niche politics. And the top percentile actually is smart, actually is extremely sophisticated, and can actually
00:24:53
Speaker
be resistant to memes that would have killed a Victorian child. If you think of the differences between now and the past, one of the most extreme is the access to information.
00:25:05
Speaker
This is a point that listeners and everyone has heard a million times, but it's still important to consider just how much information you're able to get access to in any given day, how much information we've gone through, just in this podcast right here, how many concepts we've mentioned that you wouldn't hear about if you lived, say, 200 years ago.
00:25:28
Speaker
Yeah. Okay. What is the most impressive scientific discovery relative to its time, relative to when it was discovered?

Books, Blogs, and AI in Learning

00:25:38
Speaker
That's a nice question. I like that. Honestly, I might go with Aristotelian logic. And there are a lot of problems with Aristotelian logic and Socratic reasoning and all this kind of stuff. Like, there is
00:25:53
Speaker
so much argument from authority, and so much of it is kind of just bullshit ontology pulled out of nowhere, whatever. But given the time, damn. Once you're really aware of how savage the world was back then, and you read Socrates or Aristotle or
00:26:13
Speaker
other people at that time. I know it's such a meme to love the ancient Greeks, like, oh, of course. A lot of people fetishize ancient Greece and make it out to be much more than it actually was. But I think
00:26:30
Speaker
actually appreciating that inventing predicate logic, or first-order logic, kind of just out of nothing, at that time, is actually amazing. I think in other worlds, if Athens hadn't fallen and some other things had gone differently, the industrial revolution could have happened in Athens. They had steam engines: Hero of Alexandria built a steam engine.
00:26:57
Speaker
And look at the Antikythera mechanism and things like that: they were actually extremely advanced. That's a mechanical computer that was found in a shipwreck, which looks like something from the 17th century but is actually more than 2,000 years old. It's insane. This is another thing that modern people don't have a conception of: the idea of losing technology.
00:27:20
Speaker
But going backwards is not something that we're familiar with in our modern world. We're only familiar with progress. But in the ancient world, it was a very common idea that there were golden ages, that miracles were possible, and then those ages ended. And now miracles aren't possible anymore. This is a very common theme in history. And for example, after the fall of Athens, I think there was such an age. And also after the fall of Rome.
00:27:43
Speaker
There were ages where a lot of technology just got lost. The Romans could build things that later civilizations could not build. Imagine living in that society. Imagine being a medieval peasant and seeing a Roman aqueduct.
00:27:56
Speaker
It looks like a miracle, like something out of a fairy tale. No one can build stuff like this. So perhaps thinking about past golden eras was actually true, in a sense, back then. Yeah, I think so. I think we've inherited a lot of that, and it's no longer true: we are currently in the greatest golden age ever. But back in the day, I think it was very reasonable. If you were a post-Roman person and you thought of the Roman age as a golden age, yeah, I think that's actually quite reasonable. Same thing for the Athenians.
00:28:26
Speaker
So yeah, I would say probably Aristotelian logic. Or actually, I don't know if this counts, but I would say the general proto-epistemology that came out of Athens and Alexandria around that time. It was proto-scientific. It was so close, so close to getting where the Enlightenment got. I think
00:28:50
Speaker
if all this had continued, they could have discovered the scientific method. They were so close: a little bit more empiricism, a little bit more falsification, and they would have been there. Yeah, I think literally that, plus just some more economic development: more population growth and stuff like that, figuring out coal mining more efficiently, some more social development.
00:29:15
Speaker
It makes sense that Athens fell. Athens was also very small, actually: there were only 20,000 Athenians, plus slaves and so on. There actually weren't that many of them. Which, again, makes it shocking that so few people had so many of these critical insights, insights that were very advanced compared to
00:29:36
Speaker
almost anywhere else in the world. There were others; Alexandria and such were also very advanced. But the Germanic tribes, say, were still barbarians. So the innovation per capita in ancient Athens must be perhaps the highest ever.
00:29:54
Speaker
Yeah. And I think this is a great example of how the market is inefficient: most of computation, most of the stuff Turing figured out, or Leibniz or whoever, you can just figure out by thinking. There are some things where you need more, though. It's one of the reasons why maybe the
00:30:13
Speaker
the industrial revolution couldn't have happened in Athens but may have been possible in the Roman Empire: you need a certain amount of scale, a certain amount of industrial capacity, relatively good steelworking to build large boilers and such; you just need steel processing that's quite advanced,
00:30:29
Speaker
and metallurgy. This is the kind of stuff the Romans were quite good at. Roman concrete was an incredibly good building material, they had quite advanced metallurgy, and this kind of stuff was super important for actually getting to the Industrial Revolution. It was not just coming up with thermodynamics; a lot of it was just, you know,
00:30:46
Speaker
incremental improvement in metallurgy and smithing and the development of building materials and stuff like that. Which is something Athens would have struggled with: they didn't have very good steel, so they couldn't develop a lot of these things. Maybe they could have developed them later, but that would have required a lot more industry. So if I had to pick one other point where civilization could have gone industrial, it would probably have been the Roman Empire, not Athens. But yeah, it's
00:31:14
Speaker
It shows how inefficient the market is in that you can come up with most of modern mathematics with pen and paper. You don't actually need advanced steel or an advanced economy or whatever.
00:31:30
Speaker
The fact that Aristotle exists is kind of an existence proof. Or Euclid; Euclid's maybe an even better example. You read the first five books of Euclid's Elements, and you're like, holy shit, this is modern mathematics. It has axioms, it has postulates, and he derives things. Sure, it's not perfect, but holy shit, this was so far advanced for its time.
00:31:54
Speaker
And it remained cutting edge up until around 1700, which is an incredible achievement. Maybe the Elements is actually a better answer than Aristotelian logic, even. Maybe that would be the choice, actually. I like that choice. Okay. Are books outdated?

AI's Role in Education

00:32:11
Speaker
I mean,
00:32:14
Speaker
Yeah. At this point it's cultural and aesthetic, which is fine. I like books as artifacts; I think they're nice. I don't read books anymore, as I have back issues now, so it's sometimes hard to find a good position to read in compared to just sitting in a nice office chair in front of my screen. But the concept of writing is definitely not outdated. Publishing that writing in books is outdated.
00:32:42
Speaker
Yeah, what's the replacement? Because if you replace your book reading with only podcasts or videos online, then you lose some depth. But if you go all the way to only reading papers, then perhaps
00:32:57
Speaker
you will end up consuming less information. I sometimes joke that there's a hierarchy of quality in literature and nonfiction. The lowest of the low is whatever gets posted on Facebook. That's the bottom tier. A bit above that is mainstream kind of garbage: very low-quality stuff, lots of clickbait videos and whatnot.
00:33:28
Speaker
Then one step above that, you have pop-style books and magazines and stuff, which is actually not that bad. There's some good stuff in there, but mostly still garbage. Then you get to actual books written by people who know what they're talking about, which have real information in them and are usually accurate, but usually outdated, and usually still fluffy, partially for entertainment and so on.
00:33:55
Speaker
And then you get to higher-level papers. That's where actual scientific knowledge is really shared and where it's actually about the scientific knowledge. That's where you need to be when you want to really learn stuff. And then in the ascended tier above that is, you know, poorly formatted WordPress blogs by one obsessive autist with an anime profile picture. That is the truly ascended form of the most high-signal
00:34:26
Speaker
writing ever created by mankind. Gwern is the epitome of human nonfiction literature as far as I'm concerned. And there's a bunch of these niche Gwern-esque blogs, some for one topic, some for all kinds of different topics, and many of them are absurdly well researched. They're rare, and most blogs are terrible, but there's this
00:34:56
Speaker
extremely niche tail, and the tails of blogs are much longer than in any other medium.
00:35:04
Speaker
Yeah, I agree with that. I once considered only reading books and not reading anything online in order to focus more. But I just couldn't take the loss of these in-depth blog posts, where it's almost an insult to call it a blog. It's more like a research paper, but more self-aware and frequently updated. And yeah, Gwern is highly recommended, definitely. Absolutely.
00:35:33
Speaker
I don't really read books anymore, because I think the signal's too low. In most books, the signal, especially the signal per word, is actually quite low. I read tons of books when I was younger, but it was more to catch up on a common corpus, because the common corpus that a lot of people expect people to know is still in book form. Everyone should read The Selfish Gene and a few books about history,
00:36:03
Speaker
read the Sequences, whatever. There's a bunch of stuff like that that you should have read. But after that, the current place where the absolute galaxy-brained smartest cutting-edge stuff is posted is mostly blogs. Do you think that language models could replace books, perhaps? Having a conversation with an expert is one of the best ways to learn.
00:36:29
Speaker
Perhaps language models could become so good that they could teach you better than, say, textbooks. I mean, yeah, obviously. Given that I think language models will lead to AGI sooner or later, then yeah, obviously. Have you done any of this? How do you use these generative models in your daily life, if you do?
00:36:48
Speaker
So the funny thing is, I barely use them. There's this meme tweet where it's like, tech enthusiast: "Wow, my entire home is smart, I have everything connected to my Wi-Fi." Tech worker: "I have one printer at home, and I keep a loaded handgun next to it in case it makes a noise I don't recognize."
00:37:06
Speaker
I'm kind of in the second category. But is it actually a safety issue that that meme is about? It's not just a safety issue. I mean, I am paranoid about online safety to a large degree, but
00:37:21
Speaker
it's more that I don't trust technology in many ways; I know its limits and its weaknesses. I am very interested in language models from a research perspective. There are many interesting things to do with them, but for most of the work I need to do, they're not particularly helpful. I know what they can do, and I hit those limits super quickly. And there's also,
00:37:44
Speaker
this is also why I have a good model of what they can and can't do, and they can do a lot. I could probably optimize a lot if I put some effort into this. But another issue is actually safety: the things I would want them to do are not things I want OpenAI reading my logs on.
00:37:59
Speaker
I don't want to share personally identifiable information or my personal emails or whatever with OpenAI. I'm like, yikes. I don't want them to be able to train on my interactions with people. And if I had ways of improving their models or probing their limits, I wouldn't necessarily want them to know about that. So there are also limitations there.
00:38:27
Speaker
There are a lot of useful things I probably should just use more. There's a lot of useful transcription stuff that Conjecture has also been working on, which has been super, super useful; a really great tool, if anyone wants to check it out. I sometimes use AI image generation for my D&D stories. But I use surprisingly little AI in my day-to-day life.
00:38:54
Speaker
One limit that I've come across when I talk to language models is that you can't really go deep on a topic you know a lot about. But then, if I'm annoyed at ChatGPT's inability to talk about some philosophical issue that I'm interested in,
00:39:12
Speaker
I have to remind myself that it can actually converse pretty well with me on any given topic to a certain level of depth. It knows much more than I do about plumbing, theater, and so on. But I can see how, perhaps at this point, it's difficult to make progress in your own knowledge if you're already knowledgeable on a topic.
00:39:39
Speaker
Yeah, most of the knowledge I'm interested in is at the level of Gwern and such. If I were interested in something at a pop-sci level, then I'm sure ChatGPT would be useful. I just genuinely don't consume any pop sci; it's not useful to me, and it's also not very entertaining to me anymore.
00:39:56
Speaker
I also have the privilege of being surrounded by very, very many smart people that I can just talk to whenever I want. So I expect that if I were more isolated, it would be more valuable. But if I need to talk to an expert on programming language theory, I can just go get someone and talk to them. How close are we to having personalized AI tutors for children, for example, based on language models?

Critique of Traditional Schooling

00:40:21
Speaker
Are we already there?
00:40:23
Speaker
I think you could have that. I expect children to be very, very good at immediately getting it to say bad words; it's definitely what I would have done. I expect this not to happen, though. Why do I expect this not to happen? Regulation. I think school is bad, obviously so. It's child torture.
00:40:43
Speaker
I consider school to be a massive human rights violation, absurdly so. Our descendants will look back at how we do school nowadays and be like: you did what to your kids? You locked them in these rooms with all these other terrible kids and these midwit teachers who were often very abusive, and no one did anything about that? What? Excuse me?
00:41:06
Speaker
I expect this to be, not the greatest, but one of the most pervasive moral atrocities of our time: that we torture our kids and we're just fine with it. This is another great example of how humans don't really have morals. People are not inherently good or evil; they just go along with whatever society says.
00:41:26
Speaker
If school didn't exist and I introduced the concept to you, you would be horrified. You would never do this to your kids. You would be like: the teachers do what? And how often does sexual abuse happen? What? I'm not leaving them alone with these people.
00:41:43
Speaker
But because it's normal and everyone does it, you get the fucking police called on you if your kid doesn't go to the torture chamber with all the other kids to get tortured. This is interesting, since if you ask a lot of people, I think they would say that education is something everyone can agree is a good thing, like family or something like that. But perhaps we should differentiate between education and schooling, the school system as it looks now.
00:42:11
Speaker
Exactly. So this is the classic phenomenon from Robin Hanson's The Elephant in the Brain, another book everyone should read: something on the surface optimizes for X, everyone says it's for X, but in practice it's obviously optimizing for Y. So maybe there's a different purpose. And the actual purpose of school is that it is child prison. The parents need to go to work.
00:42:35
Speaker
They need to do other things. They can't take care of the child, or they don't want to spend that much time with the child, so the child has to be put away in the child prison. And also, you have to civilize people. Children are feral. Children are animals, obviously; cute, adorable, lovely animals. I love kids.
00:42:53
Speaker
They are animals, and if you want them to integrate into society, at some point there has to be a civilizing process. The way this should work is that you have loving parents paying close attention to them: a stern but loving father, a kind but loving mother, whatever. But at an industrial scale, you can't always guarantee that.
00:43:15
Speaker
Especially if you need people and workers to be at a certain educational level, you can't guarantee that parents will provide that. In the modern world, you need to learn to read. You need to learn to do simple mathematics and so on.
00:43:28
Speaker
And most people do not want to learn mathematics. They really, really, really don't. You have to force them. You have to literally use violence to teach people math. For some people, that's not the case. I think if you had left me as a kid unsupervised in a room, I would have just stumbled upon mathematics because I was bored and I thought it was fun.
00:43:47
Speaker
It's like a puzzle. If you had left me there with a textbook, I would probably just have done it because I was bored. But for most kids, that is not the case. And if you want kids to function in modern society, if you want them to live a fulfilling life and be able to handle themselves, you need to literally use violence,
00:44:05
Speaker
at scale, again, if you're a dumb society. If we were a smart society? Oh, do I have ways to solve this. I have lots of ways we could solve this, but it requires an actually smart society, which we do not have.
00:44:18
Speaker
Would it be possible to have a more humane schooling system? Yeah, there are so many ways we could make school a wonderful experience that helps people grow as people, become adults, with lots of fun experiences where you're challenged. Yeah, of course.
00:44:36
Speaker
Does this have to do with society simply becoming richer and thereby having the resources to become more humane, more personalized in our approach to education, not putting every young person in the same box, which was perhaps necessary in some sense when the modern school systems were set up 150, 200 years ago?
00:45:02
Speaker
Absolutely, a lot of that. School is much better now than it was before, obviously. As direct as I am being here, that school is torture, and I do consider it torture, it's so much better than it used to be. And a lot of people do have fond memories from school. I have some fond memories from school, even though I consider it mostly torture.
00:45:27
Speaker
Still, there are a lot of nice things. Some of my teachers were really nice and taught me some great things, and I'm happy that I got to spend time with them, and I made some good friends. But there are also a lot of really traumatic experiences, as every kid has during those times. And it's,
00:45:41
Speaker
it's cruel gaslighting. This is why I'm passionate about this. It's very cruel gaslighting that kids are put into these terrible situations where they're being bullied, or the teachers are cruel or abusive, or maybe they have ADHD and just can't pay attention and are forced to sit still, and it's genuinely painful. As a kid with ADHD, this was genuinely very, very terrible, and I got lots of shit for that.
00:46:08
Speaker
I only got away with it because I was pretty smart. If I had not been smart and had failed in school, this would have been absolutely traumatic, horrible. I did get into trouble a lot because I wasn't paying attention, wiggling, talking and such, but I got lucky.
00:46:27
Speaker
But it's still bad, right? No one hit me in school. I was never abused, I was never hurt. I was yelled at sometimes, but no one ever hurt me.
00:46:38
Speaker
That was not the case a while ago. If I had been born 30, 50 years ago, teachers would have beaten the shit out of me, obviously so. Maybe rightfully... no, I'm joking, obviously. But the problem, I think, is not just money. Money is part of it, but it's also cynicism. Man, I'm so cynical today. It's just so sad.
00:47:05
Speaker
The core problem is much worse than that. One of the core problems of civilization, of humanity, that comes up again and again in different guises, is this problem:

Memetics as a Social Science

00:47:18
Speaker
Most people just aren't that great. Most adults have their own shit going on. They're also kind of immature, they don't give good advice, they can get angry, whatever. And as a kid, you're in a particularly vulnerable state. It's basically the same type of relationship that exists between a patient and a therapist.
00:47:45
Speaker
There's this current movement going on where everyone has to go to therapy, everyone has to have a therapist. And I'm like, whoa, slow the fuck down there. I'm not sure that's a good idea.
00:47:55
Speaker
I think the idea of therapy is great. The idea of having someone guide you through emotional issues, help you grow as a person, someone to talk to: oh, wonderful, fantastic. Now let's look under the rug and see how this actually works. Oh, no. And school is the same kind of issue. Education, teaching kids how to be adults and how to do useful things: wow, this is so wonderful. Let's look at how it works. Oh, no.
00:48:19
Speaker
So I consider these the same class of problem. If you have a great therapist who's intelligent and mature and who cares, wow, that's such a high-value experience. Same thing with a teacher: a great teacher, smart, kind, patient, who really believes in you and is helping you, that's a wonderful experience. As a kid, having a mentor, having some adults who are really looking out for you and rooting for you, is so important.
00:48:49
Speaker
Your teachers and your parents, knowing they're behind you and rooting for you: that's so important to a kid, so valuable. But you just can't guarantee that at scale, man. Most teachers are just dudes, just some kind of people. They don't want to be there either; some don't even like kids. What is a belief or an opinion you have that is very different from those of your in-group?
00:49:14
Speaker
Hmm, well, it depends on how you define the in-group. I have various spicy opinions about coordination and politics, and we already mentioned some of those. I guess if you define my
00:49:34
Speaker
in-group not as literally my friends, but as the wider cultural context, probably my belief in virtue and these kinds of concepts. Or, well, here's a fun answer. Instead of giving you a serious answer, how about I give you a fun answer instead? Okay. The fun answer is my
00:49:54
Speaker
whole schizo theory about how all of religion, mythology, spirituality, and so on are actually really important and completely mechanistically understandable. I think that if you replace the word spiritual with virtual, and if you replace magic or spells with memes,
00:50:21
Speaker
it just works. Congrats, you have a mechanistic understanding of magic. So I think the concept of magic, magick with a k, is very real and actually a very meaningful concept, but it's very confused; people are confused about it. It's not a physical concept. Magic is to reality as computation is to CPUs,
00:50:46
Speaker
or as computer programs are to CPUs. One of my favorite examples of this: there are many groups, especially in Southeast Asia, which have a myth about the evil eye. The way this works is that the shaman, or the sorcerer or whatever, can put a certain type of curse called the evil eye on a person, and then they will wither and die if they get the evil eye. And this is true. This is observable: people who
00:51:12
Speaker
get the eye grow sick, and they often die. This is a real, empirically observable phenomenon. Now, how do you explain this? Well, if you try to explain it with a purely physicalist view, you're like: that doesn't make any sense, this can't be possible, surely there's some mistake here. Well, no. If you look at it memetically, there's a very simple explanation.
00:51:40
Speaker
When the shaman casts the evil eye, they make it common knowledge that the evil eye is on this person. The tribe now all knows: oh shit, this guy's got the evil eye on him. Fuck, I don't want to talk to him. So he just gets neglected. That's it. It's a software attack.
00:51:59
Speaker
The shaman performed a software attack on the operating system running in the shared memetic environment. They introduced a toxic meme into the environment, directed against a specific person, which predictably changed the minds of the other people so that they start neglecting this person. And through neglect, people get sick and people die. But this reinterpretation of magic in terms of memes and signaling and social psychology
00:52:25
Speaker
is still perfectly compatible with a physicalist world. It's just a slightly weird way of thinking about it. Yes, exactly. Non-naive physicalism would be the correct way to say that. I think magic is as real as a Python program on your computer. Does the program exist? Debatable. You can say it's encoded in physical states, of course, and it has physical
00:52:52
Speaker
consequences on reality. And none of this is unique to me; these are things other people have noticed as well. Even all the way back, The Structure and Interpretation of Computer Programs opens with this nice little intro: we have sorcerers encoding arcane glyphs onto magic stones that summon immaterial spirits that can have effects on reality, and that's what we call a computer program running on a CPU.
00:53:18
Speaker
I've mentioned this before, the idea of the missing science of memetics. I think the missing science of memetics is also the missing science of magic. When I say memetics, I mean a science that can predict how memes will be effective in a given environment. We should have an ontology, a way of thinking about how we cast spells. I have a really good example for this, but it's a bit of a dark one.
00:53:43
Speaker
There is a truly terrible thing that has happened a lot over the last 10, 20 years, which is school shootings. This is a truly, truly tragic thing. No one wants this to happen. Everyone wants to prevent this. But somehow they keep happening,
00:53:57
Speaker
especially in the US. What causes this? One obvious explanation is guns; that's a pretty reasonable theory. But then we look at Canada or Switzerland or wherever, and they have as many guns as Americans, sometimes more, and they don't have this problem, or it's much, much rarer, like orders of magnitude. So maybe guns are contributing to the problem, but are they the problem? Not really. And having more guns obviously also doesn't solve it; I don't think I need to prove that.
00:54:27
Speaker
So it's not that. What is it? Why does it happen in America but not in Switzerland? That's kind of weird. There are a lot of explanations you can reach for: economic disparity, mental health treatment, whatever. But I think there's a
00:54:46
Speaker
much more sobering reality: it's memes. It's cool to do. Because of the way the media treats school shooters in the US, it has become the default script, encoded in our cultural mythology: the school shooter as this evil boogeyman. This is the thing that evil people do. If you stray off the path of light onto the path of pure darkness, this is what happens.
00:55:11
Speaker
So what has happened is that a deviously toxic meme has embedded itself into the cultural reality of the United States especially, and the English-speaking world in general. These things don't really happen in Asia; it's much, much better there because it's a different cultural context, a different spiritual realm, so to speak. So what do you do? What is my solution? How do we stop school shootings? The truth is, we have to make it uncool.
00:55:38
Speaker
What actually has to be done is you need an anti-meme. You have to release memes, change the spiritual story around this, about evil and good and so on, to make it not the thing that evil people want to do. You have to give them something else to do. There will always be evil people, people who feel betrayed, who want to take vengeance upon society or whatever. Basically, you have to cast the spell. You have to manipulate them into thinking they should do something different.
00:56:06
Speaker
And I have no idea how the hell to do this. Holy shit. If I were an advanced civilization with an advanced science of memetics, I should be able to solve this. An alien comes from space, looks at us, and goes: ah yes, this is a class 2AB memetic hazard, I know what to do, let's activate protocol D. And then they'd do something and say some words, and things would be solved.
00:56:28
Speaker
Do you think that we could actually get a science of memetics going? Because the sciences that have been very successful are sciences that study something where humans do not interact with the object: physics, chemistry, biology to some extent. But perhaps one reason why sociology and economics aren't as successful
00:56:54
Speaker
is because you're studying something where humans are interacting with the system. And with memetics, what you're studying is purely humans interacting with each other. So wouldn't it be too fast-moving, fragile, and complex for us to pin down, say, Newtonian laws of memetics?
00:57:19
Speaker
Oh yeah, absolutely. There would be no Newtonian laws for this. This is a level-two science. I think humans have mastered level-one science pretty well. Look up how an EUV lithography machine works. What the fuck? How the hell do humans build that? The fact that we can build that, I'm like, okay, yeah, we understand level-one physical science.
00:57:39
Speaker
If you look at the things that are actually hard for modern science, the things we just cannot do, you always get back to complexity and interactivity. You get back to hard algorithmic problems: biology, sociology, economics, all these kinds of things. What do these things have in common? Multiple things. One is reactivity and chaos: level-two chaos, in the sense that you interacting with the system makes it chaotic.
00:58:06
Speaker
This is the interesting version of the efficient market hypothesis: there are systems which are chaotic because you interact with them. Even if a system is not chaotic, you interacting with it will make it chaotic from your perspective. This is a very interesting property to have. It's an adversarial property. The market is adversarial.
00:58:24
Speaker
This is also interesting from the perspective of AI. This is why AI and AGI risk is fundamentally different from other kinds of disaster or risk management: you're dealing with an adversarial system. When you're dealing with volcanoes or viruses or whatever (viruses are kind of an intermediate case, so let's say volcanoes),
00:58:43
Speaker
they're not malicious. Volcanoes aren't maximizing the damage they cause to humans. It's not like once you learn where the volcano is, the volcano goes somewhere else to hide from you. It's just a level-one problem: there's a physical phenomenon out there in reality. It might be complicated; we might need to develop some new theory or some new measuring devices. But humans are actually
00:59:07
Speaker
pretty decent at that. Not great, but pretty decent. Level-two problems are different. Level-two problems are things that evolve, that react, that fight back. And this is, to a large degree, exactly why these sciences are so hard. Sociology is harder than physics, obviously. Physics is the science of the first-order Taylor expansion, right? Sociology is much, much more complicated. This is why sociology is terrible. If you read an average sociology paper, it's
00:59:38
Speaker
terrible, terrible, terrible: no signal. And I think the reason is not necessarily lack of trying; it's just that this is really, really hard, and it requires the development of a scientific method 2.0 to deal with these kinds of scenarios. I think we've seen some early developments towards a scientific method 2.0, and we have some methodologies that are much more advanced than anything Popper would think of. Popper is classic science-1.0
01:00:07
Speaker
kind of stuff, while modern conceptions of science are much more complicated and involve other ways of gaining information about reality. But as an existence proof that I think this is hard, not impossible: consider that the shaman can cast the evil eye. How did he do that? Well,
01:00:26
Speaker
He understood, he had a model, a causal model of how things around him work, and how if he says certain spells and he frantically waves his hands or whatever, he can then accomplish an intervention in reality. There's an intervention that maybe is passed down from shaman to shaman, but someone figured out that they could do this and that it worked, and then they kept it around.
01:00:53
Speaker
If that proto-scientific view can already understand quite a lot of things... Like, humans actually very naturally have a lot of sense for these kinds of things, like how to be convincing, you know, an in-tune sense for social reality. And if anything, for most people, social reality is more natural than actual reality. Like, actual reality is actually quite unnatural for most people.
01:01:19
Speaker
Most people don't really interact with physical reality. They interact through social reality. The fundamental ontology that most people think in is not atoms and bits and whatever, it's people and alliances and friends and family and stuff like that, which are fundamentally high-level concepts. So I think we're at the stage where there's lots of low-hanging fruit. It's very hard.
01:01:48
Speaker
If I had tons of time, you know, and AGI wasn't around the corner, maybe I would work on the science of memetics. My brain takes a liking to these kinds of thoughts, so maybe I'd be working on that. A formal theory of magic.
01:02:02
Speaker
We have a better intuitive grasp of social dynamics than we do of physics, for example. But why are our scientific theories of physics more advanced than our scientific theories of memetics? If we start from a better starting point in the social realm, why can't we make faster progress there?
01:02:28
Speaker
Because physics is just easier, like much easier, and it's not adversarial. So this is the memetic evolution thing. If I was sent back 2,000 years or whatever, and I spoke the language, I could memetically destroy these people. Like, I could make them laugh. I could make them, you know, believe anything I say. I could out-argue Socrates easily. If Socrates came up to me, I would destroy him.
01:02:52
Speaker
Not because I'm smarter than

Ethical Impact of Individuals

01:02:54
Speaker
him. Socrates was probably much smarter than me, but because I've heard all his arguments. I am so memetically... you know, I grew up on, you know, fucking the internet. Like, I have debated and argued. Man, I would destroy Socrates. He would just rage quit if he had to debate me. And not just me, to be clear. There's lots of people in modern reality, like most high school debaters, I think, could destroy Socrates.
01:03:21
Speaker
So it's an evolution. The problem gets harder as you interact with it. If you were to choose one person or group in history that has helped humanity the most by your values, what person or group would you choose? Good question. I'm not sure I can pick out
01:03:51
Speaker
anyone in particular. Like, think about counterfactual impact.
01:03:55
Speaker
What would be the highest counterfactual impact? There's lots of people who did great things, like creating vaccines or developing various branches of science. But a lot of them, I think, are not counterfactually that important. I think a lot of these sciences would have been developed anyways. If Turing hadn't existed, I think Konrad Zuse would have invented things. If Einstein hadn't existed, then Lorentz, I think it was, would have invented special relativity.
01:04:23
Speaker
So what we're thinking about is perhaps a bit like the question about Aristotle or Euclid, where we think about who made the earliest progress on a difficult problem, but in the ethical realm. Yeah, yeah. So I mean, like someone who was ahead of his time was Jeremy Bentham.
01:04:42
Speaker
But I'm not sure how big his influence was. I don't know if he would be the person who's made the most counterfactual impact. I mean, you could also give the memey answers of Stanislav Petrov and such, the people who averted nuclear war. I expect whoever actually did the most good is someone that history has forgotten. Just some person in some scenario who made a choice that really made the world a better place and was never celebrated for it. But I don't have a clever answer for this one.
01:05:12
Speaker
Perfect. Is AI-generated content the future of media? Obviously.

AI in Hollywood's Future

01:05:19
Speaker
I've gone on the record to say in the past that I expect you can generate full Hollywood movies in the next couple of years. I expect that's possible. Full Hollywood movies in the next couple of years? Yeah. That would surprise me, but yeah, I've been surprised many times. So perhaps I haven't updated all the way, as you mentioned before. Yeah, just update all the way, bro. All right.
01:05:42
Speaker
Perfect. Let's end it there then. This has been fantastically interesting to me, Connor. I hope it's been interesting to you too. Yeah, had a great time. Thanks so much for having me. Perfect.