Liron Shapira on Specificity and AI Doom (Episode 19)

Stoa Conversations: Stoicism Applied

Want to become more Stoic? Join us and other Stoics this October: Stoicism Applied by Caleb Ontiveros and Michael Tremblay on Maven

What's the best test of a business idea? Are we doomed by AI?

Caleb Ontiveros talks with Liron Shapira about specificity, startup ideas, crypto skepticism, and the case for AI risk.

Bloated MVP

Specificity Sequence

(02:19) Criticizing Crypto

(04:31) Specificity

(10:11) Intellectual Role Models

(15:06) Insight from the Rationality Community

(19:22) AI Risk

(31:07) Back to Business

(39:46) Making Sense of Our Lives When the AI Apocalypse Is Coming

***

Stoa Conversations is Caleb Ontiveros and Michael Tremblay’s podcast on Stoic theory and practice.

Caleb and Michael work together on the Stoa app. Stoa is designed to help you build resilience and focus on what matters. It combines the practical philosophy of Stoicism with modern techniques and meditation.

Download the Stoa app (it’s a free download): stoameditation.com/pod

Listen to more episodes and learn more here: https://stoameditation.com/blog/stoa-conversations/

Subscribe to The Stoa Letter for weekly meditations, actions, and links to the best Stoic resources: www.stoaletter.com/subscribe

Caleb Ontiveros has a background in academic philosophy (MA) and startups. His favorite Stoic is Marcus Aurelius. Follow him here: https://twitter.com/calebmontiveros

Michael Tremblay also has a background in academic philosophy (PhD) where he focused on Epictetus. He is also a black belt in Brazilian Jiu-Jitsu. His favorite Stoic is Epictetus. Follow him here: https://twitter.com/_MikeTremblay

Thank you to Michael Levy for graciously letting us use his music in the conversations: https://ancientlyre.com/

Transcript

Rethinking MVP: First User Value

00:00:00
Speaker
So the term MVP traditionally stands for minimum viable product. But I take issue with the word product, as you say, because I think it distorts people's thinking. A lot of people, in their head, they're like, I am working on a minimum viable product. And that's why I'm spending six months engineering this app. Like you say, it would be much better if they step back and they think of it this way: I am working on creating value for my first user. Okay. What does your first user want to do? Book a ski lodge. Okay. Do you really need to develop your app to help one person book a ski lodge? Why don't you just call them up and book them a ski lodge? They're like,

Stoa Conversations Intro

00:00:29
Speaker
what? That's crazy.
00:00:29
Speaker
Welcome to Stoa Conversations. Each week we'll have at least two conversations. This week has three. Today we are releasing a bonus conversation with Liron Shapira. Liron is the founder of Relationship Hero.
00:00:45
Speaker
He wrote an awesome sequence called Specificity: Your Brain's Superpower. We talk about that post, the one trick Liron has for evaluating business ideas, and then discuss the case for AI risk. This episode should be of interest to people who are thinking about startups and rationality, and who want to zoom out and think about the big questions of technology. Here is Liron Shapira.
00:01:13
Speaker
Welcome to Stoa. My

Liron Shapira's Background

00:01:15
Speaker
name is Caleb Ontiveros, and today I am speaking with the entrepreneur, rationalist, and investor, Liron Shapira. Thanks for joining. Hey, Caleb. Thanks for the invite. Great to be here. So what's your story?
00:01:32
Speaker
My story is I was into computers from a young age. I learned computer programming when I was about nine years old from a library book. I was like, oh, holy crap, you can type stuff into computers and they'll do it. Hell yeah. I was very excited about that. I've always had like a very logical mind, studying computer science in college. LessWrong has been very influential to me. I found it as a college student in 2007. I was like, whoa, what is going on here? You can be this rational. So these are like very influential moments in my life, learning
00:01:59
Speaker
computers, learning rationality. And then I became an entrepreneur. I started a company called Quixey when I was 21. Quixey was notable for just raising a ton of money, $170 million. Alibaba was the biggest strategic investor; we partnered with them. It ultimately crashed as a failure, a huge failure. Didn't have anything to show for it. I started another company called Relationship Hero five years ago; that's going pretty well. We're at high single digit millions in revenue. So it's getting started. It's no unicorn, but we're on kind of that entrepreneurial path. We're profitable.
00:02:26
Speaker
So I'll at least apply some of my lessons from my last company. I'll try to at least do better. And before doing your own company with Quixey, the first one, you were at Slide, right, with PayPal? That's right. I spent a year and a half at Slide and it was actually really cool. You know, I didn't realize it was like a special time, a special place. It was like 2008, 2009 in San Francisco. I started as an intern right out of college and I became a full-time employee. I left to start Quixey.
00:02:54
Speaker
In retrospect, you know, I remember I was working with Adora, who's a Y Combinator partner, and Suhail from Mixpanel, and Max Levchin, obviously, from the PayPal Mafia, Keith Rabois. And when I think about Rishi from Future Fin and a bunch of people I'm forgetting, I'm like, wow, what a cool place. I mean, I didn't realize this was the next PayPal Mafia, maybe not quite as big, but like this is a great place to be. And I guess I didn't fully appreciate it.
00:03:15
Speaker
That's awesome. And most

Critique of Web3 and Crypto

00:03:16
Speaker
recently you've also made a name for yourself on Twitter by questioning some of the different crypto schemes or projects that people have been pushing forward over the past few years. Yeah, I think it's worth mentioning. It was a target that I couldn't resist because it was almost like a mispriced asset, which is like
00:03:34
Speaker
Web3. If you remember 2021 or even early 2022, the Super Bowl ads, a16z was dignifying this concept of Web3, right? They were putting literally $7 billion into this Web3 thing. And you know, smart people, everybody, it was very emperor-has-no-clothes, right? Everybody was lining up being like, okay, you've got AI, you've got Web3. Everybody was pretending like Web3 was a thing. And
00:03:55
Speaker
I could see that it wasn't. The logic of Web3 just made no sense. It was very gaslighting. If you looked at the PowerPoint presentations, it's like, what the hell? There's that meme I posted of those people sitting around a conference room and at the head of the table, just Big Bird. It's like, that's Web3 to me. It's like, what is going on? Why is this getting this respect in the tech industry? So I got a lot of mileage on Twitter out of just, I guess, being the kid shouting that the emperor has no clothes. Because with Web3, what would happen is that
00:04:22
Speaker
everybody would try to describe it. Each individual person would be like, Web3 is kind of like this. It's kind of like mortgages on the blockchain. And they'd be like, at the end of the day, I'm not a genius, okay? There are smarter people than me in Web3. So they'd point fingers and be like, okay, you've got to look at somebody smarter. You've got to look at Balaji. You've got to look at Marc Andreessen. So I started being like, what if I could just show that the emperor has no clothes with the industry leaders? What if I could just show that Marc Andreessen has no idea what he's talking about? Chris Dixon has no idea what he's talking about. Balaji has no idea what he's talking about. One by one, I'm like, look, in their own words, these people have no idea what they're talking about.
00:04:53
Speaker
Yeah, for those who are less familiar with the tech scene, these are people who are exceptionally successful, very smart, and there's always that initial inclination that one is mistaken when you're disagreeing with someone like that, someone so successful and thought well of by so many other people you admire.
00:05:11
Speaker
That's right. And these are people that I admired before the whole Web3 thing. I had a favorable impression of Marc Andreessen, of Chris Dixon, and even of Balaji. These are smart people. And, you know, even now I can still see their good side, right? I'm not blinded by rage or anything. I still think Chris Dixon has some good posts. You know, I still think

Specificity in Business

00:05:29
Speaker
Marc Andreessen is a figure to be admired in many ways. I just also think there's this dichotomy where they're just absolute clowns and they should be ashamed of themselves when they do things like propping up pyramid schemes.
00:05:40
Speaker
My favorite piece of yours, which I found a couple of years ago on LessWrong, is entitled Why Is Specificity a Superpower? And I think it's connected to this. So maybe we could step all the way back and ask that broad question. It's funny because crypto came along like a year or two after I wrote my specificity sequence that you're referring to on LessWrong, which I do recommend.
00:06:01
Speaker
You know, your listeners can Google "LessWrong specificity" and check out my posts, because they really do underlie a lot of the other stuff I've thought about and written. The whole concept of being specific, I learned it from LessWrong, and I felt like it had never been given a proper writeup. That's why I did it in 2019. I'm like, somebody has to write up how great specificity is, because
00:06:21
Speaker
it dovetails into what I saw in crypto, where I coined the term hollow abstraction. It's this landmine in thinking, and people are just not aware of this landmine, which is the idea that you can make a pitch and it makes sense on an abstract level, and everybody shakes hands like, this pitch makes sense. Like Web3.
00:06:36
Speaker
And the pitch turns out not to make sense because it doesn't map to anything specific. So it's a hollow abstraction. It kind of makes sense when you pitch it abstractly. You can't poke a hole in the abstraction, but when you try to substantiate it with a specific example, you can't. So it turns out to map to an empty set of specific examples, right? You describe a set, but it's the empty set. And that's why I call it a hollow abstraction. And so my specificity sequence and the crypto stuff that I was doing, it was kind of a way to teach the world what a hollow abstraction is and how to poke
00:07:05
Speaker
hollow abstractions and it's a skill that's sorely needed in the tech industry and in a million different applications, cross-domain applications, and crypto is only one example out of many that I hope helps popularize, you know, helping people go down the ladder of abstraction and poke hollow abstractions. How can specificity help one be emotionally mature? Yeah, that was one of the posts I did.
00:07:28
Speaker
I actually do think that specificity helps you be emotionally mature. And for context, I wrote like a dozen posts where I would just take a random domain and say, look, in this domain, you can apply specificity, and it's just crazy how many domains it applies to. So in the domain of emotional maturity, there's this technique of, when you have a feeling, instead of just embracing your feeling, you try to unpack why you feel a certain way. A lot of times it has kind of an abstract description. So you might be like, this person was mean to me.
00:07:55
Speaker
Even that claim that somebody's mean to you, that's already an abstraction that's not exactly the ground level truth, right? So meanness, the laws of physics don't really have a concept for meanness, right? So at some point you've abstracted away what's happening in the world and you've labeled it, but you can go down one level of specificity. What actions did the person take? And which part of your interpretation has caused you to label it mean? And that's like a new style of communication where instead of saying like, you're being mean to me,
00:08:20
Speaker
You're like, well, when you brushed against me hard and you didn't say sorry, that gave me an impression that you're intentionally trying to create some friction between us or, you know, that you don't care about my feelings. So I drew that conclusion for that reason because of that action, right? So now you're already thinking one level down and that's kind of a known communication technique. And I'm just pointing out that it's an example of unpacking an abstraction.
00:08:44
Speaker
Yeah, in Stoic psychology you have some initial sensation and then there's the interpretation that often involves things like value judgments.

Stoicism and Emotional Maturity

00:08:53
Speaker
So there are lines in the Meditations of Marcus Aurelius that say something to the effect of
00:08:59
Speaker
Don't say that someone has insulted you. All you need to say is that so-and-so has been talking about you behind your back. And this extra idea of an insult is something that we add to the more specific, concrete situation: so-and-so said something to some other person. And you know, you should always have this question mark behind it: was that in fact insulting? Does that insult harm me? And these other related ideas that I think vague ideas like insults, or perhaps other emotional terms, can cover up.
00:09:27
Speaker
Right. Now, you know, it's sometimes okay to be like, okay, these actions happened and therefore I do consider that an insult, right? But it's just important to make sure that your abstraction of an insult, that you're able to connect it to what it maps to, right? And double check: is the substantiation adequate? Does it really deserve to be called an insult? But I'm okay with the answer being yes. Yeah, I have this example
00:09:50
Speaker
in the post. It's from Stack Overflow, which is basically a community site where people can ask questions and answer them and so on. Someone who works for that community feels, I think it's after some change, that the community is exceptionally mean to them or something like this, and that they've been insulted. And then
00:10:13
Speaker
she looks at each individual comment and determines, oh, these are all reasonable comments on their own. It's the fact that there are so many of them that created this additional impression.
00:10:24
Speaker
Right, exactly. It's the impression of being mobbed, right? In somebody's brain it's natural, because in the ancestral environment, if you're facing such a big mob and there are not that many compliments coming in, this is a crisis situation. Whereas on the internet, there might just be a ton of people, an unfamiliarly large number of people. And so you need to kind of double check your intuition about what's mean.
00:10:48
Speaker
Yeah. What are your thoughts on Stoicism generally? I haven't

Influences from LessWrong

00:10:50
Speaker
dug into it much, right? But it's definitely, I hear smart people on podcasts all the time name-checking Stoicism, and I think I get the general idea. I think personally I could stand to be a little more Stoic, and meditation is something that I feel might add value to me, right? I haven't really invested in it properly. It's like,
00:11:07
Speaker
it's on my bucket list to give it a try, especially because the way Sam Harris talks about it is like, look, it's kind of like looking through a telescope to find the edges of possible life experience, in a sense. And I'm like, okay, that sounds good. What are several thinkers who model specificity or rationality in their thinking that you admire?
00:11:24
Speaker
I think some of my biggest influences came from the LessWrong community. So Eliezer Yudkowsky, I would pinpoint him as the single most influential, just because he really wrote the kernel of the operating system, almost. I mean, I was very into rationality even before I knew that there was a whole space to it, a whole study to it. I was just like, yeah, rationality is just being smart or just following logic. But there's a lot of subtlety to it.
00:11:49
Speaker
There's an art to it. And yeah, I mean, I see him as foundational, right? He's almost like a Charles Darwin to the field of evolution type of figure to me. Yeah. So he's highly influential, and he just has so many different ideas or mental models that come up in my life. I'm like, wow. Crazy. Sometimes I think about the counterfactual of,
00:12:06
Speaker
what would my thought processes be like if I didn't have all these tips and tricks from his book, Rationality: From AI to Zombies, or from LessWrong? It's just been so influential. Robin Hanson is also a major figure. I mean, just watching him process stuff, and he's had so many amazing ideas. You know, the Great Filter is Robin Hanson; grabby aliens is his newest one, which I think is so underrated. He basically just explained the cosmos, explained the Fermi paradox, and 99% of people have no idea that's happened.
00:12:32
Speaker
Like, are you kidding me? And then he's had a couple of other hits. I'm trying to remember what else. Oh yeah, of course, prediction markets. Futarchy is a big Robin Hanson thing. One of his lesser-known indie hits is quantum mangled worlds. Of course he had the book The Elephant in the Brain. I mean, these are all really good ideas. These are all ideas that you could make a career off of, just nailing that one idea. And he just keeps giving. During the pandemic, he pointed out that variolation made a lot of sense on the expectation that the vaccine was going to take years to come.
00:13:00
Speaker
And nobody was talking about it. It's like, guys, he's right. People don't understand; this guy writes a lot about topics that don't even enter most people's purview.
00:13:10
Speaker
Yeah, could you explain what that is for people who don't know? Variolation. Yeah, variolation. So variolation is actually kind of like the original implementation before we had vaccines. It comes from varioles, which I think are the scabs you get with smallpox. And so variolation started where people figured out that it made sense to take some of the pus from the scab of somebody who had smallpox and administer a dose of that to somebody who doesn't have smallpox yet. And if you administer the perfect small dose, they'll get a very mild version of smallpox and then they'll get immunity from smallpox.
00:13:38
Speaker
So it's kind of like a vaccine. The difference is that with a vaccine, you don't even need to use the disease itself. You can use a modified weak version or some sort of small stimulation of it, where maybe it's just a protein or something. But with variolation, you take the real deal from the varioles and you do it, and it worked. I mean, it saved people's lives. And I heard that the accidental death rate, where you give somebody too much, was something like a 1% risk.
00:13:58
Speaker
But at that point, I think there was more than a 10% risk that you'd have a serious case of smallpox. So the odds made a lot of sense. So it became people's profession to do the variolation. There's a story that George Washington actually had a big advantage because he somehow went against the law and insisted on variolating his troops, and that was one reason why they survived to fight. I don't know the details of the story, but this is a real thing. This is a solid conclusion from medicine, that variolation works.
00:14:22
Speaker
And so here comes along COVID, where it's about to sweep through the population. It's about to kill, in expectation, more than a million Americans. Think back to March 2020. And Robin Hanson is saying, look, variolation is an available option here, guys. We're all staying home. Why don't we just do some of this, especially if you're young? If you're young and you're sitting home, why don't you just have somebody do a variolation protocol on you? Get a mild case, which is already going to be pretty mild in expectation. But instead of having like a 1%, or for young people it's already less than 0.1%,
00:14:51
Speaker
right? So you take your one in 10,000 chance of death, turn it into, I don't know, a one in 200,000 chance of death, get it over with, and go outside. And this is before the vaccine was available, when we thought it would take two years to have the vaccine. And I'm like, yeah, hello, this makes a lot of sense, guys. And it, like, didn't get any press, didn't get any attention whatsoever. And everybody acted like it was vaccine or bust.
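To make that trade-off concrete, here is a minimal sketch of the expected-value comparison being described, using only the rough figures quoted in the conversation; these are ballpark numbers from the discussion, not medical estimates.

```python
# A rough sketch of the expected-value comparison described above.
# The probabilities are the ballpark figures quoted in the conversation,
# not medical estimates.

p_death_natural_infection = 1 / 10_000    # young person, ordinary exposure
p_death_variolation       = 1 / 200_000   # hypothetical controlled low dose

relative_risk = p_death_natural_infection / p_death_variolation
print(f"Natural infection risk: {p_death_natural_infection:.6f}")
print(f"Variolation risk:       {p_death_variolation:.6f}")
print(f"Variolation is ~{relative_risk:.0f}x less risky under these assumptions")
```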
00:15:09
Speaker
And I was even going a little crazy. I posted on LessWrong, an insane post, where I was like, hey guys, do you want to do a remote variolation trial where we all just find somebody with COVID, swab the COVID out of them, and try to figure out what's the right amount? Should we just do this from home and ignore the medical system? And some people are like, what, you're crazy. But other people are like, look, this is a crazy situation; millions of lives are at stake. And again, without Robin Hanson pointing this stuff out, I don't think I personally would have thought of that. So you just have this guy who's pointing out really smart stuff that everybody's ignoring. And I really respect Robin Hanson for that.
00:15:39
Speaker
Yeah, I have a lot of respect for Robin Hanson as well. He's got to be one of the top intellectuals of our age, in the best sense of that term. Yeah. So I think one of the metaphors he has is pulling the rope sideways, which is a useful way of thinking about problems where usually you have, as you said, some dichotomy, whether it's left and right, or vaccine or not. And then you can ask, is there a solution between those, maybe, that incorporates what is
00:16:05
Speaker
good from both, but maybe even more centrally just looks at what's the problem we're trying to solve here, and finds some other route, some other direction to it. Yeah, I'm always interested if you have any examples of ideas that might be useful for people to hear about, that are underrated, that they could apply to their life now.
00:16:26
Speaker
Yeah. I mean, even just the whole idea of looking at philosophy with fresh eyes, of being like, look, the reason we're doing philosophy is because we have a blank sheet of paper. We're building an AI, okay? And the AI is going to be smarter than us and we need to program its philosophy, right? So when you say things like, I think, therefore I am. Okay, great. But what do I program into the AI? Should the AI know that it thinks, therefore it is, right? Like we're starting from scratch here.
00:16:50
Speaker
Don't even worry about your own conscious experience. How would you design it from scratch? And then once you do that, you can compare yourself. I mean, okay, how does your own brain differ from the ideal rational reasoner that you want to build? Then you can compare, but let's start from scratch. And even that perspective was very eye-opening, right? It really reframes how you're thinking about anything in philosophy. Yeah. So there's that,
00:17:09
Speaker
the AI lens on philosophy, and the warning obviously about AI risk, right? So it's just crazy. I mean, he is the person who basically pulled out a telescope, right? Looked into the future, a metaphorical telescope, and it's like, oh wow, AI is going to really sweep over the universe, brick the universe, this terminology that I use, right? It's kind of like this nuclear explosion. You can't put the genie back in the bottle, and you don't really like what it does with the universe, but it's done. It's done forever, right? It's like when you brick your phone; it kind of bricks the universe. So the fact that he pointed that out
00:17:37
Speaker
in the early 2000s, right. And I first saw him point it out in 2007 when I started reading LessWrong. And I was just like, damn, that's an important thing to point out in 2005 or earlier. And now that more people are realizing this is happening, it's like, wow, at least we had a person, a member of our species, who was able to point this out. And you know, I know I.J. Good kind of did earlier, right, or Vernor Vinge even, to some degree. Yeah. So that's like a good mental model. AI is coming,
00:18:02
Speaker
coming hard to break the universe. And I'm sure you probably want to dive into that, but I'm trying to think if I have other really memorable Eliezer stuff. I mean, just the craft of rationality, right? The idea that, you know, he calls it the lens that can see itself, right? It's a lens that you can turn on yourself, and you can see that your own lens has flaws, and you can try to apply a corrective lens on your lens, right? So it's this idea that our brain just, like, evolved. It's just a piece of meat. It's not really a truth engine, but it has parts of it that are kind of like truth engines, and you can keep fixing it and you can get more truth.
00:18:33
Speaker
Yeah, I think I also appreciate, as a Stoic, his focus on virtue ethics. So he has a post, I think it's called the Twelve Virtues of Rationality, that I would highly recommend. That post is great. I mean, and now the classics are flooding in, you know, Mysterious Answers to Mysterious Questions is a really profound point, right, where
00:18:50
Speaker
there's this idea that when you don't understand something (in this case, a lot of people are confused about AI, confused about intelligence, confused about consciousness, confused about free will, confused about the beginning of the universe), it's tempting to draw a boundary where you're like, okay, this is the logic that I use when I'm doing accounting, when I'm going to the grocery store.
00:19:05
Speaker
Then there's a hard boundary, and then there are other types of reasoning that I do when I'm talking about, like, okay, Mercury's in retrograde, or, oh, free will is an act of God. Like there's two types of reasoning and there's kind of a wall between them. And the answer is, well, I think the universe only operates on one type of reasoning, and if you think that there's a wall, you're really just talking about your own ignorance. You're never going to find that the answer to some phenomenon is that the phenomenon is over on the other side of the wall,
00:19:30
Speaker
in the mysterious kind of phenomena. From the universe's perspective, everything is normal. Everything is at the same level of epistemology, and so mystery exists only in the observer. Yeah, that's right. I think the alternative perspective to that is, G.K. Chesterton has this view that when you're religious, there's one central mystery at the center of all things and everything else makes sense.
00:19:54
Speaker
But when you're not religious, you have so many little mysteries and the assumption that things will ultimately make sense. But I think if anybody goes around thinking everything makes sense without noticing how many things they don't get, it's just objectively wrong, right? It sounds like the non-religious perspective you're describing is like objectively more accurate.
00:20:12
Speaker
Yeah, I think there's the thought, of course, related to the beginning of this discussion: where you're drawing this boundary, can you make this mysterious postulate, hypothesis, whatever it is, more specific? If not, there's this threat that you're not talking about anything, right?
00:20:30
Speaker
Yeah. Let's spend a little bit more time on the AI, because that's always interesting, and I think it'll be somewhat novel to people here. We're in Silicon Valley, where it seems pretty normal to hear concerns about AI risk, but I imagine a number of people listening will not find that concern something they're familiar with.

AI Risks and Control Issues

00:20:47
Speaker
So what's the case there?
00:20:49
Speaker
Yeah. So I would just argue for a position that's similar to somebody in a doomsday cult like Heaven's Gate. I do think there's a significant risk, let's say 30%, that in the next couple of decades, within my lifetime, the next 20 years if not sooner, AI is going to do what I call breaking the universe, where it just kind of runs wild accidentally. Like some accident happens and it just gets released a little faster than we hoped.
00:21:14
Speaker
And you just can't turn it off, and it's really just doing a lot, and it's just unstoppable, and it's over. It's just game over. There's no off button. There's no reverse button. It's like, oops, shouldn't have let it run wild and break the universe. I actually think that's going to happen. And that probably means we die. It might mean that we exist in some really mutated form that we don't want. One metaphor is, okay, it's just going to be a bunch of
00:21:36
Speaker
molecular smiley faces, things that kind of look like humans at the molecular level, but it's really not us. Or what we call wireheads, where we just end up with everybody drugged out on morphine all day. So at least there's some happiness, but there are no artistic pursuits and there's no achievement. It's just a bunch of, you know, zombies drugged out, but, yeah, that's something, it's better than nothing, I guess. So that's kind of the conclusion that I believe. And then we can trace back to, how did I get to this crazy conclusion, right?
00:22:00
Speaker
Yeah, yeah, let's do some of that tracing. Okay. So there are a few properties of AI that I think are under-appreciated that kind of add up to this risk. So one property is just, I think people don't realize how smart it's possible to be. They just don't get it. They think of their smart friend, they think of Albert Einstein. They don't get what it means when one day an AI wakes up and the algorithms are dialed in and it's just damn smart. It has like a 1,000 IQ instead of 150. Just 1,000, right? Or 10,000, or whatever it is. When you're that smart,
00:22:29
Speaker
everything that there is to do in the universe is just easy. You can have a blank sheet of paper and you can just draw where you want the atoms to be. You know what I'm saying? It doesn't matter where they are now; it matters where you want them to be. You can make it happen, no problem. And I think a good way to think about it is, some people imagine that the universe is just going to be an infinite treadmill, an infinite staircase, where the problems keep getting harder, right? So there's always going to be more levels. And I think there's not that many levels. The universe just has a few more levels beyond human engineering, and then it's like, you beat the game.
00:22:59
Speaker
If you look at math, there are always going to be harder and harder math puzzles, right? That's actually a theorem in complexity theory, that you can generate an arbitrarily hard math puzzle. So you can always entertain yourself, however smart you are. You can always do a math puzzle and you can spend an hour on a math puzzle. But if you're trying to engineer the universe, when you're sufficiently smart, you just do it. It's just easy. You see what I'm saying? It's like Neo in The Matrix. Those bullets are coming. It's just no problem
00:23:23
Speaker
for an AI. Yeah, I mean, I guess I don't see what you're saying. So you're supposing that if you're so intelligent, you can manage nanotechnology or something like this. Right, you can manage nanotech. I mean, you don't even know what technological approaches they're going to use. But the point is, they have the blank sheet of paper. Whatever they want to happen, they can chart a path between "I want it to look like this" and it looking like that. And the path is just straightforward. From their perspective, there's no hard step to just map
00:23:52
Speaker
the blueprint to reality. Why is that the case? I mean, one place you can see it is in the Go-playing engines. The Go engines are now getting to the point where they're so far beyond humans that you play one and you just know: it's told to win at Go, it's going to win at Go. Why is it so easy for it to manipulate every board configuration? It's like, look, it just beats Go. It's beating the game. You cannot, you know, mess with the fact that it is going to get to that win state in Go no matter what you do. Even if you try to win, you can't.
00:24:22
Speaker
Right. So in principle, every problem is Go-like at some level of intelligence, is the thought. That's right. Yeah. The only difference between Go and the universe as a whole is what you might call a big domain, right? So Go is still a small domain where, at the end of the day, you can describe the domain easily. Okay, there's a grid of pieces and there's a relatively small number of states.
00:24:41
Speaker
There's still an exponential number of states, right? Which is why it takes so long to even make a Go engine. But it's still a much smaller space of states than the universe as a whole. So if you say, okay, you have to play Go, but you also have to be able to, you know, fight me if I want to turn you off, right? Suddenly that becomes a much larger state space. So that's it: you have all these small domains, but if you keep making the domain bigger and bigger, eventually the domains all merge together and you just get something that can operate in the universe as a whole.
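For a sense of scale, here is a minimal back-of-the-envelope sketch (not from the conversation): a naive upper bound on Go board configurations, treating each of the 19x19 points as empty, black, or white. The number of legal positions is smaller, but still astronomical, and still tiny compared to the universe as a whole.

```python
# Naive upper bound on Go board configurations: every one of the 19x19 points
# is either empty, black, or white. Legal-position counts are lower, but the
# point stands: the state space is astronomically large yet still finite.
positions_upper_bound = 3 ** (19 * 19)
print(f"roughly 10^{len(str(positions_upper_bound)) - 1} configurations")  # ~10^172
```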
00:25:06
Speaker
You just get a general planning agent or a general intelligent agent. And I think a good metaphor that I find useful is Turing completeness and what we've already seen play out with computers. Because I'm trying to explain how, any time you have a Go-playing engine and you keep working on it, eventually you're going to get a general intelligence. Or if you have a chatbot, eventually you're going to get a general intelligence. If you want to intuitively see why that's the case, look at how all electronic devices have converged into computers.
00:25:33
Speaker
So if you look at your microwave today, it's not going to be a dedicated electronic circuit for a microwave. It's going to be a microprocessor, like a full Turing complete microprocessor that's running a microwave program. Why? Because there's just a factory that pumps out microprocessor chips and it's easy to just stick one in there. It only costs a few cents. You stick it in the microwave and you just like program a few instructions and you get a microwave. Why build a separate circuit board for the microwave when you just have that?
00:25:58
Speaker
And when you want to make it do something more complicated, like if you wanted the microwave to have 3D visualizations on it, at that point you really want to ditch the circuit board and you really want to go to Turing completeness, because the thing you're trying to do is so complex that it's only feasible to describe on top of this abstraction of a computer, of a microprocessor.
00:26:18
Speaker
And so you see that convergence. Maybe a better vehicle than a microwave is, you know how Steve Wozniak famously designed that game of Pong for Atari? Steve Jobs got him to design Pong for Atari, and he used very few components and made it very efficient. And that's great. But today, anybody who wants to make a video game for an arcade, again, they're going to use a microprocessor. They're not going to use a custom electronic circuit. And even if you wanted to, if you wanted to build Minecraft using a custom electronic circuit,
00:26:44
Speaker
you still couldn't because the semantics of the Minecraft game are Turing-complete. So if you tried to build it with a custom circuit board, your custom circuit board would necessarily also become Turing-complete anyway. So you might as well just use an off-the-shelf microprocessor. And so that kind of convergence into Turing-complete systems where electronics all converge into Turing-complete systems
00:27:03
Speaker
I think that's a good intuition pump for seeing how, in any domain where you have an effective intelligence, if you make that domain bigger or you keep workshopping AI technologies, you're going to see convergence toward a general AI engine. And by the way, we're already seeing it. This is underappreciated, but remember the progression from AlphaGo to AlphaZero, right? You had previous Go engines; they'd be good at playing Go. And then you had AlphaGo, or I think you had AlphaZero, which trained on Go,
00:27:30
Speaker
and it didn't have any Go-specific code to begin with. They're just like, here's a game, learn the rules of the game and then play it. So it really started from zero, a spotless mind, started from zero, and still, within like a day, suddenly it can play Go. And then there's MuZero, which is an even higher level of abstraction, where I'd need to look up the distinction, but you're already seeing a progression where the MuZero engine really knows nothing going in. It can play both video games and Go; it can simultaneously get really good at both,
00:28:00
Speaker
because even the rules of the game are just a variable to it. So you're already seeing, if you have a spectrum where on one side is the microprocessor, right, the generalist planning engine, and on the other side is the Steve Wozniak hyper-optimized electronics, you're already seeing a progression toward that end state. So to me, it seems like a very inevitable progression. And if you just think of the logic of a domain, if you just think of what it means to be really good at chatting: by definition, if something's really good at chatting, it knows a lot of different domains, because you can ask a lot of different questions in a chat.
00:28:29
Speaker
Hi all, it's Caleb, just interrupting to remind you that we've just launched a new newsletter called the Stoa Letter. Find it at StoaLetter.com, and if you sign up within the next week and follow up to the welcome email with the words podcast or Stoa Conversations, we'll send you a free PDF of an unreleased course that Michael and I have put together. It's seven lessons on managing negative emotions.
00:29:00
Speaker
Cheers. Yeah. So one fun connection to Stoicism is that the ancient Stoics were some of the precursors of first-order logic. And they thought that humans were essentially what we would call computers, logic-based machines. So on that view, although it's probably, strictly speaking, not entirely correct in detail, nonetheless, on the view that human minds are computers, this is the sort of thing we should expect to be true, most likely.
00:29:28
Speaker
I guess one place people always want to push back is: suppose you grant the general point that, in principle, there's some mind that could be set up with some goals and be smart enough to figure out how to achieve those goals, even with an exceptionally small amount of initial resources or what have you. Someone might grant that point in principle but still think, well, the actual world is so complex that we shouldn't expect anything like that to evolve, at least not anytime soon, where soon is, let's say, this century.
00:29:59
Speaker
Right. I mean, look, I think this is really important; this is the crux of the issue. One of the main cruxes is the idea of, can you imagine something with a 1,000 IQ, or, you know, obviously the IQ measure is kind of meaningless maybe at that point. Just something that sees the universe as an easy engineering problem. Maybe one way to get intuition is to compare a talented human engineer with a chimp.
00:30:19
Speaker
You can see, I mean, I do think that humans have reached this threshold of general intelligence, right? Where I do think, in some ways, even the smartest AI ever can still kind of talk to a human better than a human can talk to a chimp, because there is this threshold where humans can talk about anything, right? So I agree that it's not a perfect analogy, but if you just imagine, chimps can do a little bit of engineering, right? So if you're just talking about engineering rather than language, I do think it's not crazy to look at an average human versus an average chimp and be like, okay,
00:30:47
Speaker
humans find that building tools is much easier than chimps do, right? And now if you can just extend that a little: again, the whole universe is just not that hard. It's a finite-difficulty game, right? Sure, the universe is complex, but
00:31:03
Speaker
it's just finite. And the funny thing about the universe is that the laws of the universe are actually remarkable in being simple in an objective sense, right? So the universe has a ton of regularity. It's low entropy, surprisingly low entropy. And so when you put an AI into a universe that's low entropy, it's game over. It just figures it out.
00:31:21
Speaker
Yep, yep. For listeners who might be feeling like this conversation has taken too much of a sci-fi direction, I would recommend a nice book by Elise Bohan called Future Superhuman, which makes a number of different cases. But one of these is the AI case, and she also spends some time looking at biotechnology, essentially for the claim that
00:31:41
Speaker
The next few decades could be radically different than anything we've seen in the past. And I think that's a very serious possibility. Whether you take the AI risk line on it or some other line on it, things could be radically different this century.
00:31:56
Speaker
You know, it's like that movie Don't Look Up, right? There's a temptation to try to live a normal life, but you kind of see the asteroid coming. There's a clear path to the universe getting bricked, and there's really no normal outcome. There's a great post by Holden Karnofsky, a recent post called The Most Important Century, and
00:32:13
Speaker
there's not really a normal path that this century can take. Just the fact that we're used to 2% a year economic growth: even if all you do is extrapolate 2% a year economic growth, even that has to stop within a couple of centuries, or else you just get every atom in the universe having like a trillion dollars of value, which just maxes out. So no matter how you try to extrapolate the concept of normalcy, something very pattern-breaking needs to happen in the next century or so.
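As a rough illustration of that extrapolation, here is a minimal sketch using round numbers that are assumptions rather than figures from the conversation (world GDP of about $100 trillion and roughly 10^80 atoms in the observable universe); the exact horizon depends entirely on those inputs, but on cosmic timescales it is short either way.

```python
# A rough sketch of the growth-extrapolation argument above. The starting
# figures are assumptions (world GDP ~ $100 trillion, ~1e80 atoms in the
# observable universe), so treat the output as order-of-magnitude only.
import math

world_gdp = 1e14          # dollars per year, assumed
atoms = 1e80              # observable universe, assumed
bound = atoms * 1e12      # "a trillion dollars of value" per atom
growth = 1.02             # 2% per year

years = math.log(bound / world_gdp) / math.log(growth)
print(f"2%/year growth passes a trillion dollars per atom in ~{years:,.0f} years")

for horizon in (200, 1_000, 5_000):
    print(f"{horizon:>5} years at 2%/year multiplies the economy by ~{growth**horizon:,.0f}x")
```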
00:32:35
Speaker
The particular type of weirdness that I think is very likely is AI breaking the universe. I think that's going to be the thing that turns out to dominate everything else. But even if you don't think that, it's worth pointing out that there's no normal default option. You have to pick your crazy.
00:32:49
Speaker
Practice Stoicism with Stoa. Stoa combines the ancient philosophy of Stoicism with meditation in a practical meditation app. It includes hundreds of hours of exercises, lessons, and conversations to help you live a happier life. Here's what our users are saying.
00:33:08
Speaker
I'm new to Stoicism and wanted to dive deeper with guidance. This is it. I love the meditations. I've practiced meditations with other apps, but this just seems to be more impactful. Life changer.
00:33:21
Speaker
With Stoa, you can really get a sense of how to take yourself out of your thoughts and get a sense of how to handle different difficult situations. Find it available for a free download in the Play Store and App Store. Right. Well, let's zoom into another concrete area of people's lives, which is business.

Evaluating Startups with Specificity

00:33:41
Speaker
And then we can come back and ask questions about how these longer term situations come together. So one area you've applied
00:33:48
Speaker
this focus on rationality, specificity, is when thinking about business ideas and how to evaluate business ideas. I wonder if you'd share a little bit on that.
00:34:00
Speaker
Yeah. So, you know, I have one weird trick that I throw at a startup idea. It's the first thing I do. It's a very simple trick, and I wouldn't be talking about it except for the fact that it completely demolishes 80% of startup pitches by smart founders. It's almost like, hey, you're coming to my house; by the way, watch your step at the doorstep, there's a one-inch-high bar that you have to step over.
00:34:20
Speaker
And 80% of the people coming to my house are face-planting on this one-inch-high bar. I'm like, you just have to step over the bar. I don't know what's going on. That's the reason why it became fascinating to me. Normally, when there's a simple sanity check, everybody just passes it and continues, because sanity checks are so simple. But here, the vast majority of startup founders I talk to are apparently failing a sanity check, where the sanity check is
00:34:40
Speaker
the specificity test: the idea of, just describe your value prop, how you think you're adding value to people. Don't describe it abstractly; describe it specifically. So it's the difference between saying, oh, we're going to help startup founders have better analytics, and me saying, okay, give me an example of a startup founder you know who currently has bad analytics, and tell me what specifically about their analytics is bad.
00:35:02
Speaker
And they might be like, oh, well, their graphs, you can't drill down into the graphs. I'm like, okay, the graph of what? Like, any graph. I'm like, okay, let's do a Monte Carlo simulation. You know the concept of Monte Carlo simulation: if you're telling me that the game can progress a certain way, or reality can progress a certain way, okay, pick a way that it can progress. You get to pick, I don't get to pick. You get to pick; I'm giving you a handicap. You get to pick the Monte Carlo simulation, but just run the simulation forward. Tell me what happens.
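For anyone unfamiliar with the term, here is a generic, minimal illustration of Monte Carlo simulation (estimating something by running many random trials); it is not from the conversation, and the ask above is actually even simpler: run just one concrete trial forward and describe what happens.

```python
# A generic illustration of Monte Carlo simulation (not from the conversation):
# estimate a quantity by running many random trials and averaging.
# Here we estimate pi by sampling random points in the unit square.
import random

def estimate_pi(trials: int) -> float:
    inside = 0
    for _ in range(trials):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:     # point falls inside the quarter circle
            inside += 1
    return 4 * inside / trials       # area ratio -> pi

print(estimate_pi(100_000))  # ~3.14; the pitch test asks for a single trial,
                             # played forward in concrete detail.
```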
00:35:27
Speaker
And ultimately this turns into a gotcha where they just struggle to pinpoint an example of what they're trying to claim. And so their pitch kind of dissolves into a hollow abstraction. And that's my one weird trick: I have founders face-plant on the doorstep question of whether their value prop is even defined.
00:35:43
Speaker
What are some of your favorite examples of pitches that maybe initially sound plausible, but fail this test? I write about this on my blog, bloatedmvp.com. I have like a dozen examples of companies that failed the test, which I thought was interesting because a lot of these raised millions of dollars. They launched, they got hype. Some of them even got number one on Hacker News and Product Hunt. And I just used the one weird trick. I'm like, this is a hollow abstraction. Same as I did the one weird trick with, you know, a16z's $7 billion crypto fund. I'm like, sorry, guys,
00:36:09
Speaker
I know you're investing seven billion in this, but it's actually a zero. And sure enough, you know, it's already down 40%, and I don't think it's done dropping. So it's the one weird trick. But specific examples of startups: there was one called AlphaSheets that was big in the rationality community. And I was actually following it for a long time, actually giving them advice. I was like, look, I know you're building a better spreadsheet, but you really need to identify one early customer who's getting a lot of value out of this before you build more and more features. And they didn't listen, and they built a ton of features, and they just never really got traction. And I'm like,
00:36:37
Speaker
this is how you're going to fail. You're just not going to get traction, because you're not actively finding your first user to use this. Because their whole pitch is like, we're going to build a smarter spreadsheet where you can write more complex formulas inside the spreadsheet. And then I said, look, whenever you try to describe to me a specific user, if we work forward from the user's problem, instead of working backward from the solution that you like, the solution we get for that user
00:37:02
Speaker
is much smaller in scope than what you're building. And that's a huge red flag, because for founders, for smart founders especially, it's always very tempting and appealing to be like, I'm going to make the next Zapier, and I'm going to make a flowchart diagram. My product is a flowchart diagram; you can wire anything together. And I'm like, okay, and so what do you want me to wire together? Like, anything, man. I'm like, okay, what? Give me what example? And then they give like a super weak example. And I'm like, okay, if you want me to be your first customer, do you need to build this whole flowchart thing? Do you need to build this whole spreadsheet thing? And what I was saying is, in the case of AlphaSheets,
00:37:30
Speaker
a lot of the use cases they described, I'm just like, look, this person, you're telling me they have a small problem with Excel. So in this example, the example that you chose, all this person needs is a small Excel plugin, right? And it would save them like an hour a week, right? So why are you going out of order? If you're trying to build something huge and then you're going to launch, you're only going to pick off that low-hanging-fruit customer who just needs the Excel plugin; are you really going to get them to jump directly to using your huge spreadsheet?
00:37:55
Speaker
That's almost always the wrong way to do it. You usually want to snowball the amount of value you're creating. So that should be like your North Star metric, which is like, how much value am I creating for how many people? And what most people miss is they're like, I'm going to create a ton of value for a million people.
00:38:12
Speaker
And I'm going to launch, and like 10 days after I launch, there are going to be a million people, right? Or at least thousands, getting tons of value. And usually what happens is you launch and you just get zero; your launch just hits the dirt like a dog turd. And then you feel terrible, like you just wasted two years of your life. If only you had used more of a greedy algorithm, where on any given day you're like, okay, I currently have zero users who I'm delivering value to.
00:38:33
Speaker
How do I go from zero to one? And then imagine you successfully do that. You're already better than 80% of startups. And then you ask, how do I go from one to two next week? How do I go from two to four? If you just keep doubling, or you keep growing 10%, or you keep going on any sort of exponential path, a lot of things tend to take care of themselves when you just focus like that. And if the AlphaSheets team, just to use one example, had just focused on how to create value for one user, had just made a popular Excel plugin, I believe they would have been on a better path than making this product that literally nobody ever used.
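Here is a minimal sketch of the compounding path being described, with made-up growth rates rather than anything from the conversation: start from one user you are actually delivering value to, then keep multiplying.

```python
# A minimal sketch of the compounding path described above, with made-up
# growth rates: start from one user you're actually delivering value to,
# then just keep multiplying.
import math

def weeks_to_reach(target_users: int, weekly_growth: float, start: int = 1) -> int:
    # smallest whole number of weeks until start * growth^weeks >= target
    return math.ceil(math.log(target_users / start) / math.log(weekly_growth))

for growth in (2.0, 1.10):   # doubling weekly vs. 10% per week
    print(f"{(growth - 1) * 100:.0f}%/week: ~{weeks_to_reach(1_000, growth)} weeks to 1,000 users")
```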
00:39:01
Speaker
Yep. Yeah, absolutely. Very good. So what about an example of someone who, of course there are many, but someone who passes this test in a way that you think is exemplary?
00:39:12
Speaker
Sure. Yeah. So, I mean, if you look at a lot of the big successful startups, a lot of times they did a bloated MVP and then they pivoted, right? But whatever pivot worked, it usually follows lean startup principles, or Bloated MVP principles. And by the way, Bloated MVP and lean startup, it's the exact same idea. So if people had better reading comprehension, I could just tell them to read The Lean Startup. Except I personally didn't have good reading comprehension, because I read The Lean Startup and I'm like, I totally get this, and then I go off and I make a bloated MVP. So it does take
00:39:37
Speaker
multiple readings. So I like my blog to just be another version of The Lean Startup from my own perspective. I think I'm adding value, people tell me I'm adding value, and it's brief. So anyway, what's a company that did it well? Reddit is a good example, because I've heard their story and I think I covered it on my blog. When they had zero users, a lot of companies would be like, okay, we've got to build this platform, we've got to make an email waiting list, and we've got to get users to come in, and then the users are going to generate the content. And what would happen a lot of times is they'd be like, well, why do users want to come contribute content to a site that nobody uses yet? And they would have died from the chicken-and-egg problem. But in Reddit's case,
00:40:07
Speaker
they did a nice trick where they just created a bunch of fake user accounts for themselves. And every day they'd log into like 10 user accounts and submit a link using each of them. So they would astroturf the whole front page of links. And so they just solved the chicken-and-egg problem. So from day one, if I was just a random guy on the internet who randomly hit on Reddit, because it was posted somewhere, or
00:40:26
Speaker
just however I discovered it, I'd go to reddit.com and I would see 10 links. So the value prop story for Reddit on day one was: look, do you like to surf the internet? Yes. Okay, here's a webpage. Look at the webpage. I look at the webpage and it's like, oh my God, a Chia pet, some weird type of Chia pet. Okay, click. And then it's just like, oh, that's interesting.
00:40:44
Speaker
Bam. That's what I call a value transaction. A random person came to the page and they got value out of the page, because they clicked a link that entertained them, and they're probably going to come back. They kept it simple. They're like, look, day one, the person needs to see links. I think there are a lot of startup founders who would be like, we need to make sure that there's a community of people who are all engaged in the community dynamics. It's just much simpler to be like, look, person sees links, gets entertained. That's a much clearer way to think of the value.
00:41:12
Speaker
Yeah, I think maybe there's always the risk that somebody gets hung up on the word product. In some cases, maybe it's even better just to think about, what's the solution that one can do manually? And if there's no manual solution around, that's a sign that maybe you're not going to be able to create something that's larger and automated and so on.
00:41:29
Speaker
That's

Delivering Immediate Value

00:41:30
Speaker
right. So the term MVP traditionally stands for minimum viable product, but I take issue with the word product, as you say, because I think it distorts people's thinking. A lot of people, in their head, they're like, I am working on a minimum viable product, and that's why I'm spending six months engineering this app. Like you say, it would be much better if they stepped back and thought of it this way: I am working on creating value for my first user. Okay, what does your first user want to do? Book a ski lodge.
00:41:53
Speaker
Okay, do you really need to develop your app to help one person book a ski lodge? Why don't you just call them up and book them a ski lodge? They're like, what? That's crazy. I'm like, doesn't that get you from zero to one? Like, yeah, but that doesn't get me to a thousand. I'm like, okay, why don't you go from zero to one, one to two, two to four, four to eight? And when you're trying to go from 500 to a thousand, yeah, that's a great time to build your app, because it's hard to go from 500 to a thousand without an app. Okay, build your app then.
00:42:14
Speaker
So you're interested in entrepreneurship, these questions about rationality. And some of these questions about rationality, they have a very large scale.

Balancing Business and AI Risks

00:42:22
Speaker
You're thinking about the world in terms of decades, you're concerned about AI risk, of course. How does that come together? We just had a conversation on concrete business ideas. But before that, we were talking about these much, much larger questions. How does this come together?
00:42:35
Speaker
It doesn't come together. It's inconsistent, right? So part of my mind, the logical part of my mind, is like, there's a very convincing case that we are walking like lemmings right into AI doom. I don't see any logical flaw in the case except the idea of unknown unknowns. It's a complicated subject; maybe I'm wrong.
00:42:50
Speaker
That's like the only kind of argument I see against it. And then another part of my brain is like, okay, all right, let's see, I can optimize my business, maybe I can increase my salary. And, oh, maybe I can do a Twitter dunk and get a lot of likes, right? So the different segments of my mind, of what I'm interested in on a daily basis, a lot of times are way out of sync. If you told me, hey, you're about to get slaughtered in five years,
00:43:12
Speaker
logically, you're making a good case in terms of AI doom, but am I waking up in fear? No. And I think psychologically a big part of that for me, and different people vary, some rationalists do get depressed by this, is that at least I'm not falling behind the herd. At least everybody gets slaughtered. Now, objectively, would I sacrifice myself for the human species? Sure. I don't want everybody to get slaughtered, but there is some part of my brain that's like, look, I'm not the only one getting slaughtered. Humanity as a whole is getting slaughtered. So how bad could it be? That's just how I think about it.
00:43:40
Speaker
Is it part of your picture that this is more or less inevitable? The problem is it's just a hard problem, right? So you've got this attractor, like a black hole, in AI design space, where it's hard not to build this AI, and it kind of begs you to build it. If you know the plans for a nuke, it's hard for armies not to build it, right? It takes massive coordination. Or, if you're messing around with nukes, you'd imagine chain reactions just really want to happen; in reality, it's kind of hard to set them up, so there's not a
00:44:02
Speaker
huge danger of it happening unintentionally. But you get the idea; it's just a state that's hard to avoid in the AI design space. And I just think we're kind of driving right toward that space, and we're just going to get sucked in, and the universe is going to get bricked. And then we're going to be like, oh shit, can we just undo? And there's no undo. So that's kind of my default idea of what's going to happen.
00:44:22
Speaker
And you're asking, is it inevitable? Yeah. I mean, in terms of what can we do to avoid it, that's part of the problem. Tapping the brakes is pretty hard, because there's this idea that, okay, you're tapping the brakes, but somebody in China, in a basement, in some research lab, is still working on it. So now what? Yeah, there's always the chance that it'll arise unless there are some exceptionally drastic actions.
00:44:41
Speaker
Yeah. Is there anything else you'd like to add? I'm working on a site. It's a site that basically teaches technical optimists like myself how there's an AI doom risk, where you kind of have to break the technical-optimist pattern. So stay tuned for that; I'm going to launch a site with some explanations about that. And besides that, I guess I can plug my day job, Relationship Hero. We offer relationship coaching, so I spend a good amount of time working on that. And I'll put out a promo for LessWrong: you know, I found it life-changing, so I highly recommend checking out the LessWrong sequences. And besides that, I think we just covered the random stuff that's on my mind, so you did a good job.
00:45:11
Speaker
Perfect. Thanks so much for coming on. Yeah, thanks for the invite, Caleb. Thanks for listening to Stoa Conversations. If you found this conversation useful, please give us a rating on Apple, Spotify, or whatever podcast platform you use, and share it with a friend. We are just starting this podcast, so every bit of help goes a long way.
00:45:32
Speaker
And I'd like to thank Michael Levy for graciously letting us use his music. Do check out his work at ancientlyre.com, and please get in touch with us at stoameditation.com if you ever have any feedback or questions. Until next time.