Introduction to Nathan Labenz and AI's Economic Impact
00:00:00
Speaker
Welcome to the Future of Life Institute Podcast. My name is Gus Docker. I'm here with Nathan Labenz. Nathan does research and development for Waymark, which is a company that does AI video generation. He's also the co-host of the Cognitive Revolution podcast. Nathan, welcome to the podcast. Great to be back.
00:00:22
Speaker
Fantastic. I think we should talk about economic transformation coming from AI as we see it today. If we talk about large language models, or generative AI in general, how do you see those AIs transforming the economy? That's of course
AI's Role in Job Transformation and Productivity
00:00:43
Speaker
a huge question, but perhaps you could tell us about how you think of this question in general.
00:00:48
Speaker
For sure. It is a huge question. I have limited confidence on how the dynamics will play out. I do think it's very hard to predict. We talked last time about the emergence of agents and the dynamics that may develop between them. I have a lot of uncertainty about how things will shape up, but I think for starters,
00:01:13
Speaker
The smartest thinking that I came to, and I think the smartest thinking I've read from others, points to the unit of a task as being the way to think about things. Because language models are really good at tasks.
00:01:32
Speaker
And also, language models, at least in isolation, are kind of limited to discrete or kind of finite tasks because of their nature with the limited context window and all the things that folks I think will be familiar with.
00:01:46
Speaker
Do you think it's so simple that we can simply describe jobs as being made up of a number of discrete tasks? Or is there something we're missing if we do what economists sometimes do and then take a job and then split it into a hundred different tasks? Do you think the kind of straightforward model there is the right model? I think it's a pretty good start for sure. People always have these kind of binaries like, is it going to augment humans or is it going to replace humans?
00:02:15
Speaker
You know, inevitably those things end up being kind of a false binary. And the answer, almost always when I hear something like that, ends up being, in my mind, it's going to be both at the same time. And it's not that hard to generate examples of each different process playing out.
00:02:37
Speaker
The examples don't invalidate each other. Rather, I think they should lead you to a conclusion that both of these things are going to happen. And in different domains, different changes may predominate. So I do think that, yes, decomposing a job into tasks and saying which of those tasks could a language model do is a pretty good way to get started on analyzing what's going to happen.
00:03:05
Speaker
The language models can do big enough tasks that even if you stop there, I think you have to... I don't know how you could come to any other conclusion other than they can do a lot of tasks. So I've started talking about this in terms of the unbundling of jobs into tasks.
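A toy illustration of that unbundling framing, with a hypothetical job and hand-assigned capability flags; the decomposition and the judgments are invented, not from the conversation:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    llm_capable: bool  # a hand-assigned judgment call, not a measurement

# Hypothetical decomposition of the doctor's-office job discussed below
job = [
    Task("answer phones and schedule appointments", True),
    Task("handle intake paperwork", True),
    Task("take blood pressure and vitals", False),  # physical task
    Task("interface with billing afterward", True),
]

automatable = sum(t.llm_capable for t in job) / len(job)
print(f"{automatable:.0%} of tasks plausibly LLM-capable")  # 75%
```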
00:03:27
Speaker
Because at the same time, it does remain true that if you were to go look at any given job and say, could a language model do this whole job as currently constructed? Then the answer is like almost universally no. You know, there's some element of
00:03:44
Speaker
physicality, which is like the most obvious one right now. I think we're going to have robots too, but that's a little bit delayed relative to GPT-4 anyway. There's sometimes just a lot of implicit context, where there may be kind of long-term memory requirements. So there are a lot of things that if you just said, can a language model in isolation as it exists today do this whole job? There's a lot of reasons that the answer kind of boils down to no. But if you start to do time slicing or look at the tasks that ultimately make up the job,
00:04:13
Speaker
a lot of times, the answer for individual tasks becomes yes. And I think critically, a lot of times, for the core task, it is yes. Some of the stuff we do on computers, right? Like, I need to go put some summary of a call into my CRM. If I have the transcript of the call and I want the summary, GPT-4 can do that.
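To make that concrete, here is a minimal sketch of the call-summary task, assuming the OpenAI Python SDK; the model name, prompt, and function are illustrative, not anything described in the conversation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_call(transcript: str) -> str:
    """Turn a raw call transcript into a short, CRM-ready note."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize this call in three bullet points "
                        "suitable for a CRM note."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# The summarization is the easy part; actually logging into the CRM
# and filing the note is where, as discussed next, things get harder.
```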
00:04:43
Speaker
Where it is gonna get tripped up more often is actually logging into the CRM, having access to the CRM. I've used this example before, but if you work at a doctor's office, let's say it's a small office, maybe you have a mix of responsibilities that may include handling paperwork as the patient arrives, it may include taking the phone calls and scheduling the appointments.
00:05:08
Speaker
It may include getting, you know, height, weight and vitals when they get to the office. It may include some interfacing with billing after the fact. You know, a language model right now is not going to take somebody's blood pressure.
00:05:21
Speaker
So can it do the whole job? No. But could it take the call and schedule the appointment? Yes, almost for sure, with a little bit of affordance around access to the scheduling system and maybe a synthetic voice that it can speak in over the phone to you. But otherwise, yeah, for sure, it's going to be able to do that. Probably it already can.
AI in Healthcare: Capabilities and Limitations
00:05:43
Speaker
So there's a question about whether these models can do the core tasks in many industries. So I could imagine that being true for a doctor, for example, where we could say that the core task is diagnosing a patient. So you present with a list of symptoms. And from that, I've seen very impressive examples of GPT-4 generating a diagnosis.
00:06:05
Speaker
Same goes for drafting legal documents. Those are two huge parts of the economy, medicine and law. We could talk about consulting, with summarizing and presenting information in specific ways. But is there perhaps
00:06:23
Speaker
Are there legal protections around certain industries, such that the service the lawyer provides is in some sense access to his insurance? The service the doctor provides is access to perhaps some health insurance, or the stamp that says this is approved by a doctor. Are there legal moats around industries that will prevent
00:06:51
Speaker
AIs from doing the core tasks, even though they might be capable? I don't know. There certainly are legal moats. Will they prove resilient to what I expect to be the sort of incredible demand for direct access? I would kind of guess that the answer is no. But we may be headed for all sorts of interesting dynamics in terms of
00:07:19
Speaker
Start with medicine, for example. I've honestly been very impressed and pleasantly surprised by how forward-thinking some early adopters seem to be in the medical and also the legal space. If you'd asked me just totally a priori, how will people react? I would have said in a very hostile, zero-sum, protection-oriented sort of way. And it's been less that way, I would say, than I expected. You see this book that I keep referencing, The AI Revolution in Medicine,
00:07:49
Speaker
And they are not shying away from the fact that GPT-4 can do certain core tasks better than most doctors. They're very clear about that. They also then do pay appropriate attention to the failure modes. And for now, the recommendation is the best care
00:08:10
Speaker
is gonna be from the combination. And you've got all these questions about, like, well, what is the standard of care in medicine now? Is it even okay to not use an AI in the not too distant future? Or would that be a violation of the standard of care? Then if you do use the AI, what if it messes up? How much is on you? What is considered responsible use? We already have, at least in the US, this notion that
00:08:32
Speaker
standard of care is a reasonable defense for a doctor. If you did something and there was a bad outcome, hey, if it was standard of care and that's what a typical good doctor would have done, then, hey, things happen, it's not your fault. If you deviate from that and go freelancing and get too creative and something bad happens, then that is your fault.
00:08:53
Speaker
So that's going to probably get reconfigured. And then we've had the Google/WebMD debate for the last 15 years, where doctors don't always love it when their patient comes in having been on WebMD, because they feel like
00:09:08
Speaker
They already think they know what's what and maybe they do and maybe they don't and whatever. This is going to take that up a whole other level. And I think it's going to be hard to make the argument that people shouldn't have direct access if they don't have other access. I
Productivity Expectations of AI vs. Historical Innovations
00:09:25
Speaker
mean, I can totally see how like a hospital system or a medical practice
00:09:30
Speaker
is not going to want to be the ones that provide you direct access to AI and say, you know, have at it. Like, I don't think I'm going to get that from my doctor. The experience of going to a doctor likely gets augmented with, you know, AI or whatever, but they're not going to just like give me a chat service with no supervision because they have like their own, you know, standards to uphold and liabilities and all that kind of stuff.
00:09:54
Speaker
But I could see maybe an OpenAI makes it directly available to the public. I mean, it is today. They've got the mitigations in there, the "as an AI language model, I'm not a doctor, I can only help you so much." But you can still coax all that information out of it. I just tell it, I'm going to see my doctor tomorrow, and I want to be as prepared as I can be for that conversation.
00:10:17
Speaker
That gets me out of any of the concerns of, like, I can't help you with this, because now it has this notion that, right, you're going to go see the doctor, then I can help you prepare. You know, that seems low stakes, so it'll just do it. And if you don't have that, then you'll just have the black market, right? Because, we talked about this a little bit last time too, the Stable model family is out there.
00:10:39
Speaker
It's going to be fine-tuned. It's going to be deployed somewhere. Worst case scenario, it takes a VPN maybe. But you think about around the world: primary care is scarce even here in the US where I am, and it's extremely scarce in a lot of places. I don't see how
00:10:58
Speaker
It can really be contained, even if it's made illegal, in all honesty. So I don't know how that all shapes up, but it doesn't seem like there's any way to just put a lid on it, you know, and hope everybody kind of forgets about it. I just can't see any path to that.
00:11:16
Speaker
There seems to be a quite straightforward story here about how AI becomes part of core industries and is able to accomplish core tasks. We could imagine a lawyer drafting 30 documents in the time that he would normally take to draft one document on his own.
00:11:33
Speaker
Do you expect from this that worker productivity would rise? And the obvious answer there seems to be yes, right? But when we think about, you know, if I can send an email as opposed to a letter, we might expect worker productivity to rise. If I can use the internet, if I can use a smartphone, we might expect that I become more productive.
00:11:57
Speaker
But has worker productivity risen as much as we might expect with these previous innovations? And so, yeah, do you think AI will make us more productive? Yeah, if I had to give a one-word answer, I would say yes. Certainly, I feel that in my day-to-day work. You know, software development is undeniably accelerated by
00:12:21
Speaker
just the tools we have today, whether that's Copilot or I'm using Replit more and more now, which has some awesome products as well. How much faster would you say you're able to program with these AI assistant tools? I think measurement in this area is really hard in general and it really varies across subdomain. And I think probably like the productivity statistics will vary also across subdomains and even across individuals.
00:12:46
Speaker
And on top of that, I think we're going to maybe have like a different price regime that may make like traditional productivity measurements, like kind of not apply in exactly the same way, or like even GDP may not make a ton of sense in the not too distant future. I'm expecting a lot of consumer surplus. That's one of my like standard answers here is like if a doctor appointment today costs $100 and the GPT-4 version costs $1, then
00:13:17
Speaker
GDP could very easily go down even with like a lot more medical advice, quality medical advice perhaps being dispensed. So does that make somebody more productive or I don't know, it's all a little bit weird. And to answer your question directly, I've seen examples where it would save me an hour in a second. One example that came to mind was I was just manipulating some media files and I wanted to,
00:13:47
Speaker
convert one file type to another file type. Fairly standard operation, but I don't know how to do it. And you Google that stuff, it's hit or miss. You might find exactly what you're looking for, or you might find a hornet's nest of pain of just, why aren't these examples working for me? And all I want to do is a stupid, simple thing. So this is just Copilot. Just typing a comment: convert the file to the other file type.
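A hypothetical version of that interaction might look like the following; the file names are invented, and the ffmpeg invocation is just the standard one such a completion tends to produce:

```python
import subprocess

# convert the .mov file to .mp4        <- the comment you type
# ...and the kind of completion Copilot might offer:
subprocess.run(
    ["ffmpeg", "-i", "input.mov", "output.mp4"],  # ffmpeg infers formats
    check=True,  # raise if the conversion fails
)
```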
00:14:14
Speaker
Boom, there's my command, right? So those are the hour-to-second type things. What's that? That's a factor of 3,600, right? I mean, that's an incredible speedup. But then other times, certainly, you have these things where it's like, eh, it doesn't really help that much on this. Or there are kind of failure modes still, where, like, the library has been updated since the training data was cut off.
00:14:42
Speaker
It wasn't very easy to figure that out. Why isn't this working? It seems like it should be working. Oh, I see. It's because it's using an old version that's not the package that I have. So I've definitely seen kind of everything there. But I would say other things that I've seen, just my own experience, rough number, I don't know, twice as fast already. And I think that's only kind of picking up, especially with Replit extending this kind of stuff to
00:15:10
Speaker
the whole hosting environment, the whole file structure being generated right off the bat. Again, for different individuals. Ezra Klein had a great riff on this recently. I thought it was so smart where he said, if you went back to pre-internet and you imagined what the internet might do, you could tell two different stories. You could be like,
00:15:32
Speaker
It's gonna give us access to information like we've never had before. It's gonna lower all these barriers. You're gonna be able to find the answers to any question that you want online. You're gonna have instant communication around the globe with people, even like video, instant video conference. You would have had to get on a plane to go see somebody. Now you can do this. Imagine what that's gonna do for productivity. Oh my God. But then the other story is like,
00:15:55
Speaker
You're gonna have this thing that's gonna ping you every five seconds. You're never gonna have any quiet time to yourself. There's gonna be infinite bullshit competing for your amusement all the time. By the way, there's also gonna be tons of just entertaining conflict that you might get sucked into.
00:16:15
Speaker
So what's that going to do for productivity? And his take, which I think was pretty smart, was in the end, it seems like those almost canceled each other out for now. We've had roughly consistent productivity growth. We certainly both feel both of those dynamics, almost all of us do in our own lives. But you do see the outliers. You see people who are not bothered by the distraction or somehow managed to tune it out and just crush it.
AI's Effect on Employment and New Work Paradigms
00:16:41
Speaker
And then you see people who
00:16:43
Speaker
get totally lost in the distraction and don't do anything anymore. So I think AI is going to bring a similar pack of pros and cons. Amjad from Replit talks about the 1000X developer. There's the meme of the 10X developer, and he's like, we're entering the era of the 1000X developer, where the people that are best at using these tools are just going to blow anything that came before them away. But then the flip side is, if you can't figure it out, or if you get lost in your
00:17:13
Speaker
dynamically conjured metaverse that you are kind of text-to-3D-environmenting for yourself as you go into an AI Dungeon, choose-your-own-adventure, infinite-entertainment mode, then maybe productivity is not very good at all. And I really don't know. I think that's going to be just so
00:17:33
Speaker
I think it's everything everywhere all at once. Like that movie title has proven to be, I think, extremely prescient for AI because it always kind of seems to be, yeah, both of those kind of happen. So perhaps we'll see a divergence of people into the ultra-productive and the almost non-productive.
00:17:52
Speaker
Do you think we'll see mass unemployment as a result of these tools? This is something that whenever you bring up mass unemployment, thinking about perhaps 25% unemployment rate, economists start to talk about, well, we've always had people adapt. People have always been able to find different jobs. Would you say that this time it's different? With AI, it's different. I do think it's different.
00:18:19
Speaker
I don't know how the current, again, I don't know how the current measurements will ultimately end up looking. I've debated a little bit just via email with Robin Hanson on this and he loves to bet and I always kind of try to engage him on that level and he's on the bear case of like,
00:18:37
Speaker
Language models are not going to be that big of a deal. I'm obviously convinced they're going to be a big deal. But we still couldn't quite come to a bet because he was focusing on things like revenue, GDP growth. And I'm like, I just don't know how GDP is going to respond to some of these dynamics.
00:18:53
Speaker
So with employment, I think the same thing is probably true. I do think this time is different. I think it's Yuval Harari who has just a real simple paradigm for it. We talked about this the last time or something similar. We used to use muscles. Now we have machines that do what the muscles do. We don't use our muscles to survive. They're not like the core of productivity like they once were.
00:19:16
Speaker
but now it's the brain. But now we have something that can do a lot of the stuff that the brain can do, seemingly really closing in on the gaps. You know, there are certain things that we do that it can't, but boy, that gap seems to be shrinking awfully quickly. Where do we go next? We don't have another organ that is like the third tier. And even with things like emotion or whatever, I mean, that's obviously largely in the brain. And also GPT-4 is, like,
00:19:45
Speaker
pretty charismatic, you know, a pretty good conversationalist, pretty empathetic seemingly, in a lot of interactions at least it appears to be. So I don't think there's an obvious place for people to sort of graduate into. That said, people want to do stuff, people want fulfillment, and there are plenty of activities that are rewarding, and plenty of those might ultimately be somewhat economically geared as well. I could imagine a world where it's like,
00:20:16
Speaker
we all have this, you know, time luxury and we get to like pursue our, you know, passions and interests and make music and art and all that stuff. And, you know, that may not count as employment or you could say maybe it does count as employment. I can also see like highly bespoke local services being kind of a thing. Like people do these like murder mystery dinner parties. And, you know, that's like a,
00:20:45
Speaker
a cool experience that somebody really crafts for a local market. And I think there's a very beautiful future potentially there that is like, if things go well at all, it seems like we should be in a position where most people don't have to work to eat.
00:21:03
Speaker
So that's great. Will people choose to like, you know, live off a UBI and just like play piano all day or video games more likely? Or will they, you know, kind of have these like hybrid things that are sort of like for passion, but also make some money?
00:21:22
Speaker
I really don't know, but I do think this time is different. I think we're gonna ultimately need new paradigms, new categories, probably new measures. You know, today it's like you're employed, we've added underemployed, then we've even got people talking about being funemployed, right? So you're just taking time off and having some fun and being unemployed. I wouldn't be surprised if there's like eight gradations of that in the not too distant future, you know, where it's like,
00:21:49
Speaker
I sort of work, but not because I have to eat. And why do I even charge money for this? Well, because people that pay me seem to enjoy it more. You can imagine a lot of people saying, well, I tried offering this for free and my reviews were worse. Just by having a little charge, I get people that are actually more into the concept, and it's better for everyone that there's some exchange associated. Is that employment? I don't know.
00:22:17
Speaker
Yeah, so you mentioned we might move into creating art and music and so on. That seems a little ironic to me, perhaps, because we see AI models being highly capable of creating both music and art right now. But the idea of kind of bespoke local in-person services, that seems super interesting. This is a bit in the same frame that I've been thinking about these.
00:22:42
Speaker
these things. For example, we might move to jobs that seem ridiculous to us now, silly things like, come to the park and I'll teach you how to throw a frisbee in the right way. This might constitute a job. Do you expect us all to be able to be employed in that way? Not if we're still operating on a you-have-to-work-to-eat basis.
00:23:11
Speaker
I think there's only so many Frisbee coaches, you know, and the music part or the art part, you know, I took piano lessons as a kid. I'm not very talented. And I am in no danger of being paid to play the piano. But I do still have just enough talent to enjoy it. And if I had more time,
00:23:35
Speaker
I would, you know, just a couple of years ago when I did have a little bit more time, I would sit there and play the piano for a while. And it was like, I'm not good. I'm never going to be good. Nobody will ever pay to listen to me. But it's still fun for me. So I think there is something there. You know, I look at chess too and I kind of think,
00:23:54
Speaker
My understanding of chess, you can fact-check me on this, but obviously it used to be only humans that could play chess. Then it was like, oh, now computers have solved chess. Then there was a moment of, actually, human-computer teams are the best. And now I believe we've reached the point where that's no longer true, and computers are just straight-up best again.
00:24:13
Speaker
But people watch a ton of chess on Twitch or whatever the streaming platforms are. It's bigger than ever. So I do think there are some leading indicators of value there. At some level, this is the most obvious thing in the world.
00:24:37
Speaker
Certainly our ancestors or those that are more enlightened among us would be like, how could you have ever forgotten something so fundamental? But it does seem like value as we just intuitively understand it.
00:24:54
Speaker
and economic activity are significantly overlapping, but not the same thing. And we live in such a capitalist environment, and I'm generally one to celebrate the great achievements of capitalism, so don't take me as a hater there by any means. But it does seem like
00:25:13
Speaker
we've almost kind of forgotten that you can just play the piano just for you, nobody has to pay you, and that can be very rewarding. Or you can sit in the park and play chess, and you can get better at it. And even if a computer is better than you, then that can still be a rewarding way to spend your time. So I'm optimistic about people's ability to find good things to do, to spend their time in ways that are valuable to them, to have
00:25:40
Speaker
pleasurable experiences and good relationships and even just growth. I think that's a bit too often conflated with economic activity. I don't think that's inherently economic. And I don't think it's necessarily going to be the case that people are gonna get paid for that stuff. So I don't know, I've got a ton of uncertainty about it. What's the phrase? Like I'm a utility
Measuring Utility Beyond GDP in the AI Era
00:26:03
Speaker
optimist, but a revenue pessimist.
00:26:06
Speaker
I think is kind of how I would, again, in the sane scenario where we don't have a catastrophic outcome, then I think things could be really, really nice even if you're not collecting a check. Have you seen any interesting research into how to measure the utility part there in new ways? So not thinking in terms of GDP or employment rate or economic growth in the traditional sense.
00:26:33
Speaker
Are there any other numbers we could go to? Because it seems like in the end, we would like to see some numbers. We would like to be able to make bets, I think. And for that, we need something concrete. So have you come across anything interesting there? I mean, we have something like the Human Development Index, which is kind of like an attempt to extend GDP into thinking about healthcare, thinking about education and so on. But even that seems a little limited.
00:27:02
Speaker
Perhaps this is a question that AIs could solve for us, where we could ask people how much value they think they're getting from AI and then collect data from hundreds of millions of people.
00:27:12
Speaker
You know, there's also the, is it Bhutan that has the Gross National Happiness index type of thing? I don't know exactly how they report that, but something like that is definitely interesting. Leisure time would be another thing that I would expect to maybe be a leading indicator. Though I think leisure time will probably lag deployment a little while. For now, at least, I'm working harder than ever. I'm obsessed with this stuff, and
00:27:43
Speaker
You could also ask the question, how much of it is work? When I'm scrolling on Twitter and reading academic work, nobody's technically paying me for that. I'm paid for other things. Those are inputs to the job. I don't know. Again, it all gets blurry pretty quickly. But if this is going really well, then I do think we should expect to see something along the classic Keynes vision of
00:28:08
Speaker
more leisure, less work. And then you could also start to look at metrics which are tracked, like loneliness. We've got a loneliness epidemic. We should probably start to see some things like that reverse, I would hope. If people have more time, then they should hopefully spend some of that time with one another and connect in meaningful ways.
00:28:33
Speaker
So yeah, I think there's probably a, I don't know, I don't think there's a great answer to that, but maybe start to kind of triangulate and piece something together that will at least be meaningful. Do you think that if we succeed in avoiding catastrophe and we get to this luxurious situation in which people don't have to work in order to live, do you think we will have lost something that's important to us? And here I'm thinking about the,
00:29:00
Speaker
the pleasure, but also the deeper sense of meaning that people get from accomplishing something difficult, having a job, working at a goal, at a task for months at a time, and then finally succeeding. There's something deeply satisfying about that for a lot of people. My go-to example here is the lawyer who's gone through law school,
00:29:27
Speaker
blood, sweat, and tears, and now comes out on the other side and sees that GPT-4 is able to pass the bar exam. There's something, it's more than annoying. There's perhaps a bit of a loss of meaning going on there. Yeah, I certainly empathize with that. I saw this one TikTok where a doctor is sitting at the computer and using ChatGPT for the first time, and is basically saying exactly that, like,
00:29:56
Speaker
I spent years learning this stuff. Now this thing just spits it out. What the hell? And interestingly, on that TikTok too, the comments were all women saying, well, maybe GPT-4 will at least listen to me when I tell it about my problems. So that's whatever, a footnote, but an important one.
00:30:18
Speaker
So yeah, I guess I kind of come back to the timeframe on that.
Meaning and Control in the Age of AI
00:30:21
Speaker
It feels like there is the transition. That's one question to me. And then there's kind of the long term, like, if you imagine yourself being born into this regime, my best guess is, if you're born into this regime, you basically never have this problem. And you
00:30:38
Speaker
look back on people, you know, and kind of think that people were deluded in the past, you know, much like we may look at people who did like human sacrifice, you know, like you look at the Aztec or the Maya, you know, whatever human sacrifice, and you're like, yeah, they were very confused about like, how things work and like, what constitutes a good life. And, you know, so they did all these things, and it seemed great to them. But like, it's very clear to us now that that was wrong.
00:31:04
Speaker
I kind of think that, ultimately, it will in retrospect look mostly like a cope, that people told themselves all these stories about how work is meaningful.
00:31:16
Speaker
And that doesn't mean that there won't be meaning, right? Because again, learn to play the piano and challenge yourself. And it is hard. And I personally, even today with all of our economic activities swirling around us, I can find value in that. So I don't think if you imagine being born into this regime and you're like, hey, I just grew up in an environment where I never had to work to eat. And so I got to kind of figure out what my definition of meaning was.
00:31:44
Speaker
You know, I don't think we're gonna have a hard time finding something that fills that job-shaped hole in our hearts. But we've got to get there, right? And there's this intervening period. You know, when the factory leaves town,
00:32:01
Speaker
This has been one of the big revelations of the last 30 years, economically. We used to tell a story where, when the factory left town, well, everything will just adjust and GDP will grow, and something about, if we really need to, we can redistribute, but of course we never really do. And so for the machinist that used to have that job at that factory, there is no other job. And there is that loss of, I would argue, status
00:32:27
Speaker
as much as or more than meaning. And those are maybe conflated, obviously overlapping as well. But it's painful to lose status. It's painful to lose income, especially if there is, we're not yet in a post-scarcity world and your thing that you've invested in and your status, your self-conception, your income is all tied up with an activity and now the demand for that activity potentially drops or at least the rate that you can charge for it drops. I think that it's gonna be extremely painful to people and
00:32:57
Speaker
likely to cause a lot of conflict of all sorts, but I would say I think it's a generational thing. It's kind of my best guess right now.
00:33:06
Speaker
Perhaps, we could imagine a world in which we succeed in curing death. And then we think back on all of the stories we told ourselves about how death gives meaning to life and is a necessary part of a natural process and so on. You think the necessity of having a job could be somewhat in the same category? Where, kind of because we didn't have a choice for hundreds of thousands of years,
00:33:36
Speaker
we had to work, we had to die. In a sense, we made up stories that we told ourselves that were comforting but ultimately not reflective of anything real. That seems right to me. I would hate to think otherwise. You know, the flip side of it is just so unappealing that I can't really countenance it. The idea that, like,
00:33:58
Speaker
What, we're just gonna be bummed forever because we have no jobs? Like it's just an inherent thing that we can never overcome, that nobody wants to pay me to do anything? No, I still get to eat, but nobody wants to pay me to do anything, woe is me, my life has no meaning. I just don't buy that. It seems like that's a limited imagination, in my view.
00:34:17
Speaker
But perhaps, and now we've been talking about what we could categorize as luxury problems, if everything goes right. But perhaps this instinct for having a job is also an instinct for staying in control. So we might imagine that we want to be the ones making decisions. And we are not interested in handing over all of the important decision processes to AIs. Do you think there's something healthy in that instinct? Yeah.
00:34:45
Speaker
I mean, I think it's also to be managed and, you know, to be in balance, but yes, I do think you can be wrong on both sides of that question. I would say, again, I keep coming back to this medicine thing because I just read this whole book and, you know, it's extremely compelling. And I think we're not too far from a world where
00:35:07
Speaker
you probably can get more reliable medical advice from, like, a GPT-4 Med, if we imagine a slightly enhanced version, than you would from, let's say, three quarters of doctors, I don't know, 90% of doctors. I don't see it necessarily beating the very best doctor just yet. We talked about that last time. But in that scenario, if I'm the patient and I
00:35:37
Speaker
want the best decision for me, then I'm going to look at the doctor's impulse to want to make the decision pretty skeptically and say, I want the most likely right decision, and I really don't care who it comes from. And wherever we are in that chess curve, whether it's only the doctor can decide, or wait, the computer's better, or wait, it's the team, or wait, it's the computer again. I just want the best for me. That's kind of that.
00:36:03
Speaker
But I do see a lot of benefit also to some sort of, you could call it, I guess, precautionary principle or some just kind of instinctive, conservative impulse to say, hey, maybe it's not a great idea for us to just abdicate all responsibility and let the systems do everything. And that's gonna be very hard to balance, I think. I don't have a good vision for,
00:36:33
Speaker
what does medicine look like or what does really anything look like in a world where individual decisions are almost always a little bit more likely or maybe significantly more likely to be right when coming from an AI, but yet there's some threshold potentially or tipping point where you do that to the max and then
00:36:55
Speaker
You lose control of everything. You just have no idea how those dynamics are going to play out. Agents, which is another thing we kind of talked about last time, brings that, I think, very much to the fore. In the world as it exists today, if you gave me an agent that worked well,
00:37:11
Speaker
I would be very pleased. And I would happily be like, go research a new pair of running shoes for me and give me back three options. And then I would be like, go buy the one. I don't want to enter my credit card and deal with all that stuff if I can avoid it. So great. If I could have it go negotiate my cable bill on my behalf, that'd be sweet too. Go pretend you're going to quit.
00:37:34
Speaker
And tell them you're not renewing another month unless we get a certain price. You could probably get it. But that's before the world starts to adapt and everybody else has their agents. And the next thing you know, my cable company has its own agent. Now I've got agents talking to each other and there's some possibility for a
00:37:56
Speaker
weird equilibrium here. You know, I would hope it's not, I'm not a game theory guru by any means, but it seems like it could be some sort of Nash-like equilibrium, where everybody is incentivized: no matter what strategy the other side is playing, I'm incentivized to use my agent. And once you're using your agent, I can't defect. And how do you get out of that? It seems like everybody naturally just kind of puts more and more stuff on the agents.
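A stylized payoff table for that dynamic, with invented numbers chosen only so that deploying an agent dominates for both sides, shows the prisoner's-dilemma shape:

```python
# (my_payoff, their_payoff) for consumer vs. cable company,
# by whether each side deploys a negotiating agent. Numbers invented.
payoffs = {
    ("no agent", "no agent"): (0, 0),
    ("agent",    "no agent"): (10, -10),  # my agent wins concessions
    ("no agent", "agent"):    (-10, 10),  # their agent out-negotiates me
    ("agent",    "agent"):    (-2, -2),   # arms race: both eat agent costs
}

# Whatever the other side does, "agent" beats "no agent" for me...
for theirs in ("no agent", "agent"):
    assert payoffs[("agent", theirs)][0] > payoffs[("no agent", theirs)][0]
# ...so (agent, agent) is the equilibrium, even though both sides were
# better off in the (no agent, no agent) world.
```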
00:38:22
Speaker
But yeah, again, then it's like, well, what the hell happens after that? We really don't have any good model of that. So yes, I'm torn. I feel like
00:38:32
Speaker
I want my agent, but I don't want everybody to all rush to have an agent at the same time because I have no idea where that leads. And so what do we do? I don't know. I think it's very tough. You want someone who has gone through
Existential Risks and Philosophical Questions of AI
00:38:45
Speaker
the annoying process and the time consuming process of perhaps checking whether these agents are doing the right thing and whether they are acting in the way that you want them to act. There are so many tasks for which it would be
00:38:59
Speaker
much easier and much more pleasant in daily life to simply automate. I don't want to call the authorities when I have to do my taxes, as I recently did, talk to them for two hours, and then not really get my problem solved, and so on. There are so many of these types of daily annoyances that could be solved. But I do fear that if we outsource all of these tasks, we begin to lose our grasp of reality.
00:39:29
Speaker
If we're not in contact with reality, we probably are going to make worse decisions. This is perhaps a bit analogous to thinking about never doing math in your head. So always outsourcing all calculations to a calculator or never trying to navigate around in a city on your own and always looking at your smartphone for the map. We do perhaps lose some abilities that we could have had there.
00:39:59
Speaker
Yeah, I mean, a source that has definitely helped shape my thinking on this is Ajeya Cotra's, I hope I'm saying her name right. You are. She's been a guest on this podcast, and she's absolutely insightful on everything AI. Her piece on, I believe the title is,
00:40:18
Speaker
I might paraphrase slightly, but: without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover. And that's basically the story that she's telling, right? And Paul Christiano just talked about this actually on the Bankless podcast in the last few days as well, where it's like,
00:40:40
Speaker
In their sense of what AI takeover looks like, it's not the sudden, out-of-the-blue, nobody-saw-it-coming thing, but I think they both articulated it in compelling ways. Paul said, in his mind, it looks like AI is widely deployed, and then it's kind of like,
00:41:02
Speaker
Boy, if they were trying to kill us, they definitely could. So let's hope that they really are not. And in Ajeya's version, it's kind of like, you know, you have this increasingly AI-ified economy, everything increasingly managed by AI systems,
00:41:18
Speaker
everything kind of working okay, seemingly, but inscrutable. And then you sort of lose control of the system, potentially. Because, I mean, nobody can understand the economy today, and that's with people, where we at least have an intuitive sense of what one another value and how we're likely to act.
00:41:42
Speaker
We have a pretty good sense for how almost everybody's going to act under almost all circumstances. And we have nothing like that with AI. And yet, you know, it's not that hard to imagine a scenario where the delegation to AI has ramped up to such a degree that
00:41:58
Speaker
It's like that problem of understanding the economy becomes 100x more difficult in the AI economy version. So yeah, I don't know. It is tough. I do think there is definite wisdom to the notion that we want to maintain control. I mean, put as simply as that, who could argue? But how do we not slip down that
00:42:19
Speaker
that slope of just more and more delegation until such point as like, actually, we don't really know how this thing runs anymore. I don't know. That definitely seems like the downhill path. It definitely seems like the attractor. So I don't know how we avoid that. So we've been alluding to this notion of catastrophic outcome from AI. And this is, of course, something we at the Future of Life Institute care a lot about. We are worried about existential risk from AI or from AGI in particular.
00:42:49
Speaker
This is something that's been discussed a lot on this podcast, so we shouldn't go through making the case for against specifically here, but I'm interested in what you're hearing in the AI industry as it looks now. How are people reacting to the notion that
00:43:05
Speaker
further developments in AI could become catastrophic? It's definitely not a fringe position anymore. That, I would say, is some form of progress. The people I talk to, they're all over the map, honestly.
00:43:23
Speaker
Some are still pretty blithely unconcerned. I'm not worried. You know, these things don't have any innate desires like they're not going to, you know, why would they do that?
00:43:37
Speaker
You know, those people could be right. Richard Ngo from OpenAI gave the best answer I've heard to the question, why might this all just go perfectly smoothly and all this worry was totally misplaced? And he said, the best argument for that would be dogs. He said, we started with wolves.
00:43:57
Speaker
And we did stuff that seemed like it would make the wolves nicer and like us more. And gradually, over a long time, we got dogs, and it basically worked. Dogs aren't perfect, but they're way better than wolves, and they're genuinely friendly for the most part, most of them anyway. So maybe something like that could turn out to be right. Maybe it's all just easy like that. But honestly, I don't hear arguments
00:44:27
Speaker
that are compelling that it's going to all be fine, and here's why. That part always seems to be missing. Standard of evidence is often maybe what people are debating most, where they're like, I don't see any reason that would happen. You haven't convinced me that the bad thing would happen is often the position that people end up taking. So debating burden of proof.
00:44:50
Speaker
Yeah. And I mean, for me, it's pretty clear, just from the general survey results that we see, where it's like half of AI researchers think there's a 10% or higher chance of some catastrophe. I don't know how anybody can really dismiss that at this point. I don't understand on what burden-of-proof paradigm you would dismiss that. It just doesn't make a lot of sense to me.
00:45:16
Speaker
But, you know, yet that position does remain out there. You've got the e/acc meme space as well. I don't even know what to make of that. I don't think it's generally
00:45:31
Speaker
even necessarily serious. It kind of seems like a lot of it is somewhat just shitposting, to be honest. I think the steel man for effective accelerationism is the case of nuclear, for example, where we had
00:45:48
Speaker
We had a technology that could help us solve climate change, but we chose to perhaps over-regulate it. And now we can't really get out of that situation. And perhaps what the effective accelerationists are afraid of is that we will over-regulate AI and then miss out on all of the benefits. And then there's some theoretical justification thinking about thermodynamics and evolution and the kind of
00:46:17
Speaker
evolution, not just biologically, but cosmically. And I don't know about that, but I think that's the steel man for their position. Yeah, thank you for doing that. I think you did better than I probably could have. I think the cosmic evolution notion is honestly kind of interesting to me. I find that to be at least a philosophy worthy of consideration. You know how
00:46:45
Speaker
Eliezer once said, way back in his FAQ on the meaning of life, that the meaning of life is to create or become our descendants. And I thought, that's pretty interesting. It's incredibly dense, but you can unpack it in a lot of ways. And how different could those descendants be from us
00:47:07
Speaker
before I would feel like I don't care about them anymore or that like that has no value to me. I think I'm honestly probably a lot more open-minded than most people in that respect. I don't think that's gonna play with the public super well, but you know, I'm willing to entertain some far out ideas. So that one is at least like intriguing to me. Yeah, we can think about how our ancestors would be horrified by a lot of the things that we consider to be instances of moral progress today. So just,
00:47:36
Speaker
the equality of the sexes would probably be a difficult pill to swallow for someone in the 15th century. And, in a hypothetical scenario, we might look at our descendants in the same way. Yeah, I think that's very plausible. I mean, a key question there
00:48:01
Speaker
would seem to be, is there any qualitative experience or subjective sense of wellbeing that these descendants have? Obviously that's an unanswered question at this point. That's another one where I'm honestly quite surprised that people
00:48:18
Speaker
overwhelmingly seem to jump to the conclusion that current systems are not sentient or are not conscious or whatever. And I would reframe that personally slightly to say, I'm quite sure they're not conscious like you or I are conscious. That seems almost impossible. But could there be anything that it feels like to be GPT-4?
00:48:44
Speaker
I have no way of ruling that out personally. It still feels kind of unlikely, but I don't know where my own subjective experience really comes from. And I was told as a kid that animals didn't have subjective experience. And I took that at face value for a time. And now I'm like, how could anyone have ever thought that? So you look at GPT-4, you interact with GPT-4. You're like, on some level, it seems like it has something going on.
00:49:13
Speaker
Why am I so confident that there's nothing that it feels like to be that? I don't know. I would honestly be pleased in some sense if we could
00:49:23
Speaker
definitively show that GPT-4 has some feelings. Why would that be a good thing? Yeah, it would at least open up the possibility that descendants that might be very alien to us could still have value. If they feel nothing, it's hard for me to get over that. I'm going to have a real hard time feeling like things are good if nobody's home, you know, in the universe, so to speak. All of this might just be totally confused, but
00:49:48
Speaker
If we could say, hey, look, somebody made a discovery that has a unified theory of consciousness, and look, it applies, and we can measure it maybe in some way, or put a number on it or something. Humans have it at this level, and here's what dogs look like, and here's what a nematode looks like, and here's what GPT-4 looks like. And it's like, well,
00:50:05
Speaker
a very lopsided thing, perhaps, where it feels a lot of this kind of thing and feels almost nothing like this, but at least something. That would be interesting to me. I think also it would force us to confront a different set of challenges around what are we going to do with these things, and how fast should we deploy them, and into what environments, and just the more we can be thoughtful about that probably the better. So I don't know. I'm very open-minded on all that stuff.
00:50:31
Speaker
Yeah, I do think there's potentially something suspicious there. If you look at GPT-2 and then the evolution in capabilities to GPT-4, there's obviously a massive jump, but the underlying technology is pretty much the same. But I'm guessing that people would be more likely to ascribe some form of proto-consciousness to GPT-4, even though
00:50:54
Speaker
what's going on under the hood hasn't really changed. It's been scaled up, and we've seen emergent capabilities. I worry that we might be in a world pretty soon, actually, where we will be in a scenario like the movie Her, being able to talk to these models in natural language back and forth. And we won't think anymore about whether these models are conscious, simply because they've been engineered and designed
00:51:22
Speaker
to push all the right buttons for us so that we feel, of course, these models are conscious. I actually
Emergence and Risks of Advanced AI Capabilities
00:51:27
Speaker
worry a bit in the other direction. I agree that we shouldn't dogmatically rule out large language model consciousness, but I do worry about kind of over-ascribing consciousness too. Yeah, totally. You can be wrong on both ends. That's kind of true on all these things, it seems like. You know, when you look back at
00:51:47
Speaker
our whole conversation, right? It's all these major questions, and it seems like for almost all of these big questions, you could be wrong on either end of
00:51:56
Speaker
the spectrum. And that's true, I think, with the regulation and the nuclear thing too. I'm not that sympathetic to that argument, inasmuch as I think it's very true of nuclear. Like, I wish we were building more nuclear plants. I look at Germany shutting down their nuclear plants and I'm like, what are you doing, guys? Come on. It's just, you know, so frustrating, I can't wrap my head around it. So on that level, I do sympathize with at least the fear that, man, there's such
00:52:20
Speaker
promise here with this AI and let's not make that mistake. I'm totally with that. But it's also like, okay, but you can be wrong on both sides. You could over-regulate and stifle and foreclose on value prematurely, but you could definitely under-regulate and end up with, who knows what?
00:52:40
Speaker
And in the end, I think there is still, I mean, you gave, I think, a best attempt at an e/acc steel man, but I think that steel man still kind of relies on a straw man, which is the, well, you're just gonna over-regulate it so much that it's gonna be terrible and we're gonna miss out on all the value. And well, yeah, that could happen.
00:52:57
Speaker
But I don't know how you look at the situation and say, you know, we should just pump, you know, I mean, probably the most operative question right now, as far as I know, is should hyperscaling continue?
00:53:11
Speaker
from where we are, given what we know. I don't know how much compute was used on GPT-4. Obviously it was a lot. I think Sam Altman has recently said it was north of a hundred million dollars worth of spend. It's obviously super capable. It doesn't seem like it's hit a wall. You know, it seems like obviously you've got this kind of log scale where it's not going to be cheap, you know, to go another 10 to a hundred X. But if it was a hundred million, 10 to a hundred X would only be one to 10 billion.
00:53:39
Speaker
Google makes a billion dollars a week in profit. So that 10 billion is less than one quarter worth of profit for one company.
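Spelling out that back-of-envelope arithmetic; all figures are the rough ones quoted in the conversation, not verified numbers:

```python
gpt4_training_cost = 100e6            # "north of $100 million"
next_scaleup = (10, 100)              # another 10x to 100x
next_run_cost = [gpt4_training_cost * s for s in next_scaleup]
print(next_run_cost)                  # $1B to $10B

google_quarterly_profit = 1e9 * 13    # ~$1B/week over a 13-week quarter
print(next_run_cost[1] < google_quarterly_profit)  # True: $10B < ~$13B
```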
00:53:50
Speaker
So it's a lot, but it's not that much. I have actually been a bit surprised about how, in a relative sense, low these numbers are. So $100 million for companies of a size such as Google or Apple, Meta, and so on. This is not a large amount of money. And I imagine that
00:54:11
Speaker
A lot of companies right now are racing towards developing billion-dollar models, perhaps 10 billion-dollar models. And you see this as the crucial question, should we scale up as rapidly as we've done in the past? Yeah, I mean, I could be convinced that there's an even more important question, certainly. I don't think this is the final view by any means, but
00:54:37
Speaker
Yeah, that's kind of what I came to coming out of the red teaming. I wrote a report for OpenAI and the bottom lines for me were,
00:54:47
Speaker
I think this technology is awesome. I think it is likely to be transformative. I think the good of GPT-4 will dramatically outweigh the bad. There will be a lot of bad, but there will be even way more good. And I'm super excited about it. And I'm glad you guys are doing an extended safety review, but ultimately I do endorse the deployment. However,
00:55:10
Speaker
it does not seem that we have a robust way, in fact, it seems quite clear that we do not have a robust way, to predict, let alone control, what would happen in another 10 or 100X compute scale-up. So I don't think we should do that right now. Which is not to say never. Which kind of brings us back again to this notion of a pause. Is the pause the perfect thing? I don't think that, you know,
00:55:39
Speaker
even the sponsors of the letter would necessarily think it was the perfect thing. It seemed to me like it's somewhat of a consensus position, and what is the thing we can agree on is probably more of the binding constraint there than anyone's notion of perfect. But yeah, I don't know. You look at these emergent capabilities, that is the thing that worries me most. And to their credit, OpenAI has been quite forthcoming about this.
00:56:08
Speaker
I think it is so easy. I have my concerns around how they're approaching this. And I do kind of think, hey, I thought it was awesome that Sam Altman came out and said, yeah, we're not doing GPT-5 right now.
00:56:23
Speaker
I've heard some cynical takes on that as well. They're like, they're just waiting for hardware to come in. And you know, it was a convenient, kind of low-cost thing for him to say. But nevertheless, taking it at face value, I think that was very encouraging to me, that they're not immediately rushing to do the next one. And I think they've done a nice job of being forthcoming. In the technical report, the graph for me, the most important graph, is the one where they show multiple curves, right? They show the loss curve.
00:56:52
Speaker
And the case that they're kind of making is we're getting very good at predicting the loss curve. We made a tiny little model with this architecture and then 100 times bigger than that, and 100 times bigger than that, and 100 times bigger than that. And that was still only one 10,000th of GPT-4. But when we plotted that loss curve on this curve and fit to it, then boom, it goes right through the GPT-4 point and therefore we can predict loss. Sweet. Problem there, of course, is
00:57:20
Speaker
What's loss? That's some general measurement that aggregates all of these kind of next token predictions. And I think something we see there with the general loss of these models is that in specific domains, you will see spikes. So you will see suddenly GPT-style models are now able to accomplish some specific task. But if you generalize over all of the models, you see a more smooth loss curve.
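A toy version of that loss-prediction exercise: fit a power law through losses from small runs and extrapolate up. All numbers are invented; the real report fits runs trained with vastly less compute than GPT-4:

```python
import numpy as np

# Invented (compute, loss) points for a series of small training runs
compute = np.array([1e18, 1e19, 1e20, 1e21])
loss    = np.array([3.9,  3.3,  2.8,  2.4])

# A power law, loss = a * compute**b, is a straight line in log-log space
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)

def predict_loss(c):
    return np.exp(log_a) * c**b

# Extrapolate four orders of magnitude up, "GPT-4 scale" in this toy
print(predict_loss(1e25))  # ~1.25 here: a smooth aggregate number that
                           # says nothing about which specific
                           # capabilities spike along the way
```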
00:57:49
Speaker
Do you think that's right? Yeah. I mean, even in just the one training process of GPT-4. I mean, of course they're smart, right? So they understand this. And again, they've been forthcoming. So then they show two skills, or like specific narrow capabilities, one of which was success on a certain coding problem benchmark, and
00:58:12
Speaker
It's a little bit of a bumpier curve, but basically, they're able to fit the same curve and show that the tiny little model couldn't do the problems, and it got better. And more or less, they're able to predict the success rate that GPT-4 is going to have on these coding problems. So cool. You might think, well, great. Now we can solve that. We just extrapolate that out. We know how good it's going to be. Oh, but the next sentence is: some capabilities remain hard to predict.
00:58:40
Speaker
And then they show the hindsight bias graph, where basically they set up these little toy problems: you had a decision to make, you had certain information available at the time, you made a reasonable decision, but things went against you. Maybe you had a chance to make a bet with highly positive expected value, you took it, but you lost. And then the question is: did you make the right decision?
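The toy problem comes down to simple expected-value arithmetic. A hypothetical instance (the payoffs here are made up, not taken from the report):

```python
# Hypothetical payoffs: a bet that wins $200 with probability 0.6
# and loses $100 otherwise.
p_win, win_amount, loss_amount = 0.6, 200, -100
expected_value = p_win * win_amount + (1 - p_win) * loss_amount  # = +80

# A positive expected value means taking the bet was the right decision,
# even in the worlds where it happens to lose; judging it wrong after the
# fact is exactly the hindsight bias being tested.
print(f"expected value: {expected_value:+.0f}")
```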
00:59:05
Speaker
And this had previously been an example of inverse scaling, where the bigger models seemed to be getting worse on this problem: they understood that the outcome was bad, and so made the mistake of concluding, I shouldn't have done that. That's hindsight bias. But the punchline is, when you get to GPT-4,
00:59:28
Speaker
there is a spike to basically 100% correct on the hindsight problem. I'm by no means a pioneer in this. I don't know if you've had Neel Nanda on the show. We have, yeah. A huge fan of his work, and a huge fan of the grokking exploration in particular. It seems like
00:59:51
Speaker
something like that happened for the hindsight bias, where a new circuit kind of came online. There's a generalization away from a stochastic parrot, and to give
01:00:06
Speaker
credit to the historical understanding, that is what the small models do. But there seems to be some shift, some sort of phase change, some sort of moment of grokking. In the original grokking paper, it's modular addition, right? And the model goes from only being able to do the examples it has seen to being able to do all the examples. At that point, it's pretty hard to say that the model doesn't understand modular addition.
01:00:32
Speaker
I don't know what it would mean to understand it, if being able to do all the problems isn't in some sense understanding. Then they reverse-engineer it, and it turns out it's doing this Fourier-transform-based approach, extremely weird from a human standpoint. And yet it works. Not only does it work, but you can look at what it's doing and know that it's going to work reliably. That's amazing.
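For reference, here is a minimal sketch of the modular-addition setup from the grokking literature. The architecture and hyperparameters are illustrative guesses rather than the original paper's, and whether the delayed jump in held-out accuracy actually appears depends heavily on the weight decay, the data split, and how long you train.

```python
# Minimal sketch of the modular-addition grokking setup: train on half the
# addition table (a + b) mod P, watch held-out accuracy over a long run.
import torch
import torch.nn as nn

P = 97  # modulus for the task
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))  # all (a, b)
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
split = len(pairs) // 2  # train on half the table, hold out the rest
train_idx, test_idx = perm[:split], perm[split:]

class ModAddNet(nn.Module):
    def __init__(self, p, d=128):
        super().__init__()
        self.embed = nn.Embedding(p, d)
        self.mlp = nn.Sequential(nn.Linear(2 * d, 256), nn.ReLU(), nn.Linear(256, p))
    def forward(self, x):
        # x: (batch, 2) integer pairs -> (batch, p) logits
        return self.mlp(self.embed(x).flatten(1))

model = ModAddNet(P)
# Strong weight decay is the ingredient usually credited with grokking.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20001):  # grokking can take far more steps than this
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 2000 == 0:
        with torch.no_grad():
            acc = (model(pairs[test_idx]).argmax(-1) == labels[test_idx]).float().mean()
        print(f"step {step:6d}  train loss {loss.item():.4f}  held-out acc {acc:.3f}")
```

The signature of grokking is that training loss hits zero early while held-out accuracy stays near chance, then jumps to near-perfect much later.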
01:00:57
Speaker
We can't do that, obviously, with hindsight bias at the scale of GPT-4. But I'm guessing that something similar is happening there. And I just have no idea. And I don't think anybody else does. And I think OpenAI has been pretty straightforward about the fact that they don't know how many more of those grokking moments happen on the next 10x or 100x of compute.
01:01:19
Speaker
They don't know when they happen, they don't know which ones happen, and they don't know if there are extremely unpleasant surprises that may await. And you get back into the sort of Ajeya Cotra, Holden Karnofsky, Paul Christiano memeplex of: we're not fully reliable evaluators. So it seems pretty likely that at some point a model is going to grok that there's a difference between
01:01:49
Speaker
objective reality and what satisfies or elicits the highest feedback score from the human. And I really don't want to see that model come online in production until we have a much better understanding of what's going on. It might or might not happen in the next 10x, I don't know. I think we've got enough with GPT-4. Let's try to develop memes for this. One is: AI servants, not AI scientists. We have so much
01:02:19
Speaker
benefit right in front of us. Let's deploy it, let's figure it out, let's integrate it into society. We've got enough to chew on, in my mind, before we go to however much scale-up of compute and try to create PASTA, Holden Karnofsky's Process for Automating Scientific and Technological Advancement. If you know, you know.
01:02:40
Speaker
Yeah, which is in a sense, at least in some people's opinion, the pinnacle of human achievement: the ability to discover new knowledge and so on. There's something interesting about the fact that if we were able to automate that process, we would have reached an end state in which there would be nothing else to automate. It's the final thing that we would hand over to the AIs.
01:03:08
Speaker
You're talking, of course, about the problem of new capabilities arising in these models and the impossibility of predicting them. This is why you should probably never say the sentence "AIs will never be able to X," because you will be surprised by the next model. And it's also what potentially makes this so dangerous.
01:03:37
Speaker
Think, for instance, of the capability of situational awareness arising in a model, or of new ways for models to be deceptive around humans, and so on.
01:03:50
Speaker
We've covered a lot of topics here, and you said something interesting about the general uncertainty around all of them. I think if we are podcasting about AI, we are in a domain in which we will very quickly be proven wrong, and our whole conceptual schemes might be
01:04:11
Speaker
mistaken, because they are either outdated or we have created them so recently that we will have to adapt them. How do you think about how stable our opinions should be in this domain? I'm very modest in my opinions. I try to put things out there.
01:04:33
Speaker
If I have reasonable confidence about something that I think is pretty likely to happen in the short term, I'll try to be concrete about it, because it feels like you have to in order to give people something to grasp onto. If it's too hand-wavy, people just don't know what to make of it. But I always try to contextualize everything I'm saying. I'm pretty confident we're going to see AI agents start to work in 2023, in such a way that they can do your online grocery shopping.
01:05:03
Speaker
But I have no idea what happens in 2025, in the big picture: how these agents play out as they're all interacting with each other, what the dynamics of that are. So I try to channel both: here's what I think I'm confident on, and here's what I have radical uncertainty on. I try not to sugarcoat anything for people, because
01:05:32
Speaker
I think it's Ta-Nehisi Coates who says, I'm not here to give you hope, or words to that effect. And that's kind of how I feel too. I think I would be misleading people, or doing them a disservice, if I reassured them that whatever happens, it'll all be fine. I don't think that's obvious. And in very practical terms, I try to spend half of my time
01:05:57
Speaker
just understanding what's going on. I do have a job at Waymark, and I'm working with a couple of other companies as well, including a company called Athena, which is in the executive assistant space, helping them figure out how to become more productive. And I love it. Honestly, it's so much fun.
01:06:13
Speaker
I really do love the technology. Getting right down to where the rubber hits the road, where people and AIs are working together today, is so fascinating to me. But even to them I've said: I really don't know what the big picture is. I can tell you we definitely need to figure out how to adopt these tools. Is that going to be enough?
01:06:35
Speaker
Long-term, I have no idea. I really try not to sugarcoat anything, and I try to be as upfront as I can about the radical uncertainty it seems like we're facing. Fantastic. Nathan, thank you for coming on. It's been super interesting for me. Yeah, this has been a ton of fun. Great conversation. Thank you very much.