Introduction to AI and Long-term Concerns
00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Daniel Faggella. Daniel is the founder of Emerj Artificial Intelligence Research, which is a market research company that focuses on AI. Daniel, welcome to the podcast. Hey Gus, glad to be here.
00:00:17
Speaker
Fantastic. Okay, you have had conversations with interesting people and powerful people in AI. And one thing you've often noticed or noted is that some of these conversations seem to be too short term. How should we focus on the long term when it comes to AI?
00:00:35
Speaker
So some of my involvement is in the intergovernmental world — whether it's presenting about deepfakes at United Nations headquarters six years ago, or being at the aviation wing of the United Nations, ICAO, up in Montreal recently. I do a lot of work with the OECD, the World Bank, et cetera. So in that space in particular,
00:00:54
Speaker
I think the focus very much is on very, very near-term considerations of AI, around privacy and maybe children's data being used — things that, in my opinion, are absolutely without a doubt valuable and interesting to talk about, but don't capture the bigger picture.
00:01:10
Speaker
And certainly there's a place for it, but I think there's also a place for the grander existential concerns as well.
Stakeholder Beliefs and AI's Future Role
00:01:17
Speaker
And I think that it's still a little bit... it's getting easier. The UN has their committee now on AI. Of course, Jaan Tallinn is there. Jaan was recently on one of our podcasts here. And so the Secretary-General has thought about existential risk. But for the most part, we're really just thinking about
00:01:34
Speaker
how can this stuff support normal SDGs, concerns about privacy, et cetera? So my hope — and I certainly have some ideas here — is to be able to genuinely inch the conversation forward a little bit more towards: what's the world we're stepping into? Because my current contention is the following.
00:01:49
Speaker
My contention is that when we're in the room discussing these kinds of micro-tidbits about sort of AI policy — whether it's the UN, the OECD, etc. — there's this sort of shared understanding that we all want the same future. That is to say, everybody in the room, from Joscha Bach to Stuart Russell — you know, we were both in the room at the OECD's AI futures group — assumes
00:02:08
Speaker
that five years, 10 years, 20 years in the future, we're looking for the same exact thing: more or less happy hominids running the show, with machines as sort of our beautiful servants. And as it turns out, that's not the case. As it turns out, people have wildly divergent beliefs about sort of where the trajectory of intelligence and the role of humanity should be. And I think we must, pretty early on, squarely put on the table: where are we hoping to head? Because I think hiding that question will build up
00:02:37
Speaker
very inevitable conflicts around differing views about the ascension of posthuman intelligence versus sort of the supremacy of the eternal hominid kingdom. These are not things that should be pushed under the rug, but as of today, they are. Okay, makes sense.
AI Values and Goals Framework
00:02:51
Speaker
How would you categorize the end goals of different actors in AI? So we have the top AGI corporations like DeepMind and Anthropic and OpenAI.
00:03:02
Speaker
We have Meta, who's also a big player, and xAI. How would you categorize the values or the end goals of these different entities? Well, I think, as far as I can tell —
00:03:14
Speaker
It's easier for me to pin end goals to a human than to an organization, but I do think organizations have a feel and flavor. FLI has a feel and a flavor. Meta has a feel and a flavor, but I think it's a little bit easier to peg to individuals. Of course, the individuals running those organizations are going to have a disproportionate effect on the folks that work for them.
00:03:32
Speaker
But I think a little bit more in terms of individuals. I don't so much stratify it just by: do we want it to be open source, do we want it to be very controlled? That's one important kind of continuum. But I've got a way of thinking about this that is not the be-all end-all of these destinations. I kind of think about this in terms of destinations — not like an eternal destination, but a waypoint along the way. So if you and I were on a journey, there might be a gas station along the way. There might be a state we want to go through, or a place we'd like to arrive at on our way through our trajectory.
00:04:02
Speaker
We're all on a trajectory whether we want to be or not. Those sort of trajectory points I think about in a bit of a framework called the intelligence trajectory political matrix — people can just Google "intelligence trajectory political matrix" and it'll come up, or danfaggella.com slash itpm. It's kind of a grid, three by three. So across the top, you can imagine, Gus, we have three different sort of criteria,
00:04:24
Speaker
one of which would be preservation. These would be people who would say: I don't want anything that even starts tiptoeing beyond humanity as it stands, end of story. That's that group. There's another group that would be progression. So these people would say: well, it's got to have a human essence and it's really got to serve humanity, and that should probably be where it sort of ends. But maybe there is going to be transhumanism — but it's going to be very much human. And maybe there will be strong AI — but it'll serve, you know, explicitly hominid
00:04:51
Speaker
objectives. And then there's the ascension category. So: one, two, three. Ascension would sort of be a dear
00:04:57
Speaker
and ardent hope that intelligence and sort of potential itself would blast as far beyond hominids as we did beyond sea snails, and just kind of continue the grand ascension ever upward. So there's your one, two, three across the top. On the side: at the very top, we might imagine control; in the middle, cooperative; at the bottom, total laissez-faire, or freedom. And that would be the way that we would govern those things. And so what we notice — and again, I can't speak for a whole organization — but if you are
00:05:25
Speaker
in the AGI race — in other words, you're aiming to build the sand god, if you will; let me be playful with the terminology here, Gus — if you're in the race to build the sand god as that great final crescendo achievement, and you're not winning the race, and most people are not, generally you will lean towards the bottom right. So that would mean freedom and ascension.
00:05:43
Speaker
Because you don't want that much control. You're already out of the halls of power. You want as much flexibility to bust your way into those halls as you can. So if you're Marc Andreessen and you want to fund the next company that overtakes OpenAI, you damn well don't want any control coming into the mix, right? You really want to lean away from that. Or if you're in kind of the effective accelerationist camp — many of whom, again, are running kind of younger companies — you'd very much sort of be in that area. If you're Altman, I think you're OK to wear a suit
00:06:11
Speaker
and to go shake hands with the folks at the EU and other places and start to kind of solidify things. Maybe you want to be ascension-control or ascension-cooperative, or you're okay to be a little bit higher on the scale. So the way I see it, the intelligence trajectory political matrix is really just a gauge of where people's incentives naturally put them. And for some people's objectives, their incentives are really going to sort of bend them into a space automatically. But yeah, to answer your question, I think about it kind of more as, like, waypoints of where we want to head,
Understanding AI Actors and Goals
00:06:39
Speaker
future combinations of man and machine and I think about it in terms of those strata as kind of a starting point for discourse and conversation. How do you infer these goals? How do you decide what Altman believes or what any actor in AI believes based on their communication and based on their behavior? It seems sometimes these goals are pretty inscrutable and we can only guess. So how do we actually know?
00:07:03
Speaker
Absolutely. Well, the only way I've been able to do it is by getting people on the phone, right? So, I mean, I interviewed Bengio nine years ago, and so I had him back on and I said: where would you put yourself? And he, you know, talked with me for an hour about where he put himself and why. Jaan Tallinn more or less did the same thing. And neither of them — and I would never ask this — wants to put themselves with a pin and say: eternally, I sit here, right? The most a person could say is: I feel like I cluster in this area, and I feel like this is best for these reasons.
00:07:30
Speaker
And I think that's the most you can ask people as of today. You may learn and grow, as I have and many others have. Yoshua Bengio has certainly learned and grown. When I interviewed him nine years ago about AGI risk — I've been covering this stuff for 12 years — he more or less laughed it off. I mean, in the interview, I actually mentioned his survey response, which was basically: get out of here. That was more or less his survey response on risk. He's since shifted from that position to something slightly different, right? So, in my opinion,
00:07:58
Speaker
we can subtly infer from people's conversation sort of where they sit. For some people it's remarkably clear — they would have to absolutely eat their own words from the last three years in order to move to a different place. Like, you know, Andreessen is pretty clearly in a spot. Altman, I can't exactly speak for the man, but I feel like I have a hint. But I wouldn't have a confident hint unless I hopped on the horn with him and really asked him to lay out where he stands, and also his reasoning behind it.
00:08:23
Speaker
I would not be so bold as to prognosticate. I like to foster conversation, and that's what I tend to do — what I've been doing for these 12 years.
00:08:31
Speaker
What about conversations between people who are worried about AI risk and effective accelerationists who perhaps are not so worried or maybe they're worried in a different way. You wrote to me that you think sometimes these conversations go astray, maybe not for the best of reasons. And there's a potential for a better conversation between folks who are worried about AI risk and effective accelerationists.
00:08:57
Speaker
Absolutely. I gave a talk at Goertzel's Beneficial AGI event in Panama a couple months ago on exactly this, where we're starting to see this predictable hominid tribal dichotomy that we just love doing. We love wearing our colored face paint and hating the other tribes. We have so many neural grooves that
00:09:18
Speaker
just get right in there, you know what I mean? That's our natural state, right? It's just like: in-group, out-group, f these guys, these are my people — that's the way we like to play. I don't think it's wildly productive. My hope, my supposition, and what I try to foster through the media that I try to drum up here, is garnering a sense of: where precisely do you want to go? What would be a good or a beneficial
00:09:44
Speaker
future in 20 years, and what would be a horrible future in 20 years, for the combinations of man and machine — and why? And for the most part, I think when we can really hear somebody unpack — this is where I'd like to be in 20 years, I think it would be a beautiful future for man and machine, and this is why I think that way — then we can get down to individual suppositions of: okay, this is how risky they think this thing would be, or this is what they tend to believe politically, or,
00:10:10
Speaker
at the end of the day, we get all the way to the bedrock: this is how this person interprets the existential condition. There are some people who look around and they see suffering and death, and there is a deep, mostly subconscious portion of them that says: my human life, my spouse, my children — this is the sacred gem, period. All of this is fleeting. Humanity is the game. This is not about some other kind of life. This is about the beauty that is us. And I'd say that's existential. I'm going to be real frank with you guys:
00:10:40
Speaker
I don't think it's super rational, my brother.
00:10:43
Speaker
I think these are our predilections. I think this is nature and nurture, and there's a pachinko machine — ding, ding, ding, ding — and we end up with a decision and we feel like we've rationalized it. I think mostly it's already been baked by nature and nurture. Some people change their minds, thank God; I don't think most people do very well, though. And so we end up with these sort of pachinko-machine decisions anchored in our existential condition. Others look at the same fleeting state of things and they say: everything is in flux. All of this is going to pass away. We've bubbled up from other forms;
00:11:11
Speaker
we're a point on a line, you know, or a cloud of additional bubbling to other forms. By golly, it sure would be horrible if this just ended with us. And there's a heartfelt, genuine, poetic, deep, emotional sense of that being there, grounded in their existential condition. So my supposition is that if you just tweet at each other, mostly there's going to be accusations of the enemy being both stupid and evil.
00:11:34
Speaker
Two things: stupid and evil. Those are mostly the accusations — go ahead and look on Twitter and you can see if I'm right or not. But if you ask people: what is the combination of man and machine in 20 years you'd want to get to? Why would this be a good one, and why would this be a bad one? Why is that a place you'd like to get to, and this not a place you'd like to get to? Probably we're going to get to foundations where
00:11:54
Speaker
We don't have any certain agreement, but at least we don't have enemizing where they color their face blue and they're bad. Because I actually don't think that that's a super good dynamic when conflict involves AGI.
00:12:08
Speaker
You're worried about AI risk, but you also think that we have a moral duty, an eventual moral duty to create posthuman intelligence. So you're kind of in a middle position between people who are worried about AI risk and effective accelerationists. How do you manage that? What is the argument that we have this moral duty?
00:12:29
Speaker
Sure, sure. And again, I don't impose this on other people, right? It's not like this is my belief and you're stupid and evil if you don't have it, Gus. Imagine if I said that, though. I'm just a hominid, right? I was born; I have a certain nature and nurture; things seem to me probably the case and probably true. I've made sense of the world in my way, and I'll lay that out for you. I will not call it gospel.
Potential and Posthuman Intelligence
00:12:50
Speaker
I will not call you stupid or evil if you don't agree.
00:12:52
Speaker
But yeah, so my general supposition is — and this is going to get a little bit heady, Gus, but you've had much headier people than I on the show, so thank goodness you can handle it; I'm not that intimidating from a heady standpoint — there's a concept... we don't have to use Spinoza, but it's great, and I love to give the man credit. There's a concept of conatus.
00:13:12
Speaker
So Spinoza talks about this idea of the conatus. So things that are living — kind of organisms, or organizations — will have a conatus. A conatus is a core impetus to persist; that is to say, to not die.
00:13:24
Speaker
And in the pursuit of non-death, an entity will wield everything in its power to not die — and everything in its power is potentia. So when life bubbled up, there was a time, Gus, where life couldn't see. There was no sight, literally no senses. There were just chemical processes wiggling around, no senses to the outside world at all. They were just bumping into things, maybe — I have no idea; I'm not an expert on eukaryotes, unfortunately.
00:13:50
Speaker
That was the case. And then sight emerged, randomly. Then new kinds of locomotion emerged. And then eventually, you know, swimming and flying, Gus, for crying out loud. And then consciousness at some point — you know, a very fuzzy concept for us humans, but it seems pretty likely I have an internal movie. I'm currently assuming, Gus, that you're real and you have one too. I might be wrong about that, by the way. I'm certainly not sure about that. But I'm under the supposition right now that you do.
00:14:15
Speaker
And all of these things emerged from nothing. So a hard exoskeleton — that's potentia. Flying is potentia. The ability to express with words, as you and I are doing, or use tools, like we're doing with these fancy computers — this is potentia. Potentia bubbles up from nothing. All of the things that are our potentia right now once never existed. And here's another supposition for you: most potentia has yet to bubble up.
00:14:41
Speaker
You following me? So there was a time when there was only sight, and someone could have pumped the brakes and said: we've discovered it. We can now determine when there's light or dark. We've explored the state space of potentia. It's all here. Someone could have said that, Gus, and they would have been wrong.
00:14:57
Speaker
And my supposition is that potential expandeth and expandeth. And the idea as well, Gus, is if we look at everything that you value — and I'd love a counterargument; again, I'm not preaching to you like you need to convert, I'm just saying this is my reasoning; I'm just a guy with some existential takes — my supposition is that everything that you consider to be good... Maybe, Gus, you really like creative endeavor. Maybe you do oil painting or writing, or, you know, maybe interviewing is in some ways a creative endeavor for you — I happen to think some of what you do is kind of creative here — and maybe that's a value for you. Maybe romantic love is a value,
00:15:27
Speaker
maybe giving back to a community in some abstract way that's beyond yourself is a value for you.
00:15:34
Speaker
All of these things are only possible because your potential bubbled up to such a level that you could have goals like that. If you were born a sea snail, Gus, you wouldn't have those goals. And so my first argument here is that the good itself is to be explored as potential itself bubbles up. If all the things we consider worthy, lofty, and high are possible because our potential has expanded to the extent to which it has now, then by golly, it ought to continue. Second argument, Gus — I gotta lay it out for you. The second argument is about the torch of life, the torch of life itself.
00:16:04
Speaker
What we should suspect, Gus, is that potential has expanded in order to not perish, and that it is possible that in the galaxy, someone is playing with the black pieces. That is to say, there is another species bubbling about who we will one day encounter in some way, shape, or form, possibly in a way we can't imagine.
00:16:23
Speaker
And there are natural events that could occur, from solar flares to the eventual expansion of our sun in however many billions of years. There's asteroids; many things could end us — supervolcanoes or something; who knows how likely that is, I'm not giving odds. I'm just saying there's things natural, and there's things
00:16:41
Speaker
potentially volitional, foreseen and unforeseen, that at some point have a pretty good likelihood of coming in contact with us. If a small enough asteroid was to approach Earth now, we could actually shoot it down. Holy crap, Gus, we've got some potential. That's pretty good, Gus, I'm not going to lie. We've cured a lot of diseases — that's a lot of potential. Beautiful. We might have humans living on Mars in 20 years — Gus, that's a lot of potential. That's beautiful. But that's not the height of keeping the torch alive. We can keep the torch alive a little bit better as humans. But there may be entities that can
00:17:10
Speaker
enter other dimensions, or can, you know, be not only conscious and intelligent, but conscious and intelligent
00:17:18
Speaker
infinitely, blissfully beyond the status of man, and live in all kinds of environments and ecosystems and keep this earth torch lit. So if we stayed in an eternal hominid kingdom, not only would the good itself not be explored, but, by golly, someone plays with the black pieces, even if it's nature. And if we die because we stay weak and we don't expand our potential, I would consider that a poor outcome — let's say 1,000, 2,000, 2 million years from now. So those are my two arguments. Would love to hear your thoughts.
00:17:47
Speaker
Okay, got it. So summarizing: we need to eventually create posthuman intelligence, because if we don't, then we don't have the same potential to survive and flourish. And we don't have the same capabilities, or
00:18:03
Speaker
potential for developing a bunch of capabilities, that our post-human descendants would have had. And so it would be tragically limiting of our potential if we forever stayed human. Though — I just want to make sure — this doesn't imply that we should immediately race to build superintelligence? Not at all. And you could read 100 of my articles over the last decade; I have not argued for that.
00:18:31
Speaker
Okay, got it. Does it make sense to you to be a humanist? Where a humanist — a term that I just coined, let's say, or just re-coined — means a person who cares about the interests of humanity above the interests of AIs. Does that make sense? Is that a position you would be comfortable with? You kind of described before that maybe you're not so comfortable with it.
00:18:57
Speaker
Yeah, so it does not revolt me by any means. I am a human. The people who I love are also humans. You know, I love some other mammals like cats and stuff like that, but for the most part, you know, my most robust connections are with fellow hominids, who are as yet unaugmented beyond the augmentations you and I are using outside of our skulls. One thing to add to your summary: part of the argument is sort of this potential carrying the torch; the other is the
00:19:20
Speaker
potential to explore the good. And so I think there's a potential coldness that could be read into the way you described it — like, oh, he just wants us to be strong, some kind of terrible guy who thinks only the strong are good. There's a way you could interpret it that way. What I would say is: Gus, what do you value? Is it romantic love? Is it giving back to your community? Is it creativity? Is it exploring food — maybe you love cooking and there's a rich sensory experience? What I would say is: do you think
00:19:50
Speaker
that the entire state space of those good, worthy, and beautiful things has bloomed and opened up in its full luster — yes or no? And my suspicion is it would be very hard to keep a straight face and say: I believe we have reached the end of good, Daniel, and all of the highest goods have already been attained.
00:20:08
Speaker
I suspect that it would be a hard argument to make honestly. Now, some people would counter that — and again, I'm not here to tell anybody what to believe; I'm just a guy with a couple of arguments. But I would say there's the survival part, and there is also the exploration of the good itself. For the sake of keeping the torch lit and exploring the good itself, I think there is a pretty strong moral mandate to eventually make our way to a strong and sturdy continued blooming of potential itself. To your point about humanism:
00:20:34
Speaker
I totally understand the position. I think it's actually remarkably natural, and I would expect, Gus, the bulk of humanity to be humanist as you describe it. I don't suspect that that will bubble up from their virtue, Gus, just as I don't necessarily suspect my beliefs bubble up from my inherent delightful virtue. I think most people will be humanist because, Gus, it's really natural
00:20:56
Speaker
to selfishly just value your own kind — to be a speciesist. Let's call it that — I will say humanist, but another take here would be speciesist. To be a speciesist is pretty natural: I like things that are like me. Now, Gus, we used to have versions of that that we now look down on, but let's just say we continue that version to hominids, and right now that's an acceptable word. There's other -ist words that, I don't know, they'd shut your podcast down for — you know what I mean? — if you were the wrong kind of -ist. But right now, you could be a speciesist, and you can keep this podcast running.
00:21:25
Speaker
Most people I think will be. And so I would suspect it's a very understandable position. I think egoically it makes a ton of sense. And I think, honestly, there's a good argument of, hey, I don't know what this posthuman stuff is. It doesn't exist yet. Let me value what I have. And to that argument, Gus, I have no counter. I would say, yes, I'm not looking to destroy human value now, but this stuff is slightly hypothetical.
00:21:47
Speaker
I think it's extremely rational. I think it's ridiculous to suspect we've reached the peak. But I think we don't know when it's going to bubble up. Maybe AGI is in 100 years. Now, I don't think that's probably the case anymore, Gus. But maybe it is. Maybe brain augmentation is not going to take us very far in the near term. Let's not blow up what we have. That's far from where I want to stand.
00:22:07
Speaker
I'd rather just say, you know, let's figure out where we want to go and if we can get there, and let's understand that keeping the torch alive and growing the good is good — but let's not do it by violating what we have today. So to your point, would I be against humanists? Not at all. In fact, in the practical day-to-day, that's kind of what I am.
00:22:24
Speaker
I'm not asking to take the first thing that smells like strong AI, throw it on as many steroids as we can, cross our fingers, and if it all rips to shreds, Gus, who cares — I don't need a worthy successor, I'll take any kind of successor. That's not my position. So I guess I'm not against humanists — it's not my long-term perspective, but I
AI, Speciesism, and Self-interest
00:22:40
Speaker
understand it. And I think in the near term, it makes sense to me.
00:22:43
Speaker
So we could call this humanist or speciesist perspective biased, and we could talk about how we evolved in a way that makes us more likely to accept such a position and so on, and that there isn't actually any deep philosophical grounding for it. I do worry, or it makes me slightly uneasy to
00:23:01
Speaker
abandon this kind of humanism. It seems to me that if a civilization is not in some sense self-interested or biased towards its own values, wouldn't that make it more or less likely that it survives? If we took account of all the civilizations that, let's say, took to space and conquered some part of space, my bet would be that most of these civilizations would be self-interested and biased in a way.
00:23:30
Speaker
Yeah, look, I believe that the conatus is the king, Gus. I believe that all of the altruistic things that you do, and all the brutally selfish things that you do and that I do, are undergirded by a core impetus towards persisting — that the conatus is just the king, and that that will continue. But I do think that in a civilization, as maybe within an organism — I'm not a biologist; there's probably some great analogies at the micro scale that I'm unaware of —
00:23:57
Speaker
Balance feels pretty logical. So I think that if everybody in a civilization was like, I'm going to be an entrepreneur and it's going to be like the newest, craziest thing, no one's even thought of it yet. If it was every person to a man who only pursued that task, you would probably not have a civilization. Similarly,
00:24:17
Speaker
we have had civilizations that have been in ruts, right? Japan until, you know, Perry showed up; China until, you know, the Yuanmingyuan. Different civilizations have had their stagnant periods. But it is also possible to say: this is the way we eat, this is the way we drink, this is the way we dress, and nothing will ever change. Both of those are probably a rough game
00:24:37
Speaker
if you're looking to continue to persist and make progress in meaningful and important ways. I would say the humanist impulse, I think, is a rational one. Bengio put it pretty well, actually, at the end of my interview with him — the channel is called The Trajectory; it's findable on YouTube. Bengio's ending clip was, I thought, interesting.
00:24:57
Speaker
To the people who are really looking to go pedal to the metal — we don't know the consequences, but we just think it'll be great; let's just get AGI in the door from whatever side, whether it's the military side, the private side, robots, who cares, get the AGI — for him, it's: hey, I think we should have a little bit of empathy for entities now living, and kind of take that into consideration as we start to tinker with what's next. And then to the people who said only humans forever, even for a billion years, he had said: hey, it might make sense to have
00:25:26
Speaker
empathy even outside of our own form. So I'm not saying Bengio agrees with my position directly — all of this potential stuff is not coming out of his mouth; this is mine. But the way he ended that interview was, I thought, a pretty good take on the kind of balance we probably need as a society, and I hope that dialogue between those two parties can persist. That is kind of my reason for being: to foster a frank dialogue around where do we want to go and why. And let's not make the other camp evil. And let's see if we can do a thing called progress that's not
00:25:56
Speaker
going to result in Armageddon.
Balancing AI Progress With Empathy
00:25:59
Speaker
Yeah, earlier in the interview, we talked about the distinction between near-term and long-term issues, and how we could potentially focus more on the long term, even though both are important. I do wonder, however, whether that distinction even makes sense. It doesn't make sense to talk about the long term if your timelines to AGI are sufficiently short. And so maybe you believe that we will get
00:26:21
Speaker
AGI within 10 years or 15 years — suddenly, long-term issues are coming in the pretty near term. You're bringing up an excellent point. So when I first started doing polls on this stuff — again, the first ones were like a decade ago, but if I look at a second round, which was seven or eight years ago or something like that —
00:26:40
Speaker
Bach and Yampolskiy, a bunch of people you guys have had on the podcast, Hanson, and a whole bunch of specifically AI- and AGI-researcher-related folks — the consensus we got from 35 or so people that have given some thought to this, some of them hardcore AI academics, some of them not,
00:26:56
Speaker
was kind of in a similar ballpark to where Bostrom landed, where the 2060-to-2100 sort of space was where people were suspecting we'd get to that generally-more-capable-than-humans-at-pretty-much-all-the-things level. And so when I was at the UN talking about deepfakes, etc., it was definitely something I still wanted to discuss, but it did feel more long-term. I would concur with you now: I think these things are more short-term. And my hope is that the people who are chairing these groups within the OECD, within the United
00:27:25
Speaker
Nations, etc., will be able to look staunchly at how near-term this stuff is. Because I think the conversations between these multiple parties — who want a very different 20-year future and have a very different idea of what kind of risk vastly post-human intelligence creates — are something we do need now. So, Gus, I would actually completely concur with what you just said. I mean, I think the near and the long are mixed, and it's just time to talk about big futures right here, right now. I'm totally with you.
00:27:51
Speaker
And it might be a bet worth making. So I'm surrounded by people who have pretty short timelines, say in the 10 to 15 year range. And even if those people are wrong, there's still a bet worth making here. There's a bet around preparing for a future, even though we're not certain that that future is going to come about. I think what you're doing is valuable, even though of course no one knows what the future of intelligence is.
00:28:21
Speaker
I think we should talk about your notion of a worthy successor in a bit more depth. So we talked about why you would eventually want to create posthuman intelligence, but you wouldn't just want to hand over a bunch of power to an entity, to any old entity. You would want to hand it over to a worthy successor. So what is a worthy successor? What are the key metrics there?
00:28:47
Speaker
Yeah, and "worthy successor" is intentionally subjective. So I think you would define it differently than I would. Some people would say there is no worthy successor — the billion-year hominid kingdom, right? And again, I can't hate that crowd. I don't think that crowd's going to win out, whether they want to or not, but I think that force in society is perfectly good. But everybody will have a different opinion. Some people might say there is no such thing — you know, hominids forever. As I'm sure the sea snails said back in their day; I'm certain they thought exactly the same.
00:29:17
Speaker
So, the way I would define a worthy successor, for me, would be an entity to which — and all of this is defined in the article; if people Google Faggella, F-A-G-G-E-L-L-A, worthy successor, it's pretty simple to find. I tweet about this stuff all the time as well; I think that's probably where you and I got connected, or at least that's where I started seeing your work, on Twitter at first. "Worthy Successor" is an article with more depth. The basic idea is this:
00:29:40
Speaker
A worthy successor is an entity, an intelligence, to whom you would feel comfortable giving the keys to kind of determining the future, maybe instead of man — where you would feel like: you know what? Take the keys, pal. Take the keys. And so different people would have a different opinion here. For me, Gus — I'll just lay out my perspective, but we could unpack this for anybody with different values — for me, the continued blooming of the good, as I explained with potential,
00:30:07
Speaker
and a real ability to shepherd and carry on the torch of life, would be rather important. So I would bias myself towards an entity that has an internal movie playing in its head, as I happen to have, and as I have guessed through the course of our podcast you have going on in your head — but I don't know. So I would bias in the direction of an entity with that.
00:30:27
Speaker
But consciousness is notoriously, you know, flim-flam. And as of right now, I'm very disappointed, Gus — and I've been disappointed; I have articles about this that are eight years old — at how miserable it is that we've made so little progress on consciousness while AI has absolutely blasted off. Because, by golly, it feels consequential, from a utilitarian and other standpoints, to at least have an understanding of that if something else is going to populate the galaxy. So I feel quite disappointed there.
00:30:51
Speaker
But I would bias towards something conscious, something that was not aiming to optimize towards a single narrow goal. Paperclips is the classic example, but we can think about something else — an AI that just,
00:31:06
Speaker
I don't know, just converts things into solar panels and compute. Now, by the way, for an intelligence vastly beyond us, I don't think we have any good analogies, right? Monkeys would be like: oh, humans — they'll have even bigger bananas, right? They wouldn't even know what to think of us. So when I say solar panels, that's a silly hominid idea. They would be building something, I would presume, vastly more complex, likely beyond my imagination. But maybe it just optimized for, like,
00:31:28
Speaker
capturing energy in some sort of weird way, and doing that over and over, but in a way that doesn't really bloom potential outwards in any additional, more meaningful and useful way. That would seem to me to be an unworthy successor. And if it was also unconscious — that is to say, it was consuming stars, doing brilliant, cool things, traveling faster than light, whatever the heck it can do; I have no idea, maybe that'll never happen, maybe it will —
00:31:51
Speaker
but was totally unconscious and optimizing for something fettered, I would consider that a non-win, a non-worthy successor, because for me the blooming of the good and the keeping alive of the torch in a grand sense are very, very important things personally. The way I explain this in the article — and again, people can Google "worthy successor Faggella" if they want to — is I lay out four bullet points. I don't remember them all; I'm going to paraphrase them for you guys. But they are four bullet points that for me would be rather compelling, pretty convincing. Like, if there was an AI
00:32:21
Speaker
that could communicate with me in a very fluent, kind of human way, but in a way that made it very, very clear it knew more than I do — I would ask it questions and it could easily explain vast concepts that humans by themselves could never discover, but in a way that is intelligible to humans. If it could
00:32:38
Speaker
articulate to me sort of its goals and objectives and have this dialogue, that would be kind of heartening. If it could articulate its own rich sentient experience, and maybe, with invasive or non-invasive kind of devices, be able to tap into my brain and give me slices of the flavors of qualia that it experiences that I cannot.
00:32:57
Speaker
So: Daniel, you might want to try this flavor; this is some of what we've explored. And it could just rip me into something as wildly divergent from my experience as my experience is from that of, you know, a gerbil, right? A completely different idea space and sentient space, and things beyond kind of the current understanding of the mind — vastly beyond even what psychedelics do for some of us
00:33:18
Speaker
today. And so if it could say: hey, we experience all these things; here's some slices that are sort of interesting; this will give you an understanding of the panoply — and it could give me serving sizes and show me that space — I think that would be rather compelling. Also, if it could just manifest things out of thin air, so to speak — say, oh, you know, I'd like a spaceship, or I'd like a, you know, tuna tartare with, I don't know, a glass of wine on the side, whatever the case may be — with nanotech or technology beyond our conception, it could just manifest things —
00:33:46
Speaker
could just do such insane levels of magic that it matches how magical humanity is to lemurs — like mouse lemurs, small skulls — like insane degrees of magic. If those boxes were checked, I'd probably say: brother, to explore the good
00:34:06
Speaker
and to keep the lights on — because I don't know what kind of aliens are out there — take the keys. And my hope, Gus, is that we would be treated well. And I have some specific hopes for the individual instantiations of humanity. I do not wish for our violent attenuation. I think our attenuation is imminent, but I don't think it should be horrible. But probably, at that point, I would hand over the keys.
00:34:24
Speaker
Got it. You mentioned not optimizing hard for one objective. So we could think of this as kind of a slack in the utility function. So maybe a future superintelligence takes over, say, 90% of the galaxy, but it leaves 10% for humanity to keep our human activities going.
00:34:47
Speaker
I don't know exactly how to formalize that or how to set that up, but that would be something you would be for and that would be something that would make you more likely to hand over the keys, so to speak.
00:34:57
Speaker
Yeah, yeah — the purported treatment of humans, I think, you know, could be another one of those described things. I don't know how much we could trust it, per se. I think probably the creation of something vastly beyond us is going to... I'm not a big optimist on us having a great place after that. But in terms of wiggle room for the utility function, I'm going to go ahead and say I literally think that the conatus and the potentia are a better ontology than the utility function. I'm not going to impose it on you, but I'm just going to say I think there already is a utility function.
00:35:25
Speaker
I think it's in slime molds. I think it's in JP Morgan as an organization. I think it's in eukaryotes. And I think it is to persist. And it does that by wielding what it can to persist, which is potentia. Sometimes that potentia expands and new things bubble up. I think when we're saying, like, utility — it's like,
00:35:44
Speaker
this is a thing already. Spinoza's got, I think, terminology that applies as aptly to drastically post-human as to drastically pre-human intelligences. And so I wouldn't even say utility function; I would say it'll have a conatus, and its potentia will continue to bubble. And the continual bubbling is what brought us to us. If a eukaryote just said — cool, I want better eyesight, better eyesight, better eyesight, right? — whether that was the eukaryotes, or whatever level of development developed eyes; I don't effing know, somebody who's a biologist is listening in, I'm sure they're going to comment —
00:36:14
Speaker
That would have been a rough local maximum, if you will. And so I think the continual expansion of potential is a thing. Does that mean we'll get a portion of the universe to do our shebang? I don't know, Gus. I'm going to pass this back to you, but I want to frame it this way. Why would an entity do that? And I guess maybe it's because we've come up with an agreement with it. But of course, it could do whatever it wants. It doesn't have to put up with hominid agreements. Let's just say, Gus, that you said, well, Dan, it's because it respects us and wants us to be happy. I could say,
00:36:42
Speaker
I can think of a lot of ways of keeping individual instantiations of humans happy — much happier than we are today. Gus, we live like gods compared to generations in the past. Look at how rife depression and anxiety are. Look at the multiple wars we got going on, Gus. Jeepers, by golly. I'm not really sure that respect and well-being are optimally catered to for individual instantiations of human consciousness through the flawed effing vessel that we've been hurled into, Gus. I don't know about that. What do you think would allow it to give us a little playpen, if you will?
00:37:13
Speaker
Well, you could think about the successor problem — handing over power between kings in the old days, for example. Maybe you build up a tradition where, when you hand over power to the new king, you let the old king retire instead of violently taking over power.
00:37:33
Speaker
And then maybe you hope that you build up some form of tradition there. And this is of course extremely speculative, but we could imagine... It's super anthropomorphic, right? It is, but just to start somewhere, right? We could imagine a future extremely intelligent machine intelligence speculating about the existence of aliens, or the existence of
00:37:57
Speaker
further developments in machine intelligence that will make it irrelevant. And so perhaps there's some form of rational choice there, where you initiate a tradition of preserving your ancestors and not interfering with them — giving them something and not just ruling over them completely —
00:38:16
Speaker
in the hopes that, say, when this future machine intelligence meets the more advanced alien civilization, it will also be given some form of asylum, or some form of place of peace. I'm not against the idea or the hope, but I'll just put some ideas out there. You bring up great points. This is interesting. This is an interesting analogy — I've not thought about this exact one. I'd love to unpack this. So I suspect,
00:38:41
Speaker
and I could be wrong, that the reason any degree of cooperation between hominids has ever developed is that it was in the best interest of the hominids to develop those things. Some things were arbitrary and built momentum, but then we found a way for them to serve some of our needs, or whatever the case may be — and not always serve them well, by the way. There's plenty of dysfunctional societies that want to stay the way they are. Again, self-interest is perceived, right, Gus? Spinoza talks about adequate and inadequate ideas.
00:39:09
Speaker
So an inadequate idea would be: oh, I have an apple tree outside, and I won't have to spend any money on food if I only eat apples. And if I did that for two years and died, that would be because I had inadequate ideas about nutrition. Spinoza himself died from breathing in glass dust, because he was a lens grinder. He had inadequate ideas about microscopic stone entering his lungs and what that would do to him — coughing up blood and dying a painful and tragic death. He had inadequate ideas. So I'm not saying that our moral ideas are
00:39:38
Speaker
always adequate — they're not — because our self-interest is perceived self-interest. A cat that needs its medicine will still try to bite you if you try to drop the pill down its throat, if you've ever had that experience. So
00:39:48
Speaker
I suspect that if there have been those cordial handing off of the scepter routines, it's because the person who took the
Power Dynamics in AI Governance
00:39:57
Speaker
power — and also sort of was in that milieu and that society — believed that, probably, in general, it would behoove me to keep this thing going, because I don't know what the next 20 years are gonna hold and I don't wanna be torn into four parts by carriages, right? So I suspect that it was self-interest that conjured
00:40:17
Speaker
that — that it was the conatus that conjured those things. I don't know if something that's that drastically beyond us will have that sort of thing in its mind. And again, I would just say — here's a tough part; this is a real toughie for me — I've got an article called The Moral Singularity. There's an infographic in there which I wish we could have on screen or something.
00:40:40
Speaker
The idea is: if we are to suspect that hominids — that our survival and maybe our well-being — will be valued and maybe even encouraged by drastically post-human intelligences, we have to hope that, number one, if there's many such intelligences, they all at least somewhat buy into that. That's number one. Cross your fingers on that one, Gus.
00:41:00
Speaker
As they foom, so to speak, even if it's a slower foom, as far as I can tell, the developments in AI over the next 10 years are close enough to a damn foom for me, Gus. Some people pretend to be unimpressed. I'm pretty damn impressed. And I think if that continues, we're going to see some very quick developments. If we are to expect a fooming of
00:41:18
Speaker
potentia — of capability and abilities — probably there will be a fooming in ways of valuing things as well. We call these things ethics or morals. I'm going to take the hominid language out of it. I think if we want to be ice cold, we should talk about
00:41:34
Speaker
amoral things. The conatus is amoral. It's not selfish and bad; it is amoral. We invented morality. We like to cast it out into the world, and it ain't there, in my personal opinion. So I'm not saying that's good or bad — again, I'm saying it's amoral. So as it blooms,
00:41:49
Speaker
all of its iterations of how it values things — the new iteration, then the new iteration, then the new iteration — would all have to have one value that doesn't change. So we can imagine everything in flux. Think about how the brain has been molded and reshaped all the way from the first fish that crawled out of the water to you and I: its shape has changed, its preferences have changed, its nesting patterns have changed, its mating has changed; many failed branches and many successful branches bubbled up to you and I. Are we to expect that all of that bubbling would have one value —
00:42:18
Speaker
one unbroken value, Gus, that would always be: ensure the happiness of the monkeys? And even if that was the goal, Gus, there's plenty of ways to jack something into the back of the skull and make us pretty happy — you know what I mean? — or to upload us and make us pretty happy. So I consider it to be unbearably unreasonable to suspect that, through the fooming of ways of valuing things,
00:42:40
Speaker
an entity would always have one sturdy value stay put. Unfortunately, what this means for me, and I'd love to be argued out of this position, is that we should hope and plan for a very blissful attenuation. I think that when the baton changes hands, probably it's over for us. At least that's my current supposition. I'm not saying I relish it. I'm not saying I hope it to be violent. I have article upon article about how I would hope that that would end well for individual instantiations of humans so we can unpack it.
00:43:09
Speaker
but I'm not an optimist in that regard. Okay, let's return to terra firma for a minute, because this was a very speculative discussion, but an interesting one. What if we talk about power dynamics — say, power dynamics over the next 10 years? How do you see power changing? Which entities are going to lose power? Which entities are going to gain power? How is the relative power between governments and companies going to change, for example?
00:43:38
Speaker
There's a couple things I'll put on the table. There's like 30 things we could talk about here, but I'll put like two or three on the table, Gus, and then I'll let you run with whatever's interesting for you. I will trust your intuition and curiosity and your own interest here even more than my own in this regard, but I'll put some ideas out there. So one dynamic that I'm particularly concerned with at present is the sort of gung-ho arms race towards
00:44:04
Speaker
just hardcore AGI as soon as possible. It's almost comical. I hope, Gus — should I find the time, which probably won't happen; I happen to have a day job where we talk about AI in the business world, that's what Emerj does; people don't pay to hear me talk about this kind of stuff, Gus, it's about how it's going to affect banking and drug development and stuff like that. But I want to find a way to put together
00:44:23
Speaker
all the video clips of AGI- or AI-related founders saying, this will probably kill us, and also, we're really headed towards building it — in the same speech, right, in the same talk. And by the way, Gus, some people would say that's because they're bad people. And what I would say, Gus, is: they are conatus and potentia, like you and I. I would consider their self-interest to be naturally what they would pursue, as your self-interest and mine would be what we would pursue. Gus,
00:44:51
Speaker
if you were far enough along the chain to be in the running to birth the sand god — and if you didn't do it, you would be consumed by someone else's sand god —
00:44:59
Speaker
there's a very high chance — not a definite chance, but a very high chance — you would say: why not go pedal to the metal? But you and I, Gus, the little people — you and I, Gus — we sit here on the side with no ability to build a sand god. And of course, it is natural for us to say, by golly, these folks should slow down. So I'm not even claiming moral high ground. I'm not. I'm explicitly taking moral ground out of the mix totally.
00:45:23
Speaker
Total amorality. I think there's a problem with the incentives. It's not the people — oh, we should replace these AGI leaders and put the saints in their place, Gus, the people who are only selfless, who only care for humanity. Do you remember how OpenAI started, Gus? I do, yeah. Yeah. Do you remember how Anthropic started, Gus?
00:45:43
Speaker
Remember? I remember Gus. But what happens, Gus, is if you're Robespierre, you are a paragon of virtue. You will speak as a paragon of virtue. You will write as a paragon of virtue. But when you have the scepter or it is close at hand, you will do what the fuck it takes, Gus.
00:45:59
Speaker
And so that's literally — that's not bad people. Oh, it's those bad techies. No, that's humanity. This is an incentives problem. Law in the United States is not based on everybody being evil. It's just: ah, we got some tough, you know, incentives here, and we need to balance this out and make law. I believe, in the international discourse,
00:46:18
Speaker
there's going to have to be some degree of governance around destinations, or directions that we're permitted or not permitted to go in, because I suspect that the pedal-to-the-metal arms race is just where the incentives take us if we don't have that. So: if I don't build a sand god, someone else will, and it will devour me anyway.
00:46:36
Speaker
If I have a choice to be devoured by my own power or by someone else's, I'll take my own. There's a great poem about Sardanapalus, the purported sort of semi-mythical figure in sort of Greek and Persian legend, who, at his defeat — when he knew he was going to be defeated — piled all of his horses, concubines, and gold onto this giant bed and just, like, burned it all with him, in a show that he was the final victor,
00:47:02
Speaker
that it was his might and force — that you could charge his walls, but you would see the flexing of his will, that his people would still pile up and die for him, and you wouldn't even get the satisfaction. You want the gold? He just melted it and mixed it with bone. And so there's a certain thing, I think, in the masculine psyche of: if I gotta go out, I'm gonna do it on my own terms. And unfortunately, we see that in the school shooter world — Hitler — but I think that impetus exists in non-overtly,
00:47:31
Speaker
you know, evil folks. I think that's just a thing. If we gotta go out either by our own power, in our final blaze of glory — a final flex, if you will — or by someone else's power, then the natural conclusion is: this is gonna kill us, but you should join my team and build it first. I think this dynamic is one of the main things that international discourse should handle, and I absolutely lament and push back against the idea that it's because the techies are bad and they wanna build it — let's replace them with saints. If the saints were building the AGI — well, "if you put me in power,"
00:47:59
Speaker
said every Robespierre, "I would, of course, serve the people." I think that's wrong. I think we need the incentives in an international mode to be fixed. So, I'd be happy to unpack that, Gus. But as far as power dynamics go, I think the current power dynamics are wildly bending us towards danger. And so, how do you... Yeah, this is perhaps the biggest question: how do you fix the incentives? What is something you could do at the international governance level?
00:48:24
Speaker
I have some suppositions, and there's many people with smarter ideas — I'm not sitting here saying, oh, the world should take the Dan idea. There's an older article called The SDGs of Strong AI, where I unpack some of this, on Emerj — E-M-E-R-J dot com. There's another article on Emerj about uniting or fighting. But the basic idea, as far as I can tell, is the following. And by the way, I'm not optimistic, A, about this happening, necessarily. I'm also not optimistic, necessarily, B, about it working. But I think it has a higher likelihood of working — and by working, I mean
00:48:54
Speaker
not ending terribly for humanity, and being more likely to create a worthy successor. So I'm not saying, oh, this will clearly be the fix. There are so many issues with law and regulating things, and I'm not presenting this as a panacea by any means — I'm so glad to be running a business in America and not Europe. But at the same time, I see this as a totally different animal. I see this as a race to build the sand god. The urge of Sardanapalus is going to drive people to die by their own hand rather than someone else's. And that is a fucking horrible dynamic to
00:49:23
Speaker
be under right now. So my supposition is the following. There should be some pretty strong, resonant consensus that... and I don't think this will happen without, let's call it, an Independence Day-level event, where the alien intelligence lands and hurts humans in a certain way. Like, I don't think
00:49:39
Speaker
what I'm articulating is actually going to happen until there's a real impact, and I hope that's not too late. But I think there needs to be consensus globally around: okay, the trajectory of intelligence itself clearly is going to have impacts for us, and for whatever's after us, and our children, whatever the case may be. We should figure out what
00:50:00
Speaker
domains we want to carve out and go ahead and explore, ones that we're all kind of kosher with, that feel safe, feel good, like let's do diseases, let's do this, let's do that. And for the other domains, honestly, we need transparency on research, and ways of making sure that people are not going pedal to the metal on them.
00:50:15
Speaker
That is not easy, remotely. I'm not claiming any of this is a panacea. I would just say, if my alternative is everybody pedal to the metal, whoever gets there first, their sand god devours everyone, I think that's a much worse scenario. So I think a consensus around preferable and non-preferable futures on some level, and some degree of global governance and transparency around sort of the steering and direction of the trajectory of intelligence itself...
00:50:43
Speaker
That's far too much power for one person. It's scary to think about that being centralized. I just look at it, Gus, and I say: the incentive, not for evil people, but for just self-interested people, like you and I, in my opinion, is to build the sand god before someone else does, even if I know damn well it'll kill me.
00:51:00
Speaker
You can go back and listen to all the Altman quotes from before OpenAI was founded about the situation for humanity, and Ilya was pretty frank about this stuff, right? But it's like: hey, we gotta build it anyway. We gotta be the first ones to get there anyway, because if I gotta die by someone's hand, you know, whatever. So: international coordination on preferable and non-preferable futures, and some degree of steering and transparency around what that stuff is, to tweak our way towards whatever post-humanity is without a violent arms race. I'm not saying I like it, but it's better than the alternatives.
00:51:29
Speaker
What's the model for that international governance or international institution? Is it the UN and something like the Sustainable Development Goals, so that we coalesce around certain goals where we say: okay, we want to, perhaps, maximally explore what AI can do in medicine, what AI can do for poverty reduction, and so on, but there are certain areas where we've collectively decided that we don't want to go?
00:51:53
Speaker
Of course, there are a thousand problems with this approach, but is this the basic framework for how you would set up international governance?
00:52:03
Speaker
I suspect, and I hope, that the governance itself would be structured and articulated by folks smarter than me. I happen to have read a decent amount about the original founding of the UN, which I think was a pretty pivotal and interesting moment in the hominid ability to collaborate, which is very heartening, by the way, Gus; when I look forward, it's nice that we were able to do that. The League of Nations didn't work out, but when I lived in San Francisco, I deliberately lived within about two punts of a football of the building where all those initial negotiations happened and the UN was drawn up.
00:52:31
Speaker
I could quite easily see the U.N. being the logical locus for this conversation. I could see many arguments against that. My current intuition says, probably, given timelines, it makes sense. But I think there are probably really strong arguments that it should be another body, and I'm open to that as well. I'm not closed-minded around it.
00:52:48
Speaker
And as I've mentioned a hundred times here, I'm not even optimistic this is necessarily possible, or necessarily grand and fantastic. But it does feel vastly better than a race over who can put steroids under the first thing that smells like strong AI,
00:53:03
Speaker
as many steroids as possible, and just birth the sand god as fast as possible. So I only think it's a better alternative to that. I don't think it's easy. I don't even think it's likely. But I do think the UN is probably our closest correlate, and I would hope that smarter policy thinkers than me, who have a lot of precedent around nuclear weapons and chemical weapons and other kinds of soft governance and hard governance, would find a panoply of mechanisms that would be
00:53:26
Speaker
hopefully suited for this grand task. But it is the grand task, Gus: the determination of the trajectory of intelligence is the big game. I suspect this will at some point be understood to be much more important than the SDGs. Not that the SDGs don't matter, but at the end of the day, I think at some point people are going to understand this is actually bigger.
00:53:44
Speaker
Say that we stay on our current trajectory and we don't change anything radically: we have some regulation, but not much, and we are basically in a race towards developing the most advanced AI models possible. Who becomes more powerful in that situation? Do you think it's the companies aiming for the most intelligence possible? Or does perhaps everyone become more empowered, because open source wins out?
00:54:08
Speaker
Do you think that eventually the US government or another government will wake up to the power of this technology and train their own model in a gigantic training run? How does the power shift over the coming decades?
00:54:27
Speaker
Great question. There are going to be all kinds of breakthroughs and social dynamics that I couldn't possibly foresee. If I can claim expertise in anything, it's the current boots-on-the-ground impact of AI in major industries and big corporations. So these are just prognostications here. But here's what I would suspect, given our current trajectory.
00:54:44
Speaker
If there's any nation that's going to wake up and wield all of its might as one united effort towards building AGI, it's China. They have a much longer planning horizon than we do, because they have the emperor-for-life kind of thing (the country's named after the first emperor, after all) and they still have a good deal of that. And the private companies there are, to some degree,
00:55:07
Speaker
wielded as an arm of the CCP. And in my personal opinion, I think there's ample evidence to believe that. So I think if there's any nation that says "we want to do this," it's them. They've digitized the heck out of everything, social credit scores and beyond. They've got their hands in their tech companies, I would assume, vastly deeper than the CIA has its hands in US tech companies. That's a supposition, I don't know for sure, by the way; I could be wrong there. And their leadership seems to be a bit more overt about the AGI race being kind of the big game,
00:55:34
Speaker
you know, as opposed to the States. So if there's a nation that wields itself in that direction, I suspect it would be China, though it does feel as though they've fallen behind in the LLM race, from what I can gather. My current thought is that maybe it's two releases out. So ChatGPT was a big deal. It was a big deal to the point where, you know,
00:55:51
Speaker
people in the neighborhood might say, "Man, Dan, you do that AI stuff, dude, you seen this ChatGPT?" Like, people that didn't care about AI were suddenly talking about it. But when I go back home to my 4,000-person town, where my parents and twin brother still live, it's not even on the Richter scale, not even a small amount. A couple releases forward, we get to such a spooky degree of capability that anybody with brain cells can look at it and be like: oh, we have species dominance issues here. Anybody with brain cells.
00:56:20
Speaker
When we get a couple jumps forward, maybe it's going to require bipedal robots. I don't even think it will, but it will require some flex of capability that's like: what? I think when that smacks in, the political singularity occurs. The political singularity is when most people in most countries realize that actually the only questions that matter are who builds the sand god and what they do with it.
00:56:41
Speaker
When that occurs, when the populace, when the 4,000-person towns are all like, "I don't even care about who's doing the sewage system in Wakefield, Rhode Island, I care about this thing that could kill us all or build utopia," when that becomes the only political issue, I suspect there will be a commandeering of the great labs. Currently, OpenAI does not have an army, and if enough tanks rolled up,
00:57:04
Speaker
I hate to tell you, it's going to be a one-sided song and dance. However, I would suspect, and I really hope not to be proven right about this, because I hope this whole scenario doesn't happen, but I would suspect that Microsoft is
00:57:16
Speaker
really getting its data center footprint really wide and far out there. I think the Middle East is going to be a great place to start to settle in and train your AGI. I think really widening that compute footprint could be some baby steps to making sure that if things are commandeered at home, they could play the "I'm an innocent guy, I'm just running a company" card, pump everything over to Qatar, and claim some degree of sovereignty over there. Because again,
00:57:42
Speaker
if you could die by your own sand god, Gus, or by someone else's, I think most people are going to pull the Sardanapalus. They're going to pile the concubines on the pyre, and they're going to light it all on fire, Gus. That's my take. If the concubines are going to die with you anyway, I think that's the way most people are going to do it. And there's a chance that these folks would be able to wield
00:57:59
Speaker
maybe a couple of years or months of great power if they were the ones to build the sand god, too. There's a chance they'd even have a, let's call it, glory phase before maybe they themselves were consumed. I think the big companies are already thinking about how they're going to prevent commandeering, but I will tell you right now, Gus: when the political singularity occurs, I suspect commandeering will be in short order.
00:58:19
Speaker
What I hear you say is that the companies can get more powerful up to a point, and that point is when the man on the street realizes the power of this technology and kind of forces the hand of the government to intervene, or maybe a bit before that. I think, yeah, maybe a little bit before that. The DOD is so asleep at the wheel in many regards, but I think it's a question of when enough of that leadership gets it.
00:58:45
Speaker
If they understood what AGI is the way that Silicon Valley leadership understands it, I think the proxies for commandeering would already be clear. We'd see the chess pieces moving already, in a much more overt way than we do now, if they could understand what was going on. But they don't yet.
00:59:00
Speaker
What about open source? So open source is right now only a little bit behind the state of the art, if I understand things correctly. And it's in a certain sense uncontrollable. You can't roll the tanks up on open source. You can't commandeer open source, at least not to the same extent that you can commandeer OpenAI.
00:59:18
Speaker
How do you see that playing out? Great question. I suspect open source will continue to be a force. I currently see open source as a cloak that one would take on or take off as the situation dictates.
00:59:34
Speaker
So right now it's a differentiator for Zuckerberg, and there's kind of this accelerationist undercurrent within tech, where the kind of cool-guy energy is sort of with, you know, "let's go pedal to the metal." And I'm not even calling that wrong, right?
00:59:49
Speaker
Gus, if there's anything about this call I hope I don't do, it's moralize. I'm not. I'm just saying there's an energy right now, and you could amorally look to ride that energy. So I think that cloak was put on. I think for Mistral, it was a cloak that was put on. You remember how OpenAI was founded, Gus? Do you remember, Gus? So I think that was a cloak that they put on. And Gus, when it didn't serve them anymore, what did they do with that cloak, brother? Cast it to the wind, my friend. Not because they're bad people, but because incentives rule the world.
01:00:16
Speaker
And so when companies are building things for open source, I think that lasts for as long as it serves their interests, and then they just toggle to something else. But open source itself, I think, will be a strong force. I would say it will be extremely hard to roll the tanks up on open source, to your point. You could find where Hugging Face's headquarters is, but you can't chase all the code down on everybody's computers somewhere. And so that's a totally different story.
01:00:43
Speaker
I think there's going to be a much more ardent push against open source, and by many people who historically have very much been pro open source for their entire career. Bengio is a great example here, a guy I've known of for a decade in terms of interviews and following him on Facebook. He's not on Twitter, unfortunately, so for Bengio "tweets" you have to be on Facebook; an interesting fun fact for the audience here.
01:01:09
Speaker
But exploring his thoughts and blog and whatever, he's historically been pretty down with the idea of open science and open-source tech, et cetera, but he's now very much of the belief that we really should have certain things that are not open source. He himself admits that there's a capability challenge:
01:01:25
Speaker
do we wait until it's overtly dangerous and then say, okay, no more open source? Or do we try to be proactive? He doesn't have a great answer, and, you know, LeCun will poke fun at him for that. But I do think we'll see a lot more pushback against open source. I do suspect it will be a continued force. Where exactly it lands, I don't know, Gus, but I think there's going to be some pushback by some smart people.
01:01:45
Speaker
When you talk about open source and the incentives changing, does it mean, for example, that if Meta were to develop a state-of-the-art model, you don't think they would open source it? Do you think they're open sourcing their models because they're behind OpenAI? The dynamic is the key: if it behooves their perceived self-interest to toggle away from open source, they will. And if it behooves their interest to go back, they will.
01:02:12
Speaker
So yeah, do I think they wouldn't open source a state-of-the-art model if they developed one? If it behooves their interest, Gus, to develop a state-of-the-art model and keep it open source (extra cool-guy points, and maybe they keep some secret sauce in the back room, but they get the extra cool-guy points, they get more people developing on their tech, and that somehow helps stock price, adoption, whatever the case may be), they'll play it. And again, Gus, I'm not calling that wrong. I'm saying they will do what anybody with brain cells running a corporation and trying not to die, either economically or physically, would do.
01:02:41
Speaker
They're not bad, but they're going to take it on and off as their self-interest dictates, as I believe anyone else in their shoes would do. Why do you think Meta trains these models and then open sources them afterwards? What's the business case or the economic incentive for doing that?
01:03:00
Speaker
I mean, I think there's a certain revivified perception of Zuckerberg. If you'll recall, when he was first getting grilled in the Trump election stuff, there were a lot of memes about Zuckerberg that I never really thought were fair, by the way. I'm not sitting here to say Zuckerberg is the greatest and I like everything he does, or that Zuckerberg is terrible and I don't like him; I'm in neither of those camps. I'm just saying I thought a lot of it was undeserved. People were disappointed about Trump and they wanted a scapegoat, and they called Zuckerberg a lizard or a robot or whatever they called him.
01:03:30
Speaker
And he had this sort of perception of, you know, being wildly autistic and out of touch and, you know, feeble and whatever. And he's sort of changed his patterns of speech, he's changed his dress, he's gotten into jujitsu, and I've done a little bit of that stuff myself,
01:03:47
Speaker
got a black belt over a decade ago, never rolled with him though. And so there's this sort of perception shift. And I think there's also a perception shift around Meta: this is a whole new company becoming a whole new thing. The metaverse didn't take off on the first go, but I do think it's a very viable future for dominance. I think we're going to be living in primarily digital futures. If we get to persist, if the AGI war itself or AGI itself doesn't take us out, I think we're going to be living in increasingly virtual worlds; I don't think there's any doubt about that. I think he's ahead of the curve in a major way. He bought Oculus knowing that it's not a stupid bet.
01:04:16
Speaker
But I would see this as part of that perception management game, maybe part of the talent acquisition game. Hey, we're open source. We're kind of the good guys, right? You know, Google, what did Google do back in the day? Hey, we don't even work with defense. We're the good guys. Come work here. If you have purple hair, whoever they wanted to attract, right? A lot of people with purple hair that now they're firing. But yeah, like I think it behooves talent acquisition. It behooves maybe the data they can train on, right? If they have this cool
01:04:41
Speaker
open source stuff that everybody's kind of using and kind of building new apps on, maybe there's a way to sort of monetize and roll that in. Maybe there's good credibility and credence within the developer community; a lot of people got used to working with Nvidia, and that's really buoyed Nvidia, because they don't want to change, so they're not switching to other hardware. So I don't know all the plays exactly, but it's clearly not something they did out of anything other than self-interest. And again, that's not because they're bad. It's because they're logical and they're alive.
Drivers of AI Progress and Data Challenges
01:05:08
Speaker
OK, say we create a simple model of AI capability increases, or what drives those increases. We have compute, we have algorithmic progress, and we have training data. How would you rank the relative importance of those? What is it that's driving AI progress the most? Is it compute? Is it training data? Is it algorithmic improvements?
01:05:35
Speaker
Does it even make sense to talk about what's driving AI progress the most? Yeah, honestly, well, a few things. I'll be really frank with you. I'm not the best tactical guy for like, hey, does this, you know, technology with CUDA or this approach with Python or like, which of these is superior? My day-to-day work. So the reason that I'll speak for like a super regional bank or at a pharmaceutical conference or an AI conference is not because they're like, hey,
01:06:00
Speaker
you know how to build AI hardware, tell us how it works. It's more because, hey, you talk to the guys leading AI inside of Sanofi and GlaxoSmithKline, et cetera. Where are they spending money? Where are they seeing an ROI? Where's the current impact on workflows? Where's the future potential business advantage? These are my strengths. So I'm going to put a
01:06:17
Speaker
massive caveat there: I feel really comfortable in that world, and less comfortable where there's somebody you could interview that would be like, "Well, if these two things happen in our grid, it would be a phase shift." Those folks are smarter than I. I would say it seems quite clear that there is
01:06:32
Speaker
a minimum degree of sort of raw compute that you have to work with to be a player in this game. I don't think anybody's going to contend with that. I don't know if the sand god will be birthed by whoever has the most, but I will say it certainly behooves you to have the most if you have the choice. And I believe we're going to see so much compute come online, it's going to melt people's brains. They're going to be blown away by how much compute is going to come online.
01:06:53
Speaker
So that amount of compute is increasing year by year, right? So to be a player in the game, you have to increase your compute budget by orders of magnitude. Absolutely. Absolutely. So that feels secure. As for the developments and breakthroughs, those feel really tough to predict. I don't think there were many people that saw that, say, a DALL-E would occur before
01:07:18
Speaker
full autonomous driving, for example, right? Or defeating experts at the game of Go. These are things where we would have thought: okay, it'll do X, Y, or Z, but surely it won't do that. I think even the smart folks, smart folks being the Bengios, the LeCuns of the world, can't prognosticate well on what the big breakthroughs are going to be. People have their guesses. We're going to see what happens. I feel like we've got a lot of game left to play there.
01:07:43
Speaker
Who knows what wild and wacky algorithmic breakthroughs, or even hardware breakthroughs, are going to occur that supercharge this stuff at levels you couldn't have predicted and set it in a new direction, right? LLMs are good at a subset of things; LLMs are not AGI itself, right? But the next breakthrough may bubble up in another direction that's different from LLMs. That might take the whole set of use cases, and where the money goes, and all of that, in a new direction. So I feel like we've got a bit of a crapshoot on the algorithmic breakthrough side.
01:08:11
Speaker
If anybody out there listening can prognosticate about that, I'm sure you're betting in the stock market, I'm sure you're going to be fabulously wealthy, and I hope you'll take me on your jet. But I actually don't think you can prognosticate any better than a Bengio can, or somebody like that. So I don't have many guesses there. I would say that's a much less certain domain. Who knows how long it'll be? I don't think it'll be long, but we don't know what direction that'll come in. Compute: you need a certain threshold, bare minimum.
01:08:33
Speaker
Yeah, so compute is more of a game of trend extrapolation, and algorithmic progress, or say theoretical advances in AI, those are much more difficult to predict. It certainly seems that way, Gus. And I would also say, to your point, you brought up data, which we can't leave out. From the earliest days of AI startups seven, eight years ago, when I was in the Bay Area interviewing the folks at Floodgate and Accel and the other venture firms,
01:09:02
Speaker
the real dream here was a kind of data-dominance flywheel, where we could get more data in a specific area. Maybe it's insurance underwriting for auto insurance. Maybe it's aerial views of houses that have water damage on their roof, I don't know. But: we have some proprietary bundle of data that would make our product so much better and cheaper
01:09:19
Speaker
that more people would pay us, we could invest in more of that capability, and spin that flywheel. We could become the Amazon of it, the everything store where everybody spends their online money; the Amazon of XYZ, QRS. Everybody wants a monopoly. Peter Thiel made it cool; he made some pretty good points in that talk. And if you're running a business, I think you probably want one. So that was the dream off the jump. That is still very much the dream. But we've seen this strange thing where the LLM world is really just drinking in the internet,
01:09:44
Speaker
and, you know, your Midjourney and your DALL-E and your whatever. It's not like, oh, well, DALL-E is transcendently better at doing cars. Maybe it is, by the way; there might be some edge cases like that. But it's pretty clear that we're all working with the same slop here, more or less. Maybe not entirely, but more or less. The internet's got a lot of stuff, and that's what OpenAI trained on. And YouTube's got a lot of stuff, too. You remember the face? You remember that meme? When they interviewed the CTO and asked, like,
01:10:12
Speaker
"Was it trained on YouTube?" Awesome meme, awesome meme. Such an uncomfortable position to be in. Explain that for the audience or for the listeners. I didn't even watch the whole interview. By the way, I've never talked to Mira; I have nothing against Mira. If I was in her shoes, I probably would have done much worse than she did, by the way. But some interviewer, I don't know if it was 60 Minutes, I don't even know, was sort of asking, well, what was it trained on? She was like, well, you know, the open internet and these things. And they were like, was it trained on YouTube? And she just kind of looked to the side and was like, damn. Kind of knowing, like,
01:10:42
Speaker
how am I going to get around this one? But everybody's working with that slop. And again, I'm not demonizing OpenAI; I've made my position on moralizing rather clear here. Nothing but respect for Mira, a tough position to be in. So everybody's working with that slop. It does feel as though there will be a great advantage to embodied AI. You'll notice all the competition in bipedal robots now, right? It was Boston Dynamics right up the street here, and basically nobody else, maybe Toyota 20 years ago, right? But, like, get out of town.
01:11:12
Speaker
It was Boston Dynamics for the last decade, and, you know, maybe a couple of things in Virginia for these Atlas contests or whatnot. But get out of here: for the last 12 years looking at this stuff, that's been the only game in town. Now there are a lot more people playing that game. I suspect that when AI is doing
01:11:30
Speaker
simple manufacturing sort of work, or simple manual work, or what have you, and then is doing more complex work like plumbing or something like that, if you have the bots, if you have the most physically instantiated bots, then you could train
01:11:45
Speaker
the most generally dexterous bots to operate dexterously in the world, and you could spin that flywheel. Well, why wouldn't I want to buy Tesla's bot? I'm not saying they're going to win, but, you know, you can never write Musk off for sure. So maybe Tesla's bots are, like, 20% better in terms of general dexterity and capability. But if everybody buys more of them,
01:12:04
Speaker
and now they're training on more and more physical instantiations, they can make their dexterity and their task completion rates go up higher and higher. That's not the internet. OpenAI can't just scrape that from Wikipedia. Or, you know, whatever, Mistral, whoever else, can't just scrape it from Wikipedia.
01:12:19
Speaker
You've got to have the machines. So I think that in physical embodiment, we'll have some folks in drug development, manufacturing experimentation world. We'll have some folks in sort of heavy industry world. We'll have some folks in the vehicle world. I think that space will diverge because it's not one big slush bucket that we're competing over. So does the data matter? Yes. I think it matters a lot in the physical instantiations. Breakthroughs, I cannot predict them. Compute, you need a hell of a lot of it to have any chance of mattering whatsoever. And there's no doubt about that.
01:12:48
Speaker
I can't prognosticate more.
01:12:50
Speaker
So with proprietary data, or data that others do not have access to, like data from physical robots walking around and completing tasks in the world, is that fundamentally different than large language models? Where perhaps if you have access to physical data, or access to footage of robots operating in the world that others don't, well, you can charge more for your model. The question is,
01:13:18
Speaker
Is there a race to the bottom in LLMs because the training data is so readily available? And that's not there with physical robots.
01:13:27
Speaker
This is what I'm saying. I'm saying that I think that the general internet scrape game is huge and is going to continue to be huge. Access to all human knowledge about all topics, video especially, right? At some point, they're going to get very good at turning video into physics sort of simulations. By the way, there's already amazing work on that, right? And then how much can you learn from these YouTube videos? YouTube videos of people taking apart a carburetor.
01:13:49
Speaker
YouTube videos of people untying a slinky that got all in a knot or something like that. If you have enough videos of that, at some point you can actually get pretty far. So I think there's a transition into simulating physical worlds a little bit, but I don't think that's going to get us all the way there until we touch the ground, so to speak, atom to atom.
01:14:10
Speaker
But I do think there's more differentiation on the embodiment side. So I think there's going to be bipedal, work-at-home kind of robots. I think there's going to be the vehicle game. I think there's going to be the airborne game; there's folks like Shield AI, who we've had on the podcast, doing these autonomous pilots and raising a ton of money for that kind of stuff. I think that's going to diverge more than a commonly accessible slush bucket. And I think we will find more and more companies... I would suspect this would happen; I would be surprised if it didn't.
01:14:37
Speaker
We'll find more and more companies trying to carve out data that's not Internet accessible, but is really important for general capability. Maybe it's...
01:14:46
Speaker
Maybe it's information about drug development, I don't know. Maybe it's information about certain user and buyer patterns within insurance products or something like that, that for some weird reason is just not a publicly listed thing. There'll be companies collecting and aggregating that from multiple players and then selling it as some meta-source that people can train on. There'll become markets for this sort of "hey, you can't just scrape this stuff" data. There'll be more and more of that. But I do think,
01:15:10
Speaker
if you can dump it into an LLM, I think there is less general advantage value to it. I think the embodied value will be greater. At least for the next 5 to 10 years, my current supposition is that will be a place to differentiate more, and it won't just be bipedal robots. Like I said, it'll be many other things.
01:15:27
Speaker
And why is it that we can't train on YouTube videos or other videos and then get all the way there via simulation? Why is it that you think we have to connect with the actual physical world?
01:15:41
Speaker
Well, again, I think there are way smarter people here than I, right? I'm a business-model, market-impact, P&L, impact-of-AI-in-the-big-company guy. There are folks like Danny Lange, who was running AI for Amazon for a while, people like that who have been working on this stuff for decades and probably have a very firm opinion. My cursory, let's call it non-technical, understanding, which probably means I shouldn't talk about it,
01:16:01
Speaker
is that there is granularity and nuance to how this kind of robot actually would grasp this kind of an item, and exactly how it would affect the rotors when it turns over to a side, you know, with certain timing under certain temperature conditions, and all kinds of stuff that we can't atom-for-atom replicate.
01:16:19
Speaker
So we might be able to get general ideas and decide what experiments are worthwhile or which ones are not. But we've got to see if that stuff carries to the real world. And there's not 100% fidelity there. So my current understanding is that that is the state of affairs. But again, as a non-technical guy, that's about as far as I can go on
Big Tech's Role and Resource Control
01:16:35
Speaker
it. But hopefully that's of some use.
01:16:36
Speaker
What do you think are the main bottlenecks? We've talked about training data, and how there might be some capabilities that you can only get if you have the physical robots walking around and collecting data for you. What about energy use, for example? Could training in data centers be limited by enormous energy consumption? Absolutely. We could go on and on here, but I think, look:
01:17:02
Speaker
Where do you want to put your money? I'm not much of a betting man; my investments are very boring. My risky investment is called a business, which is a pretty risky thing, and everything else is in, like, really boring ETFs. So I'm not a guy that has, oh, I have this much Nvidia stock and I balance it exactly against Facebook in this way. I'm not even there, just because I want to spend my time, you know, running the business. But if you were a betting man,
01:17:22
Speaker
geez, you'd want to look at whoever's generating power, whoever's mining metal, whoever's building and running data centers, whoever's creating AI hardware that people are actually going to use. That latter caveat's a little bit tough, but AI hardware by itself is obviously huge.
01:17:37
Speaker
And you want to look at the giant tech companies that are going to wield the majority of this data in a huge panoply of applications beyond the scope of current technology. So that stack I just articulated, all the way from grinding nickel or gold out of the earth, all the way up: I think the dollars in the world are kind of going to go into that.
01:17:58
Speaker
Because the human experience, I think, is going to go more and more into virtual space, pretty much exclusively, unless we are, you know, ended by AGI or AGI-related conflict. And so things are going to become more and more virtual. This is the power game. The substrate monopoly, as I refer to it (it's a Google-able term, substrate monopoly), this is the power game: whoever owns the substrate that houses the majority of human experience and activity is likely to hold the substrate that has the strongest AI, and is likely to have the best running for kind of ruling the earth or birthing the sand god. So that's the substrate monopoly.
01:18:27
Speaker
Power is a huge limit. It would not surprise me at all, Gus, if either through proxies or in some direct way, we saw big tech beginning to kind of get into investing or getting more involved in
01:18:40
Speaker
energy itself, like securing sources of energy, not just relying on the grid that exists, because it's not going to sustain the data centers we need. Maybe even securing water for cooling. There are going to be a lot of bad PR effects around that, so they've got to be real careful about how they pull it off. It might not be good to stamp Microsoft's logo on it; it'd be great if you could have a consulting company do it for you or something. But you've got to secure some water, maybe. Maybe there are even going to be investments in companies that are mining ores,
01:19:04
Speaker
maybe there's even gonna be some substantial investment in that stuff. So I would expect we're gonna see these bigger companies put their tendrils deeper into the origin of where their advantage is gonna come from, which is gonna be power, and these materials, and the hardware that's gonna be made from those materials. So, are there advantages here? Yes, and I think big tech is absolutely gonna keep securing that stuff. I think the US is doing some reasonably smart things to kind of keep a bit of that advantage for the States versus China. But to your point, Gus, without a doubt.
01:19:34
Speaker
What we see is that for each state-of-the-art training run, we have to spend more money on compute, and more money on running the training run in general. And so we're not, I think, at a billion-dollar training run yet, but perhaps we soon will be. Do you think this game of creating state-of-the-art models will be limited by money, in that no entity has enough money to,
01:20:01
Speaker
say, train GPT-8? Yeah, look, there are folks that are really familiar with the physics of hardware and whether, you know, Moore's law is going to stop because of this, that, or the other thing. I can tell you right now, just from interviewing, if I just think about the last six months, AI leadership at, like, an SAP, a Salesforce, a Cruise (Cruise is the autonomous vehicle company; their head of AI is Hussein Mehanna), the CTO of
01:20:27
Speaker
DataRobot... if I look at a ton of these folks and figure out how they're thinking about compute moving forward:
01:20:33
Speaker
there's pretty clearly a demand for AI hardware that's going to sustain. There's a market and some efforts around trying to repurpose CPUs to run AI workloads. There's a lot of thought around how to rejigger existing data centers to be more, let's call it, AI-friendly per square foot, in terms of how much work we can get done with X number of racks and X number of hallways of said racks. I think there's a lot of effort and ingenuity
01:21:00
Speaker
around repurposing and inventing hardware, dealing with power in new ways, coming up with models that are just more energy efficient, where we can get kind of similar impacts, but it just costs a little bit less to train them in some regard. I think there's so much innovation bubbling on the sides that it's going to actually push against those cost pressures.
01:21:17
Speaker
Now, do I think there are going to be the billion-dollar, $10 billion models? My current estimate is probably. Probably there will be. But I think there are also a lot of forces pushing in the other direction, where, let's say, a model costs 100x more than any model trained today in dollar terms, but they're able to pack way more than 100x in there, in terms of however we want to measure
01:21:39
Speaker
the capability of the data used to train such a system. So I see counter-pressures that will keep it from becoming entirely cost-prohibitive, because there's so much ravenous innovation from the whole ecosystem pushing against that. Got it. Okay, so we've been talking a lot about capabilities and what might be limiting us there. What about in the domain of safety? We've talked about the governance side and what we could do. Do you have opinions on technical AI safety and what you would like to see there?
01:22:09
Speaker
I don't presently. I suspect if my world was not looking at workflow impact and P&L impact, and looking at that over time, over a decade, and it was instead "how do we write code now, how are we going to write code in the future," I would probably have much more technical suppositions. The current drum that I beat, as a guy that's in the intergovernmental space
01:22:32
Speaker
and in the sort of real-world impact space, is more about a necessary mandate for addressing, frankly, that there are divergent futures that are not going to get along in a friendly way; for addressing, frankly, internationally, that this stuff is a gigantic risk in the way that Bengio and Hinton say it is. And they're not 100% sure, but they feel like it's enough of one that we should think about how to pump the brakes,
01:22:55
Speaker
and that we should get on the same page about preferable and non-preferable futures and figure out some way of avoiding the brutal arms race that we're in right now. I couldn't say, well, we should do this oracle approach that Stuart Armstrong wrote about at the Future of Humanity Institute eight years ago or something like that. I've read some of these papers, but to really say which of them I would prefer, which would hold back a posthuman entity from its aims, I would be remiss to enter that domain.
01:23:23
Speaker
How do you balance the world of kind of near-term prediction that you're in as a day job, which you've talked about, and then these kind of wild speculations that we've been engaging in on this podcast?
01:23:35
Speaker
I assume that there's not much tolerance for talking about the emergence of the sand god if you're being paid to predict. And I'm being playful with that term; I don't have a shrine set up in my house or something. But yes, I use that term very playfully, though I think it's apt regardless.
01:23:54
Speaker
You are absolutely right. So here's an interesting fun fact, and I've written about this publicly and I've even said it publicly on our podcast at Emerj: I started this business under the supposition that there wouldn't be anything more important than posthuman intelligence within my lifetime.
01:24:09
Speaker
So when it was just a blog, this was it. Before it was Emerj, it was TechEmergence; we rebranded. Before that, it was just my personal blog. That was 12 damn years ago. And then maybe 10 years ago I was like, I should get a URL. I was running other businesses at the time. I didn't even know how I'd ever make money doing this stuff. It took me a long time to figure out the publishing and market research game, and to have my life purpose, really tracking and staying close to the developments of this important
01:24:34
Speaker
technology, be something that pays me. And as it turns out, Gus, you're right: they're paying me for stuff that affects the world today, not the future. But it's cool, because maybe I'm not smart enough to write the next new algorithm, but I can be the guy that talked to the heads of AI at Raytheon and GlaxoSmithKline and whatever, and that'll be enough to get me in the room. But I started this overtly because I wanted to be as close as I could to where the impact is starting to creep and crawl
01:24:57
Speaker
in different kinds of spaces and different geographies. And so I began this because I wanted to have something meaningful and maybe even useful to contribute to the much larger conversation. My whole team knows this: if I say "the cause," my entire team knows what the cause is. The cause is this coordination around a non-Armageddon, non-purely-arms-race AI future. But that is to say, I still can't carry that into what I do day to day all the time at Emerj.
01:25:25
Speaker
So every now and again on emerj.com, I will write about the farther future. I'll tie it to business value, but I'll write about the farther future; I sprinkle it in there. We've run some series about AI futures with folks like Wolfram and Bengio and Stuart Russell, folks that you guys have probably talked to many times. We've done a bit of that. I try to spice it in there, but it's not what they want most of the time. So how do I reconcile it? Well, I reconcile it with the fact that I would have never gotten invited into the UN or the OECD if I didn't have my boots on the ground on enterprise impact,
01:25:53
Speaker
if I didn't have a Rolodex in the Fortune 500. So I validate that it's probably served its purpose at least to some degree. And at some point when the big game becomes more important even to business leaders, I'll be well positioned to say, hey guys, here's what we might want to think about in terms of the future that we're creating. So I reconcile it by
01:26:11
Speaker
doing my day job, knowing that it is going to have its payoff, and then having my involvement in the intergovernmental world, and being grateful that I have something useful to say. Because I'm probably not smart enough to, like, fundamentally innovate in physics, but I can build a Rolodex and I can learn about where this stuff is affecting manufacturing.
01:26:30
Speaker
Yeah, so actually, let's end there. Let's talk about where is AI overhyped? Where is AI underhyped? Which industries are going to be transformed? At what time scales? How do you see the landscape of AI implementation?
AI Integration in Enterprises
01:26:45
Speaker
Being close to this for a decade with both the vendors, that is to say, the people selling into the enterprise and the enterprises themselves, I can say that it's been a pretty slow and clunky ride. It continues. The change management element of actually adopting artificial intelligence into workflows was drastically underestimated by the first wave of AI startups seven or eight years ago, drastically underestimated. They all thought they were going to
01:27:12
Speaker
totally overhaul insurance underwriting and overhaul the call center, you know, and just bring in AI and just overhaul it. And as it turns out, number one, they didn't know the market. And number two, the enterprises were not willing to do much more than light adjustments to current workflows. They were not willing to, you know, make wild and drastic changes to something so important for their business.
01:27:31
Speaker
We've seen this slow, creeping, crawling, gradual experimentation with AI. Some companies are "building a strategy" in a way that's just a pitch deck that died somewhere six years ago. Some companies have a strategy that's somewhat alive, because they have leadership with what we call executive AI fluency (another Google-able term, executive AI fluency). They have enough of that stuff to
01:27:54
Speaker
get not only the use cases and what AI can do conceptually, but how that might fit into their strategy. So there are some companies that have a bit of that, but it's been a slow, chug-along process. So I would not sit here and say, oh goodness, if you're a local bank in, you know, Sheboygan, it'll all be robots behind the counter tomorrow. We're not even close to that. In fact, my current supposition is kind of the opposite. I'll give you the big picture, Gus, and I'll let you drill down wherever you'd like to.
01:28:21
Speaker
My current supposition is we're very slow with adoption inside of legacy enterprise. And despite the massive power of Gen AI, it's going to be a bit until that stuff starts to really shake up current existing workflows in any industry, healthcare, retail, et cetera. My suspicion is that for legacy enterprise,
01:28:37
Speaker
two things are going to have to happen for them to really overhaul things. An overhaul means: let's build underwriting from the ground up. Let's talk banking and lending: let's build lending underwriting from the ground up, with AI in mind. In order for them to do that, they either need a fast-moving peer or competitor, or a fast-moving startup, to have eaten so much of their lunch that they are now in danger. So it is survival, the mother of invention, if you will.
01:29:03
Speaker
It will have to be the mother of invention herself who gets these folks to really start deploying this stuff seriously. That's going to be a while. Because that's going to be a while, I suspect they're going to remain rather reliant on these off-the-shelf things from the big providers like Google Cloud and Azure and stuff like that. They're going to be standing up on little stilts that Microsoft built and that Google built and Amazon built.
01:29:28
Speaker
and not standing on their own new and stronger legs, genuinely enabled by data and AI in a powerful way. Unfortunately, I think the trajectory for most enterprises, industry-agnostic here, is that they're going to become, let's call it, more and more of a vassal to big tech, given the current trajectory;
01:29:49
Speaker
unless and until their lunch is being eaten so thoroughly that they're in danger, they're not going to go about overhauling core processes. And that puts them, again, in a tough spot, where they're going to be more and more the vassal to the people who win the substrate monopoly, because the substrate monopoly is not going to be won by Johnson & Johnson.
01:30:06
Speaker
So seeing this gradual and incremental implementation of AI in the business world hasn't changed your mind on general AI timelines? Why, for example, not conclude that it would take 100 years for us to get superintelligence, if it takes 16 years, or whatever, 10 years, to implement or try to implement a new AI solution in some business?
01:30:30
Speaker
I think the wild, giant leaps in capability among Microsoft and the other big players are going to continue in leaps and bounds. I think new ventures are going to be spun up because of, you know, the great race for who can have the final flex, right? Who can have that two months or two years of sort of ruling the world before the machine consumes you. Or maybe it doesn't consume you; you just get to ride it like the God Emperor or whatever.
01:30:54
Speaker
The great race to that substrate monopoly is going to be seen as the only game in town. There will be a singularity for ambitious people and ambitious companies, where it's sort of: we're either controlling or creating this thing, or we're not even in the power game. So there are going to be more and more competitors there, and the players that are on the field today are going to keep driving forward more and more. I think even if all the other dinosaurs start to kind of crumble and fall apart, again, they'll be kind of propped up with
01:31:22
Speaker
stilts from big tech, who doesn't want to get involved in utilities and doesn't want to get involved in health care. But what if all the health care companies are falling apart? You know, I'm relatively optimistic that they'll be able to find a way to keep the band-aids and the duct tape strong enough to hold them together. But I see those major players as still really pushing things forward. My supposition was never that the sand god would be birthed in the basement at Wells Fargo. It was always that they would simply be the ones funding and helping the growth.
01:31:52
Speaker
But it really will be those core innovators that are driving it forward. So I happen to believe both things: for legacy enterprise, it's going to be a real tough game, it's going to be slow, a lot of them are going to go the way of the dinosaurs and not innovate when they have to, and they're going to increasingly become vassals to big tech. And big tech, I think, could be less than 10 years from birthing posthuman intelligence.
01:32:09
Speaker
Yeah, there's something funny about thinking about, you know, a bank trying to implement some AI solution while Microsoft or whoever is training an AGI, basically. So does this mean that the stock market is going to be driven increasingly by changes in big tech? That it's going to be more like the S&P 7, and not the S&P 500, that really matters?
01:32:35
Speaker
I believe this is the trajectory we're on. I believe that it may be the case that antitrust, as it was written in the era of Rockefeller himself, is woefully unsuited to what we're dealing with right now. It may also be the case that breaking up these companies is not the right move. I don't have an opinion on that. I'm not an expert actually on the pros and cons of antitrust law and how it applies to big tech. There's people that have written books about that. I'm sure you've interviewed some of them. But yes, I very much believe so. I think, again,
01:33:05
Speaker
get metals out of the earth: hard to differentiate how you do that, but it'll be valuable. Produce power: there are some ways to innovate there, and I think we will see some innovators in that department. AI hardware: well, there's a company that starts with an N whose valuation happens to be slightly higher than it was four years ago, Gus. And then the companies that are wielding that data to enable all this wild capability that's going to be swirling and swarming around us. I'm going to have a thousand agents executing on a thousand tasks, and writing articles, and publishing, and
01:33:33
Speaker
doing all kinds of things even beyond my current workflows, swimming in this immersive AI-generated world for education and entertainment, et cetera. We didn't even get into programmatically generated worlds for users. And I think we're all absolutely diving into virtual worlds; I don't think the physical world has a very long time left to satisfy human drives. As all of that occurs, again, those levels of the stack, I think, are going to be where all the money's at. And I think the most innovation will be with the folks that own the user relationship,
01:34:01
Speaker
who might literally say: yes, Nvidia, I depend on you, but Nvidia, you only get access to all this data because I'm Apple, and I'm piping it through these headsets. If I want to route all that data to somebody else's hardware, I will. The people most likely, in my opinion, to win the substrate monopoly will own the end-user relationship. They won't just be the people sitting there processing; they'll be the people that own the end-user relationship, because "I can route it to whatever data center I want." So I think the most innovation power is high on the stack, and the least is low on the stack, which is, you know, digging
01:34:30
Speaker
nickel or gold or platinum or what have you out of the ground. But that whole stack is, I think, increasingly where the cash is at. The data center world is obviously part of this as well; they're somewhere in the middle, below hardware in terms of how much innovation you can do. You get under the hood with these big data center companies, and it's the same game: you're hanging steel. You've got some ideas of how you can optimize for AI, but really, you're just optimizing for
01:34:52
Speaker
reliability and upkeep and access to power. There's not as much innovation there. No offense to those businesses; I've got a lot of respect for DataBank and Compass and the players that are in that space. But that stack is where the cash is. The highest innovation is in hardware, but the most, most, most is in who owns the end-user relationship.
01:35:11
Speaker
I don't think there's a current way off that train.
01:35:31
Speaker
And this also has something to do with attracting top talent, right? In order to attract top talent, you have to pay the most money, basically, and only a few companies can do that. What role does talent play here?
01:35:47
Speaker
It's incredibly important. And again, you see that top talent flocking most ardently to the top of the stack I just articulated, right? There are not a lot of people getting out of Carnegie Mellon with a PhD in anything and being burningly excited to get into nickel mining or building data centers. I think there are some; it's a non-zero number, for sure.
01:36:09
Speaker
But most of the talent is going to where the biggest pools of cash are and where the most ability to innovate is. And I think that's in hardware and then also in all the ways you can acquire and then leverage those user relationships and user data.
01:36:23
Speaker
So yes, those guys have the cash, the biggest war chests; they can afford the most talent. I think talent wants to work in those creative fields, and I think there's a lot of self-fulfilling stuff happening there. There will be some company that totally revivifies power in some interesting new way, with solar, with wind, with geothermal, and they will attract really sharp people in those fields who maybe don't want to work at Meta for whatever reason.
01:36:47
Speaker
Maybe they have a specialty; you know, maybe they're a geologist, and they don't want to work at Meta because they're not a data scientist in the traditional, "I build apps" sort of way; they have an active interest in something else. But yeah, talent is huge. I think it is flocking to the predictable places, and I think that will continue. I think we'll have some interesting players in data centers and power, maybe even some interesting players in mining; it wouldn't surprise me. But the majority of the innovation, the talent, and the war chests are going to be up that stack.
01:37:15
Speaker
Yeah, we can imagine disrupting that scenario or disrupting talent in a sense if you could begin to automate the role of an AI researcher, if you could begin to have AI agents that themselves make the process of researching AI faster. So I'm going to ask you about the current state of AI agents and what you think about those, and then perhaps also talk about where you see agents going in the future.
01:37:41
Speaker
Sure, this is a fun one. So what we've seen, let's just talk industry-agnostic, across the board and all the industries that we cover: lots of excitement about gen AI for the last 14 months, whatever it's been, 16, 18 months. Not a lot of actual deployments or traction in an enterprise context in a meaningful way, with two notable exceptions.
01:38:01
Speaker
One is in engineering. So people that are using this to write code, they're like, I don't want to, from scratch or from Stack Overflow, build the skeleton of this new dashboard that I need to build for my boss about tracking XYZ numbers. Let me conjure 80% of this and just see if I can debug it after the fact. We're seeing traction there, happening mostly in dark corners. Some companies have a mandate for how they can explore and experiment. For most people it's kind of in dark corners, but the traction
01:38:29
Speaker
seems relatively real. Even in companies that are boring, there's somebody doing something data related who doesn't want to spend two days copying and pasting through Stack Overflow. They just want to conjure it up. So that's one area. The second area is creative. And so if we look across, certainly retail e-commerce, but even down the line into retail banking and other more
01:38:50
Speaker
you know, let's call them stodgier sectors, the folks who are coming up with marketing banners and email subject lines and Google ad copy, et cetera, et cetera. These folks can split test, and there's wiggle room for error, right? Where there isn't much wiggle room is where people initially thought the traction would be, Gus:
01:39:08
Speaker
These agents are gonna be so powerful, wow, customer experience is clearly gonna be where all the traction is, right? So 16 months ago, that's what everybody at American Express or, I don't know, Walmart or what have you would have sort of initially suspected. But as it turns out, if you hallucinate with your user in terms of answering about refund policy, the consequences are way too steep. However, if you conjure up Google subject lines for your ads and a handful of them,
01:39:36
Speaker
frankly, kind of suck, Gus, but you have 60 of them that are great and maybe even 10 of them that are better than any human could write? That's a winning game, Gus. On time and money, that's a winning game. The consequences are not there. Same thing on code. It's like, hey, I can spin it up. If it's only giving me trash, maybe I'll wind it back. I'll ask it for something more modest and maybe only build
01:39:56
Speaker
three of the Lego parts instead of all 20 at once. But again, I can do that without an end user being impacted in some negative way. And so we've seen kind of the creative, let's say marketing-slash-creative, crowd pick this up across the enterprise, including legacy, and then the engineering side of the house pick this up, not that
AI Tools and Their Applications
01:40:14
Speaker
much traction elsewhere. So if you're going to ask me, what do I think of agents now? Well, I could tell you boots on the ground.
01:40:19
Speaker
I don't have a lot of credibility in how to build the next bit of AI hardware. I'm the wrong guy to talk to. But in terms of how many people in the Fortune 500 do you talk to that are using this stuff and trying to deploy this stuff and trying to convince their boss to use this stuff, and how many people do you talk to that are trying to sell that stuff in the enterprise? I got 1,000 something interviews in that category.
01:40:35
Speaker
way more than that for surveys. So from there, I feel rather confident those are two areas where we got not only capability, but an ability to embed and trust without negative consequence, which in the legacy enterprise world is very consequential. I do suspect that even if we froze current tech as it is today on the consumer side, so let's leave my world for a second, on the pure consumer side,
01:40:59
Speaker
I think we're not that far from having an agent where I'm brushing my teeth in the morning, I spit the water out, I walk over to the kitchen in my bathrobe, and I talk to whatever my AI agent is, and I ask for my groceries to be ordered, I ask for vacation plans to be emailed to me in a Google Doc, I ask for a check-in with grandma in the old folks' home, and there would be somebody that would call her, and a bunch of actions could be taking place. I think even if we freeze the tech today,
01:41:28
Speaker
we're only a few use-case iterations forward from having really generally capable consumer apps of that kind. But if we freeze the tech today, I think in the legacy enterprise, where the consequences are bigger, it will eventually bubble into the other domains, but it's really pretty well stuck in creative and engineering. So that's where current agents are at.
01:41:48
Speaker
Yeah, that's current agents. What about the notion of an AI researcher vastly accelerating the progress of AI research in sort of a feedback loop? Does that notion not hold water? I mean, that really requires reliability, and that requires multiple steps and going out on your own without supervision and so on.
01:42:08
Speaker
Even if we freeze the tech today, I would suspect that if we talked to many, probably not all, but many of the really, really smart folks at OpenAI or DeepMind or wherever we want to go, we have Dileep George from DeepMind coming on the show pretty soon, you talk to these folks, and I would guess, if I was like, hey, Dileep, people are trying to come up with new breakthroughs to solve the next level of protein folding or whatever issue they're working on now,
01:42:34
Speaker
I would be somewhat shocked if 0% of their iteration process involved a conversation with an LLM, a set of questions, an ability to search and conjure information faster than Google searching and copying and pasting into a document somewhere. I would be shocked if there isn't a bit of brain augmentation on a very light level
01:42:54
Speaker
with those tools. So on the ideation, hypothesis-testing, what's-been-done-before, information-retrieval side, I would suspect the most cutting-edge teams are already pretty well eating their own dog food in terms of using this tech. But to get to a place where an agent could, say, publish a paper, say, write a machine learning paper and publish something publishable, that's much beyond current capabilities.
01:43:24
Speaker
Yes, yes. If we published something, we would hope it wasn't just wild hallucinations. Although, Gus, unfortunately, a lot of what's published is already wild hallucinations. I come from the world of psychology originally, for grad school, and I can tell you there's a lot of hallucinating happening in that field. Social science is wacky. But in any science, I think we've got a lot of hallucinations from human beings, a lot of cooked-up data, you know, the whole Dan Ariely hubbub, he's going through that. There's a lot of stuff in that space.
01:43:48
Speaker
And yet, to be able to publish a paper that people would, even knowing it was AI, respect and maybe learn from, I think we're a little ways away from that. But to say it doesn't hold water, I wouldn't feel strongly about that. I think right now, maybe it doesn't hold water; I don't know if GPT-5 could somewhat easily do some levels of that. I suspect that when it comes to breakthroughs in hardware and software around AI,
01:44:14
Speaker
or in the sciences broadly, we're mostly going to have kind of the centaur model, the human and the horse together sort of deal, where we're going to be kind of combining the two. I think that will be the norm for a good bit. I don't foresee us in two years being like, man, OpenAI can just pretty much lay everybody off.
01:44:34
Speaker
Steering this stuff is done with volition and with intuition and with knowledge of the market and the boss's mandate. There's so much context in the steering and deciding around innovation that doesn't just exist in what you'd scrape off Wikipedia and put into a prompt. That context-rich ecosystem of the direction of innovation, the impact of innovation, I don't think is fully housed inside of these models just yet. But I think it will have credence relatively soon. And I think that
01:45:01
Speaker
fooming scenarios should not be written off laughingly. I think fooming scenarios seem, I'll disagree with Hanson, I guess, somewhat likely, in my opinion. I don't think that self-improvement is going to be impossible. Makes sense. Daniel, thanks for talking with me. It's been a real pleasure. Gus, hey, glad to be here.