
Facing Superintelligence (with Ben Goertzel)

Future of Life Institute Podcast

On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward.   

Timestamps:  

00:00:00 Preview and intro  

00:01:59 Thinking about AGI in the 1970s  

00:07:28 What's different about this AI boom?  

00:16:10 Former taboos about AGI 

00:19:53 AI research worth revisiting  

00:35:53 Will the first AGI be simple?  

00:48:49 Is alignment achievable?  

01:02:40 Benchmarks and economic impact  

01:15:23 Bottlenecks to superintelligence 

01:23:09 What should we do?

Transcript

Introduction to AGI and LLMs

00:00:00
Speaker
When I introduced the term AGI, the AI market was real, but AGI was beyond the pale. Everyone was laughing at us like, if this means adjusted gross income, no one will ever pay attention to you.
00:00:13
Speaker
Invention often happens by bricolage, cobbling things together to make something happen. And the most elegant, simple, Occam's razor way often comes later. While I'm not an optimist that LLMs can lead to human-level AGI, because I think they don't have creativity in a fundamental sense, I am an optimist that LLMs could take over like 95% of human jobs. I just think that 95% of human jobs can be done without fundamental creativity or inventiveness. Human value systems are complex, self-contradictory, incoherent, heterogeneous, always changing.
00:00:52
Speaker
They're evolving. The AGI's value system, I think, can be more coherent than human values, but will also be evolving. And you'd like there to be give and take and mutual information between the two.

Meet Ben Goertzel

00:01:05
Speaker
Welcome to the Future of Life Institute Podcast. My name is Gus Docker, and I'm here with Ben Goertzel, who is the CEO of SingularityNET. Ben, welcome to the podcast. Good to be here. Thanks for having me.
00:01:17
Speaker
You've been in the AI game for a long time. When did you get your PhD in AI again? My PhD was in '89, and it was in mathematics rather than AI, but I was in fact doing AI research as a grad student before I got my PhD. I probably wrote my first lines of attempted AI code in '79 or '80, well before college even. As you know, but many people don't, AI has been around for some time, right? I'm pretty old, and the AI field is even older than me.
00:01:59
Speaker
What first convinced you that we could see artificial general intelligence in your lifetime? It was probably 1973 or so.
00:02:10
Speaker
I found a book in the town library in Haddonfield, New Jersey, where we had moved recently. It was called The Prometheus Project, written by Princeton physicist Gerald Feinberg.
00:02:24
Speaker
And he laid out a fairly coherent argument that within the next few decades we would get machines smarter than people, strong nanotech to build machines out of molecules, and then be able to fix the human body so we didn't age and die. And he said this could be used for many purposes: it could be used for consciousness expansion, it could be used for mindless consumerism.
00:02:50
Speaker
He thought that should be decided democratically, and he was trying to get the UN to undertake some global education and voting process to decide to what end to put all these amazing new technologies that were in the wind. I'd seen these ideas in science fiction before that, but I was, I guess, age seven or eight, and seeing this scientist lay out these ideas in a non-fictional form was quite interesting. Of course, not having the web then, there was no way to contact other crazy people who also took this stuff so seriously. But the argument seemed quite credible to me.

Early AI Optimism and Challenges

00:03:43
Speaker
And then in the late seventies, when personal computers you could program at home started to become a thing, that seemed very much in line with this same vision that Feinberg had laid out, right? His book was basically The Singularity Is Near, published in 1968, right?
00:04:05
Speaker
And as I later found out, Valentin Turchin had published a book in Russia called The Phenomenon of Science around then as well, laying out
00:04:16
Speaker
basically the same ideas. So if you were paying attention, these concepts were always there. And honestly, they already seemed sensible, just given the advance of computers at that point, right?
00:04:34
Speaker
That doesn't seem obvious to me, given how primitive computers were in the early 70s. It doesn't seem obvious that you would be able to project from there that we would get something smarter than humans.
00:04:47
Speaker
I mean, I.J. Good wrote his paper on the intelligence explosion in '65, which is the year before I was born. And if you look back at the work of McCulloch and Pitts and Turing, all these guys in the 40s and 50s, it was already quite clear from at least a math and physics and biology view: brains can be viewed as computing devices, and we're building general-purpose computers, and they're getting faster and faster and can do more and more, step by step.
00:05:26
Speaker
Computers could beat the world champion at checkers in the late 1960s, right? You didn't get chess till the nineties. But already in the early 70s you had the early versions of what would become Mathematica and Maple. You had computer algebra. There was a lot of stuff. And historically, in the late 60s there was actually a lot of AI over-optimism, right? There were a lot of people at that time saying, wow, look at all this amazing stuff computers can do,
00:05:56
Speaker
so obviously they're going to overtake humans pretty soon. I think a lot of people were seeing it that way back then, because that happened to be before the series of AI winters and summers, and when I was a little kid first reading about this stuff, AI was feeling its first blush of excitement before people realized how difficult it was in some ways. But yeah, it wasn't obvious like it is now, right? Now, of course, everything has changed since ChatGPT, and average people are like, well, you mean the machine isn't smarter than people already? Are you sure? There's no shock value in the concept of superhuman intelligence
00:06:44
Speaker
at all now. It certainly wasn't like that in the early 70s. You had to pay attention and do your research and background reading and then think with an open mind. But on the other hand, it's not like I had to invent the idea on my own either. In a small-town library there was a paperback book by a Princeton physicist laying out the whole argument for super AGI and singularity, right? The ideas were there, and from sources that seemed reasonably mainstream and credible, right? It's just
00:07:20
Speaker
the way culture goes: it takes a long time for things to get from that stage to really becoming dominant. What feels different about the AI moment we're in right now? You could count that moment as starting from, say, 2012, when deep learning begins to really work.
00:07:41
Speaker
You could count it from 2022 when ChatGPT becomes something that everyone knows about. Does something feel different about this wave of AI from a technical point of view?
00:07:53
Speaker
Clearly the impact of scale has been the main factor.

Scaling in AI Development

00:08:01
Speaker
And while I don't really think LLMs or standard deep neural nets are the route to AGI, I do think the factor of increasing scale of compute and data, which has allowed the LLM revolution to happen, is the same primary underlying factor that will let AGI happen, right?
00:08:23
Speaker
So the AI field had a lot of good ideas for a long time. They were doing automated theorem proving in the late 60s when I was a little baby, right?
00:08:34
Speaker
John Holland invented evolutionary learning, genetic algorithms, in the 70s. There were multi-layer perceptrons in the late 60s and early 70s, which would now be called deep neural nets, right?
00:08:47
Speaker
This stuff has been around a while. But when I was teaching deep neural networks at the University of Western Australia in the mid-90s, when I was an academic earlier in my career, before I went to industry, we were doing multi-layer perceptrons with recurrent backpropagation, and it took like three hours on a fast Sun workstation to train a network with like 35 neurons, right?
00:09:12
Speaker
So it wasn't actually that the ideas were way, way off or something. It was mostly that without greater scale of compute, and then greater scale of data as a secondary point, you couldn't refine and tweak and adjust the ideas until they actually worked. You were just left with the initial conceptual version of things. So scale makes a huge difference. It means you can run experiments in minutes that would have taken literally years earlier in my career, right? So that's quite big.
00:09:57
Speaker
I think we're also now, as of the last six months or so, at the point where AI technology is aggressively accelerating the speed of creation of AI technology. You can't use LLMs to write AGI yet.
00:10:15
Speaker
They're bad at doing complex original thinking. But you can use them to generate unit tests. You can use them to take your rough notes and turn them into a structured paper for your colleagues. You can use them to write scripts, right? So I'm already concretely seeing, on a technical level, that stuff that would have taken me five days takes one day or something, right? So I think we're already at the point where AI is accelerating the creation

The Changing Perception of AGI

00:10:45
Speaker
of AI. And that came because of scale, right?
00:10:50
Speaker
But it's now a separate thing on its own. And I think that cultural, attitudinal aspect is important and is very different now also. When I introduced the term AGI around 2003, 2004, 2005, we introduced it as the title of a book we were editing. That book was finally published in 2005, I think, by Springer, just an edited bunch of papers called Artificial General Intelligence. But back then you really couldn't do a workshop or a session on human-level thinking machines at a normal AI conference, be it
00:11:36
Speaker
a neural net conference or AAAI, IJCAI, whatever. You talked about it in the bar afterwards, when you were talking about reincarnation and backwards time travel or something, right? So it was way out there.
00:11:53
Speaker
And that made a big difference. It meant that only really dedicated or really crazy people were going to work on AGI, because it was kind of career suicide, right? When I got my PhD in '89, AI as a whole was career suicide. I did my PhD in numerical analysis, in math, for that reason. Then when I wanted to switch to computer science, I did computer graphics, because there were no jobs in AI and graphics was also math-heavy, right?
00:12:25
Speaker
Now, by the early aughts, when I introduced the term AGI, the AI market was real. It was okay. There were industry jobs in AI, there were university jobs in AI, but AGI was still beyond the pale.
00:12:42
Speaker
Right. And I remember when we put that term out there, everyone was laughing at us like, if this means adjusted gross income, no one will ever pay attention to you. I've been sort of watching when AGI as artificial general intelligence will overtake adjusted gross income in the Google search ranking, right? It depends on how you count now; they're actually in competition. But at that time you couldn't
00:13:17
Speaker
talk about AGI even in an academic seminar at a top university or something. So the fact that the attitude has shifted so much now makes a big difference. It means it's feasible to get funding to do AGI projects. It's still very hard, because getting funding for anything is very hard, but it's no longer very, very, very hard. It's only very hard, right? And you can much more easily get bright young students who care about their career to plunge into AGI, whereas previously
00:13:57
Speaker
it just didn't seem like a practical thing to do. It's probably made the human constitution of the field less interesting, because when you had to fight and be a crazed maverick to pursue AGI, you had a lot of interesting characters who were thinking
00:14:17
Speaker
all day for decades about how to make thinking machines. And the first AGI conferences I organized, in 2006, 2008, 2009, and so on, were sort of like that. Now it's a morally acceptable thing to do and you can make money at it.
00:14:33
Speaker
But the change in attitude is important along with the technical aspects, I think. Yeah, something seems to have happened since ChatGPT went mainstream, where it is now basically mainstream to talk about AGI and perhaps even superintelligence.
00:14:54
Speaker
Almost all people only take seriously what they can put their hands on and see in front of them, right? And that includes political decision makers and CEOs and so forth. There are not that many people who will take something they can merely project and imagine as seriously as something they see in front of them, right?
00:15:16
Speaker
So, yeah, once you had ChatGPT there, it sounds like it's intelligent. It can do a lot of stuff that has the vibe of intelligence.
00:15:28
Speaker
And that definitely qualitatively convinced everyone, like, holy shit, AGI might really be near. It was interesting: I found that almost everyone, after they spent some number of hours playing with LLMs, could also see these are not AGI, and they could intuitively understand why, even without a technical background. But still, it's like wanting to convince people you can fly to the moon.
00:15:58
Speaker
Well, if you send a rocket up really high, so far you can't see it, and then it comes back down, more people are going to believe you could possibly go all the way to the moon, right? That's pretty simple to understand.

Humanity's Readiness for AGI

00:16:10
Speaker
I think we have shot ourselves in the foot, and I'm thinking of society at large here, by making AGI a topic that couldn't be discussed for a long time.
00:16:24
Speaker
What do you think we've lost from not having a longer conversation, and a conversation in society's prestigious venues, about AGI?
00:16:36
Speaker
I'm not all that sure that the conversations in prestigious venues are usually that valuable anyway. I feel like our species generally deals with things at the last minute and after the fact rather than in foresight.
00:16:53
Speaker
And when people are trying to figure something out in foresight, it becomes mostly a projection of their own ego or their own imagination onto the thing anyway. So I think by not taking AGI seriously, we're getting it later than we could have otherwise. I think we could have built human-level AGI some years ago. We could have built it on massively parallel hardware, which became less of a focus of the field a long time ago. So certainly if we had had a rational world government
00:17:30
Speaker
in 1970, right, and that government had said, let's develop safe, beneficial AGI as a priority of our species, which is what Feinberg was promoting in The Prometheus Project, I think we would have had AGI well before now. The resources just weren't put on it.
00:17:49
Speaker
In terms of thinking through the sort of ethical, social, political, human implications of AGI, right?
00:18:01
Speaker
I mean, we haven't done a good job of thinking through things like disarmament or combating world hunger, a lot of other much more basic stuff. We're still struggling with trans athletes in sports competitions or something, right?
00:18:21
Speaker
So I guess if this idea had been taken more seriously in the human population at large, you would have had more diverse creativity and more different ways of thinking popping up regarding the theme. But I'm
00:18:43
Speaker
not currently all that positive on our social and political institutions being able to think through very, very hard issues in a useful way. We're still blowing each other up all around the world, and 60% of kids in Ethiopia, not that they die, but they're brain-stunted due to malnutrition, right? As I've seen through our AI office in that country. So even on the issues that are out there, like world hunger and disarmament, which I've been hearing about since I was a baby, we seem to suck at dealing with those issues,
00:19:27
Speaker
relatively simple things, right? It shouldn't be that hard to stop blowing each other up over territorial disputes and to send food to little kids.
00:19:38
Speaker
But we're not even good at that, right? So I don't know how much brilliance we would have brought to bear on the social issues around AGI, even if they'd been more at the forefront.

Scaling Old AI Paradigms

00:19:54
Speaker
You mentioned that neural nets are an idea that's been out there for a while, and we didn't have the scale of data and compute for these neural nets to actually work in a convincing way.
00:20:06
Speaker
I guess that took GPUs, developed originally for gaming, but also data being created and published on the internet, for all the ingredients to come together for neural nets to work.
00:20:23
Speaker
If you look back through the history of AI as a field, do you think there are other hidden gems where the theory is sound, but we don't have the scale or the implementation for it to work yet?
00:20:39
Speaker
A high percentage of the historical AI paradigms probably will actually work when you scale them up enough. And you can see that when you dig into the details. Look at genetic algorithms and genetic programming: the use of algorithms modeling evolution by natural selection to learn things.
00:21:04
Speaker
There was a whole body of work by David Goldberg at the University of Illinois and others. He had a book on the design of innovation and another book on competent genetic algorithms.
00:21:23
Speaker
So all this work from the 80s and 90s
00:21:27
Speaker
was about using evolutionary algorithms to solve problems, but then doing back-of-the-envelope estimates of how much resource you should need for these algorithms. So if you're trying to maximize a certain fitness function using a genetic algorithm, what's your optimal population size, beyond which you're getting diminishing returns?
00:21:49
Speaker
Those optimal population sizes, when I calculated them back in the 90s, were always orders of magnitude bigger than what we could do on computers then. So I was just like, well, okay, life isn't optimal. We're not going to do the optimal population size here.
00:22:03
Speaker
We'll just do what we can. And then genetic algorithms are good at solving some problems, but they just take forever or don't work well for solving other problems.
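To make the population-size point concrete, here is a minimal genetic-algorithm sketch in Python (an illustration added for this transcript, not code from any project discussed here); the toy OneMax fitness function is a stand-in for whatever objective you are maximizing, and `population_size` is exactly the knob whose "optimal" setting was out of reach on 90s hardware.

```python
import random

def fitness(bits):
    # Toy fitness: count of 1-bits ("OneMax"), a stand-in for any objective.
    return sum(bits)

def evolve(population_size=2000, genome_len=64, generations=100, mutation_rate=0.01):
    # Random initial population of bit-string genomes.
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(population_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection; larger populations preserve more diversity,
            # which is the resource/quality trade-off discussed above.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        for _ in range(population_size):
            p1, p2 = pick(), pick()
            cut = random.randrange(1, genome_len)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve(population_size=200)  # compare 200 vs 20000 to feel the scale effect
print(fitness(best))
```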
00:22:15
Speaker
So in that case, you have decades-old theory giving reasonably strong reasons to believe it can work much better when you scale it up much bigger. Now, look at logic-based AI, which goes back to the 60s.
00:22:31
Speaker
Again, due to lack of data, really, logic-based AI got started with this sort of stupid methodology of people hand-coding common sense knowledge. A human would literally type a logical formula, like grass is green, or lawnmowers are used to mow grass, in predicate logic form. Maybe explain what the purpose was, what was the vision here? So the idea with logic-based AI was that what makes humans so different from apes and bunnies and so forth is largely our ability to do advanced logical reasoning in math and science and philosophy and whatnot, and
00:23:20
Speaker
the idea was sort of that this capability emerges almost as a kind of virtual machine on top of the lower-level animal-like machinery for seeing and moving.
00:23:32
Speaker
So maybe you can just implement that module in a way that doesn't depend that much on the underlying neural substrate. And in that direction, a calculator does arithmetic very well. Computer algebra systems do algebra very well.
00:23:47
Speaker
And they don't try to emulate exactly how the human brain does it. You've just abstracted some rules and procedures, right? And then the issue you run up against is twofold. One is, okay, but where does all the knowledge come from? Because humans, even when we're doing logical reasoning, in the end we're getting the knowledge from seeing and acting.
00:24:09
Speaker
And then you still have problems of scale, because when the human brain is doing logical reasoning, it's doing it on a tremendous amount of knowledge, right?
00:24:21
Speaker
So there was an attempt to work around that problem in the AI field, still going on now in some areas, but it was a major thing in the 90s anyway.
00:24:34
Speaker
And in the late 80s, when I got my PhD, this whole paradigm made no sense to me, which is part of why I ended up doing a PhD in math rather than AI. But basically the idea was just to type in the formal knowledge, right? Like, if people eat steak,
00:24:54
Speaker
type in a predicate-argument relation: eats(people, steak), right? Just type in common sense knowledge about the world, and then your logical reasoning system will reason based on that. That never seemed sensible to me, for the reasons that are obvious to everyone now: the amount of knowledge is just too much, it's fuzzy, it's probabilistic, it's messy, right?
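As a rough sketch of the hand-coded style being described (illustrative only, not taken from any historical system), the facts and one toy inference rule might look like this; the `likes` rule is invented purely to show forward chaining over typed-in knowledge.

```python
# Hand-coded "common sense" facts as predicate-argument tuples.
facts = {
    ("eats", "people", "steak"),
    ("is_a", "steak", "food"),
    ("is_a", "grass", "plant"),
    ("color_of", "grass", "green"),
}

def forward_chain(facts):
    # One toy rule: if X eats Y and Y is a food, then X likes Y.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (pred, x, y) in derived:
            if pred == "eats" and ("is_a", y, "food") in derived:
                candidate = ("likes", x, y)
                if candidate not in derived:
                    new.add(candidate)
        if new:
            derived |= new
            changed = True
    return derived

print(forward_chain(facts))
# The problem described above: real common sense needs billions of such facts
# and rules, plus ways to handle fuzziness and uncertainty.
```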
00:25:24
Speaker
But on the other hand, there's an argument that logic-based AI was inappropriately tarred and feathered because of its historical association with the hand-coding of knowledge. Because you can take a logic system and connect it to a camera and a microphone, right? And you can connect it to an actuator. The actual
00:25:44
Speaker
formal mechanism of using logical inference as a core engine of an AGI is not tied to that old idea of typing in hand-coded knowledge, right? And now a couple of things are different.
00:26:00
Speaker
One thing is you can use LLMs to translate natural language into logic formulas, right? The last six months' worth of top LLMs, you can have them take an English sentence and output a bunch of higher-order predicate logic or dependent type logic, whatever you want.
00:26:19
Speaker
So you can get a humongous corpus of logic expressions into your logic system without having people type them in. You can also write converters from sensory data into logic expression form.
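A hedged sketch of that pipeline: prompt an LLM to emit predicate-logic expressions and parse them into tuples for a logic store. The `call_llm` function here is a placeholder for whichever LLM API you use; it returns a canned response so the sketch stays self-contained and runnable.

```python
import re

PROMPT_TEMPLATE = (
    "Translate the following English sentence into predicate logic, "
    "one expression per line, in the form predicate(arg1, arg2):\n{sentence}"
)

def call_llm(prompt):
    # Placeholder: wire this to whatever LLM API you actually use.
    # Canned output keeps the example self-contained.
    return "owns(john, lawnmower)\nused_for(lawnmower, mowing_grass)"

def english_to_logic(sentence):
    raw = call_llm(PROMPT_TEMPLATE.format(sentence=sentence))
    expressions = []
    for line in raw.splitlines():
        m = re.match(r"(\w+)\(([^)]*)\)", line.strip())
        if m:
            predicate = m.group(1)
            args = tuple(a.strip() for a in m.group(2).split(","))
            expressions.append((predicate,) + args)
    return expressions

print(english_to_logic("John owns a lawnmower, which is used for mowing grass."))
```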
00:26:33
Speaker
And then you have a scaling problem, right? Then you say, okay, great, I have literally trillions of higher-order predicate logic expressions. How do I do reasoning based on this, right? And that's something the historical logic-based AI field just couldn't explore. So I think in some ways the AI field wasted a lot of time trying to accommodate limited compute resources.
00:27:01
Speaker
In logic-based AI, because you couldn't do reasoning on the trillion premises, you would spend a lot of time trying to craft the best 500 premises. But it turns out that was just a
00:27:16
Speaker
time-wasting way to do things. And we did a lot of tricks to get good results out of genetic algorithms with a population size of a couple thousand. It turns out that's entirely unnecessary now. It's actually much easier to get them to do good things if you just jack up the population size, right? And it requires less thinking and less work.
00:27:41
Speaker
It's sort of like all the work that went into making chess or Go playing engines before we got a machine-learning-based approach. They were very complex rule-based approaches trying to outdo basic alpha-beta pruning for playing these games. Now that's all irrelevant, right? So yeah: evolutionary learning, logic-based AI.
00:28:06
Speaker
Another example is hypervectors, which were big in the 80s and 90s. People were talking about high-dimensional sparse vectors to model episodic memory and so forth.
00:28:23
Speaker
And you just couldn't do it at large enough scale. In the last five years, there's a huge literature on doing all sorts of memory with hypervectors, hypervector-based chips and so forth. So yeah, my view is that the use of modern scaled-up compute tech and data to accelerate backpropagation-based deep neural nets is, historically, going to seem like it just happened to be the thing that got scaled up first.
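For readers unfamiliar with the hypervector idea mentioned here, a minimal hyperdimensional-computing sketch (illustrative only, not tied to any of the systems discussed): random high-dimensional bipolar vectors, binding by element-wise multiplication, bundling by summation, and recall by similarity.

```python
import numpy as np

DIM = 10_000
rng = np.random.default_rng(0)

def random_hv():
    # Random bipolar hypervector (+1/-1 entries).
    return rng.choice([-1, 1], size=DIM)

def bind(a, b):
    # Binding (role-filler association) via element-wise multiplication.
    return a * b

def bundle(*vectors):
    # Bundling (superposition) via summation, then sign to stay bipolar.
    return np.sign(np.sum(vectors, axis=0))

def similarity(a, b):
    return float(a @ b) / DIM

# Encode an episodic memory "the cup is on the table" as role-filler bindings.
roles = {name: random_hv() for name in ("object", "relation", "location")}
fillers = {name: random_hv() for name in ("cup", "on", "table")}
memory = bundle(bind(roles["object"], fillers["cup"]),
                bind(roles["relation"], fillers["on"]),
                bind(roles["location"], fillers["table"]))

# Query: what was the object? Unbind the role and find the closest known filler.
query = bind(memory, roles["object"])
best = max(fillers, key=lambda name: similarity(query, fillers[name]))
print(best)  # "cup", with high probability at this dimensionality
```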

Debates on AGI Architecture

00:28:52
Speaker
Right. And we're going to see a bunch of other historical AI paradigms get scaled up over the next few years. And it's by connecting together these scaled-up versions of various historical AI paradigms that we'll probably get to the first AGI.
00:29:13
Speaker
And at that level, this is not even such a controversial point of view. I think the question about AGI architecture on whose answer I probably differ from most deep-neural-net big tech people now is this: you could say, well, let's take a deep neural net like an LLM, use it as the hub, then add some evolutionary learning, some logic engines, a long-term memory, a working memory, add these things
00:29:46
Speaker
on the periphery around the central component of your AGI architecture, which is an LLM. I don't think that's going to work to get to full-on human-level AGI, although I think it could work to get to something doing 95% of human jobs, which is how Sam Altman tried to redefine AGI.
00:30:05
Speaker
I don't think it can get to a system that can really generalize beyond its experience after the fashion of people, which is the meaning I had for the term AGI when I introduced it.
00:30:17
Speaker
But I think if you take something else, more flexible and more introspective, and make it the central component, then an LLM can be one of the very powerful things cooperating with that central hub, feeding it knowledge and helping it synthesize things.
00:30:39
Speaker
So I guess one question about AGI architecture is: do you want a monolithic or a sort of hybrid approach, only an LLM, only a logic engine, or multiple components? Another is, if you have multiple components, is there one that's more central, and if so, which is it, right? And this sort of debate, I think, isn't resolved within the AGI R&D community. It might be that you could build many different sorts of AGIs by mixing and matching and combining historical components that have been scaled up in different ways.
00:31:20
Speaker
Do you think these limitations are fundamental, so that we need different approaches working together in order to get AGI? I think that what are different approaches and what aren't is almost
00:31:36
Speaker
a matter of culture or mindset rather than math. I posted a paper on arXiv, with a short version published in the AGI conference series, called Patterns of Cognition, covering all the core algorithms we're using in my OpenCog Hyperon project, which include logical reasoning, some variations of attractor neural nets, evolutionary learning, some concept formation.
00:32:07
Speaker
I tried to show that all of these can be cast as basically forms of approximate stochastic dynamic programming. So you can take a whole bunch of algorithms that look really different and actually cast them in a common mathematical form.
00:32:26
Speaker
It's just not the way people typically look at this, right? And the reason I did that exploration was to make sure the infrastructure for our new OpenCog Hyperon system will let all these things run fast.
00:32:42
Speaker
If I could reduce them to a common mathematical form, then there's just one thing you have to make work fast, right? So yeah, what's the same or different kind of depends on your perspective. In mathematics you see this all the time, right? In algebra you have homomorphisms between structures.
00:33:02
Speaker
In topology you have homeomorphisms. They seem different, but kind of similar. Then category theory was invented, and it's like, no, these are all morphisms, right? So you can see that what seemed to be different branches of math, done by different people coming out of different historical lineages, were actually kind of trivially seen as specializations of the same thing.
00:33:24
Speaker
And now everyone accepts they're the same thing. For sort of tribal reasons, we're looking at deep neural nets and logic systems as super different things. But look at what happens when you're in code working with these things. We have OpenCog Hyperon, which is my big AGI project. We have a network of nodes and links, and the nodes and links can have symbolic types or floating point numbers associated with them.
00:33:54
Speaker
They have update rules associated with them. Now, pretty much the difference between a neural net put into this network and a probabilistic logic system put into this network is what little nonlinear algebra function you put in the node to update the numbers that are coming in and going out, right? So it is different, it's a different way of thinking, but it's not like building a computer out of cells from a slime mold versus building it out of diamondoid nanotech or something, right? In every case, we're propagating numerical values through these node and link networks in the machine.
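A toy sketch of that point (illustrative only, not Hyperon or Atomspace code): the same node-and-link propagation loop, where the only thing distinguishing a "neural" node from a "probabilistic logic" node is the little function used to combine incoming numbers.

```python
import math

def neural_update(inputs):
    # Neural-style node: weighted sum squashed through a sigmoid.
    return 1.0 / (1.0 + math.exp(-sum(inputs)))

def noisy_or_update(inputs):
    # Probabilistic-logic-style node: noisy-OR combination of truth values in [0, 1].
    prod = 1.0
    for p in inputs:
        prod *= (1.0 - p)
    return 1.0 - prod

class Node:
    def __init__(self, name, update):
        self.name, self.update, self.value = name, update, 0.0

def propagate(nodes, links, steps=5):
    # links: (source_name, target_name, weight); identical machinery for both node types.
    by_name = {n.name: n for n in nodes}
    for _ in range(steps):
        incoming = {n.name: [] for n in nodes}
        for src, dst, w in links:
            incoming[dst].append(w * by_name[src].value)
        for n in nodes:
            if incoming[n.name]:
                n.value = n.update(incoming[n.name])
    return {n.name: round(n.value, 3) for n in nodes}

a = Node("a", neural_update)
a.value = 1.0
b = Node("b", neural_update)
b.value = 0.5
c = Node("c", noisy_or_update)  # a logic-style node living in the same graph
print(propagate([a, b, c], [("a", "c", 0.9), ("b", "c", 0.8)]))
```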
00:34:36
Speaker
And then we're debating about what nonlinear function we use to translate the input numbers into an output number. But because people are tribal and like to fight over ego and resources, these start to seem like totally opposite camps with totally different ways of thinking. Because really, a neural net is quite loosely connected with the brain anyway. People like to say it's biologically inspired, and in a very
00:35:08
Speaker
distant historical way it is, but there's no backprop in the brain, and you do have astrocytes and glia and extracellular charge diffusion in the brain. So a lot of the differences between the AI paradigms are not that big. You have evolutionary learning, but then in the 80s you had Edelman's neural Darwinism, which claimed that the neural assemblies in the brain are evolving by natural selection, right?
00:35:37
Speaker
So you could make a decent argument that, just as physics, computer science, and math are converging into one thing, all these different AI paradigms are looking more and more similar as things progress.
00:35:53
Speaker
Wouldn't you expect us to build the first AGI using the simplest methods available?

AI Development Patterns

00:36:01
Speaker
The first AGI we build will be built using the easiest way to build an AGI.
00:36:08
Speaker
And I would guess that that method is not a combination of methods, but rather something very simple that you can scale. Honestly, that doesn't seem to be how
00:36:25
Speaker
software development usually works. Or math, actually. Usually the first way you do something is kind of a kludge; it was easiest for you to do, given the materials at hand and given your point of view at the time. In mathematics,
00:36:42
Speaker
often the first proof of something is a big, horrible, ugly mess. Then, like 50 years later, you realize how simple and elegant it actually was.
00:36:54
Speaker
You could also argue that modern electric cars are going to be a lot simpler than internal combustion engines and so forth. So yeah, I think invention often happens by bricolage, cobbling together the stuff you have available to make something happen.
00:37:13
Speaker
And the most elegant, simple, Occam's razor way often comes later. Certainly physics is a mess; the Standard Model of physics is a mess, and now everyone thinks there's going to be something a lot simpler. But the early quantum mechanics was also way messier than the newer quantum mechanics of Heisenberg and Schrödinger and so on, right? It was a mass of stuff adapted from classical physics. Also, scale plays a big role here, because of things like Marcus Hutter's AIXI or Jürgen Schmidhuber's Gödel machine, similar ideas I had even before I read about those.
00:37:55
Speaker
There are mathematical arguments that you can make a really, really, really simple AGI if you just had a big enough computer, right? Because in effect, these really simple AGI algorithms involve brute-force searches over large spaces of programs.
00:38:10
Speaker
So before you take each action, you can search all programs up to a certain size and figure out which one, if you executed it, would lead you to the best action, right?
00:38:21
Speaker
Then you just run that. That's much simpler than all the garbage we do in modern AI systems. The problem is that anything approaching a brute-force search over program space is just infeasible using current hardware. Now, will it be infeasible using a femtocomputer or whatever comes later?
00:38:41
Speaker
You can't truly do complete enumeration over program space, but you could do things much closer to a brute-force search over program space if you had massively more hardware. So it might be that once we've gotten sufficiently powerful hardware, which could end up being a few years post-singularity, who knows, right,
00:39:04
Speaker
maybe once you're there, you can radically simplify AI algorithms more in the direction of Gödel machines and AIXI and whatnot. Because I think much of the complexity in modern AI is working around resource limitations.
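A toy version of the brute-force program search idea described a moment ago (illustrative only, and exponentially expensive, which is exactly the point): enumerate every short program in a tiny invented instruction language and keep the one whose execution scores best.

```python
from itertools import product

# A tiny "program space": sequences of primitive ops applied to an integer state.
OPS = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: x * 2,
    "sqr": lambda x: x * x,
}

def run(program, start=1):
    state = start
    for op in program:
        state = OPS[op](state)
    return state

def score(state, target=100):
    # Reward programs whose final state lands near the target.
    return -abs(state - target)

def best_program(max_len=4):
    # Exhaustive enumeration of all programs up to max_len; the cost grows as
    # len(OPS) ** max_len, which is why this only works at toy scale today.
    best, best_score = None, float("-inf")
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            s = score(run(program))
            if s > best_score:
                best, best_score = program, s
    return best, best_score

print(best_program())
```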
00:39:23
Speaker
And the resource limitations themselves are not simple. They're particular, right? We have GPU and CPU. We have cache RAM. We have main RAM. We have networks of computers with certain bandwidth. So it seems like as long as your infrastructure is
00:39:40
Speaker
heterogeneous in its resource limitations, you're going to end up wanting to adapt your AGI system to be somewhat heterogeneous in its operation, for efficient operation on that infrastructure.
00:39:54
Speaker
So that's simplicity conditional on your infrastructure and your data. But simplicity conditional on your infrastructure and data is not
00:40:05
Speaker
the same as simplicity by our own intuition, like you get with AIXI or a Gödel machine, right? The other thing I would say, though, is that simplicity in a deep tech stack is often carefully engineered on top of a lot of complexity, right? It seems simple to us to walk down the street because we're not aware of what's happening in our cerebellum.
00:40:33
Speaker
And writing a Python script to train a deep neural model seems really simple until you try to write the CUDA code running on the NVIDIA GPU to do the matrix multiplication, manage the cache RAM, and handle all the different layers of processors inside the GPU, right? So we've built things so that the top level most people have to deal with looks simple, but it's really a quite complex stack working around the strengths and weaknesses of the underlying infrastructure. And this is often top of mind for me, because
00:41:11
Speaker
we're trying to build an approach that isn't just deep neural nets, like we're doing in Hyperon. We have this weighted, labeled metagraph thing called the Atomspace, which is the central knowledge structure of our system.
00:41:23
Speaker
But then you've got to rebuild a whole tech stack, because most of the graph algorithms we're dealing with here are not rooted in matrix multiplication. So if you're trying to make a different AI paradigm, it's not just scripting a different algorithm.

Embodiment and Value Alignment in AGI

00:41:41
Speaker
It's repurposing pieces to build a whole different tech stack, down to the chip level, right? So there's a lot of complexity. And this is why you have to view AGI as being built by a whole huge
00:41:57
Speaker
combination of industries, right? We'll give a Turing Award to the guy who tweaks the backpropagation algorithm to converge better on recurrent nets or something.
00:42:10
Speaker
And that's all important. But obviously, if that guy were sitting on a desert island making that innovation, it's not going to make an AGI. It's taking this huge
00:42:22
Speaker
combination of hardware and software innovations, which are mostly being pursued not because of AGI, but just because they're making somebody money or letting somebody look more powerful than their opponents.
00:42:39
Speaker
Do you think physical embodiment is going to be necessary for AGI? And if so, why? So I had a funny experience with this.
00:42:51
Speaker
I was giving a talk at a non-technical futurist conference, and I was talking about people who were interested in embodiment for AI versus people who took a more disembodied approach.
00:43:04
Speaker
And this very new-agey middle-aged lady came up to me with a bunch of crystal jewelry and so forth. She's like, well, I'm so amazed someone's finally talking about disembodied AI; I've been seeing these AI poltergeists in my house since forever, right? And I'm like, no,
00:43:22
Speaker
even your poltergeist is not actually disembodied; it's embodied in an electromagnetic disturbance that we just don't fully understand. Pei Wang, another long-term AI researcher who was a pioneer in the Chinese AGI scene in the 80s and 90s, once had a paper called A Laptop as a Body.
00:43:41
Speaker
Right, I guess the point is your AI is always seeing something and doing something, right? Otherwise you as the programmer or tester could not be interacting with it, right?
00:43:55
Speaker
So it's a question of, A, what sensory and motoric bandwidth is needed to get to certain kinds of AGI, and B, what's needed to get human-like AGI, as opposed to just arbitrarily intelligent AGI that might be off in a different direction than humans, right?
00:44:14
Speaker
To get human-like AGI, how much do you want to have a human-like embodiment? On the first question, I think robust embodiment is convenient, but probably not necessary. I would imagine you could get a vastly superhuman AGI with a much more restricted sensorium and motoric world than people have. It sort of depends on what you want to do. If you started by making a theorem prover and a sort of scientific research assistant that's doing symbol manipulation, then you can give it
00:44:55
Speaker
limited insight into the physical world, and that probably can work fine, right? And there are a lot of camera inputs on the internet that it can use without having its own body to tool around in.
00:45:07
Speaker
There are two issues with that sort of approach. One is
00:45:12
Speaker
that it's harder for us to know what's going on, because if the mind you're building is very non-human, you don't have so much intuition to go on in designing and testing it.
00:45:23
Speaker
But also, a very non-human AGI like that, for better or worse, will probably have
00:45:32
Speaker
less of a strong understanding of what it is to be human, of human values and culture and all that, right? So I think there's a stronger argument that if we want an AGI we can relate to on a sort of Buberian, I-Thou level, relate to on a deep level, then for that AGI to have something vaguely resembling a human embodiment is probably quite valuable, right? For the same reason, I can empathize with other men better than with women in some ways, and I can empathize with other people better than with apes or rats, right? Having an embodiment like ours doesn't guarantee that it's empathic toward us and understands what we're up against as humans, but it would kind of give it a head start, right? So I think
00:46:24
Speaker
what we can do, certainly with a Hyperon-type architecture and in different ways with deep neural net architectures, is take proto-AGI systems with different bodies and different levels of attachment to their embodiment.
00:46:40
Speaker
You can have them learn stuff, and you can then network them together and even merge their knowledge bases in some ways, right? Which is something we can't do in the human sphere all that well. So you can take a chat system, you can take a proto-AGI system used to control a humanoid robot,
00:47:00
Speaker
you can take a system controlling biology lab equipment, and, with some work and some caveats, you can have what's learned by all these systems combined together to synergize and fuel a sort of semi-coherent overall artificial mind. So I don't think it has to be either-or. I do think, though, that the ease of doing things with embodiment is increasing very fast also, right?
00:47:32
Speaker
So I worked for years with David Hanson at Hanson Robotics. We made Sophia, the first robot citizen; I led the software team behind that. I'm still working with David, but we have a different robotics project called Mind Children. And in the last nine months or so, we put together a three-and-a-half-foot-tall
00:47:52
Speaker
humanoid robot. It can look at you. It can talk. It can pick things up. It's not walking; it has wheels and rolls around the room. But the ease of making your own robot with the properties you want, for teaching and evaluating your proto-AGI system, is incredibly greater than five years ago, let alone 20 years ago.
00:48:15
Speaker
It seems like what's happening is that early-stage proto-AGI stuff is just being tried out in a variety of humanoid robots, along with other applications.
00:48:29
Speaker
And then the knowledge bases will just get munged together somehow, so that the field isn't requiring itself to ask the either-or question. We're just doing both, which is what you get from having more attention and resources coming into the field, generally speaking.
00:48:49
Speaker
Do you think the notion of aligning AIs with human values makes sense? And what are the best approaches here?
00:49:00
Speaker
Alignment is not a term or language that comes naturally to me, but I mean, I think the intention behind it is probably something fairly reasonable.
00:49:13
Speaker
The first thing I would note is that people are not very well aligned with themselves, let alone with each other. So what the bar is for alignment with humanity needs to be thought through carefully.
00:49:33
Speaker
I've probably gotten more self-aware as I've gotten older, through meditation and various other practices. But one of the things one becomes aware of then is how incoherent and non-unified one's own self is, right?
00:49:47
Speaker
So when I visit sub-Saharan Africa, I will give a decent pile of money to poor, suffering people I see in the street.
00:49:58
Speaker
When I come back home to the Seattle area, I send less money to those people. And I will go out and buy a piece of weird keyboard equipment to play music instead of sending all the disposable income that I have to save kids who are starving in Africa.
00:50:17
Speaker
Yet if I were in front of those kids and had the choice to buy food for the kid right in front of me versus buy a keyboard, I would probably buy food for that kid and not own a keyboard, right, and just learn to play a cheaper instrument.
00:50:33
Speaker
So I can see that I myself am not entirely morally coherent. I don't beat myself up too much; I've given a lot of money to initiatives in Africa and spent a lot of time on it, right? But I don't feel the need to force myself to be entirely coherent either. I think most humans, probably all humans, are more like clusters of behavior patterns than unified, rational, coherent entities, if we really are honest with ourselves.
00:51:07
Speaker
And that's even more so on the collective level, right? If I go through rural Ethiopia, which is a beautiful place where I love to travel, the average people are heavily Ethiopian Orthodox Christian, right? And they don't think AGI will ever have a soul, even if it will be much smarter than us.
00:51:30
Speaker
And the attitude there is rabidly homophobic, right? Whereas my mom is gay; I was raised in a totally queer-friendly way, to be honest. Now, these are lovely people in Ethiopian villages. They're just raised to believe that you'll burn in hell if you're gay, right?
00:51:51
Speaker
So if you look at the lack of alignment in each of our own minds, if we're honest with ourselves, and then the lack of alignment among different human beings, you've got to ask, what exactly are we thinking
00:52:07
Speaker
the AGI is supposed to align to? Is it Elon Musk's value system? That doesn't seem very coherent either, right? Is it the weighted average of all Silicon Valley VCs and software developers, weighted by their bank account or their IQ? It's not really clear what you want to align with. So I end up thinking about it a little differently than that, but it may capture the spirit of what people are looking at with alignment.
00:52:41
Speaker
One approach is to talk about some minimal set of commitments you would want the AIs aligned to. Something like: you want the AIs to not destroy humanity, so not cause our extinction, and you want the AIs to not be in complete control over humanity. Of course, there are some people who disagree with those notions, but I think that's something you would find quite wide agreement on among many humans.
00:53:13
Speaker
I don't think the main issue with that is that some people would disagree with them. I think the main issue with that has been highlighted in science fiction since before I was born, and it was summarized very well by Eliezer Yudkowsky, who I differ with on a number of things, but I've known him since forever and we've agreed on a lot of things too. He made the point that human values are complex, right? And we summarize them in natural language in ways that we culturally have a common understanding of, so it makes us think it's simple.
00:53:50
Speaker
But these things are really very, very ambiguous, and their interpretation as we think of it depends on a bunch of implicit cultural assumptions. And this was highlighted in the science fiction book The Humanoids, which used to be required reading in MIT's AI department way back when, before AI was so popular.
00:54:10
Speaker
In this book by Jack Williamson, which I read probably in '75 or something, when I was a kid, people create these human-level intelligent humanoids, which are more physically powerful than humans.
00:54:24
Speaker
And they give them a mandate to serve and protect and guard men from harm, right? And everyone in my generation in the AI field read this book. And of course, the humanoids wouldn't let people use power tools, wouldn't let people use hammers. In the end, if you were upset about your girlfriend dumping you, they would inject you with some euphoriant, because that obviously was causing you harm, right? So they interpreted serve and protect and guard men from harm in a different way than the authors had originally intended. And one of the lessons of the failure of rule-based expert system AI, where you code all the AI's knowledge
00:55:08
Speaker
by hand, one of the lessons there is that even if you decide to refine serve and protect and guard men from harm into a whole volume of logic expressions, it's still not enough.
00:55:19
Speaker
There's always a loophole. There's always room for interpretation. And of course, this is why our legal system has case law, right? We try to enumerate law in detail, but then in the end, judges have to use nearest-neighbor matching in a very fuzzy and informal way against a bunch of cases, which causes really annoying problems, as we see now in the US Supreme Court, right? But it's for the reason that Eliezer said: human values are complex.
00:55:50
Speaker
And what we mean by something like don't cause the extinction of humanity seems like it's straightforward, but it's not. Because some people will argue that replacing human cells with genetically engineered cells is the end of humanity.
00:56:09
Speaker
Some will argue that replacing them with robotic cells is the end of humanity. Some would argue a brain chip implant is the end of humanity. Some would argue staring at your phone all day is the end of humanity. You can try to enumerate every case, but you're basically guaranteed that the world will throw at you some case that wasn't in your enumeration, right? For one thing, we don't have a formalization of the world, and we don't know what new technologies and trends are going to emerge. So the thing is that enumerating some principles you want the AGI to follow,
00:56:48
Speaker
of course you want to do it. Of course it makes sense. But it's foolish to think that's going to give you anything resembling a guarantee. I think those maxims have to sit on top of some more implicit resonance of the AI with human values.
00:57:09
Speaker
This is sort of like raising human kids, right? I have five kids and one granddaughter. You see that giving your kids some core principles they have to obey and telling them these principles over and over, or even rewarding and punishing them for obeying the principles or not, does not work very well.
00:57:32
Speaker
But if you raise your kids with the right vibe of compassion and values, and you carry out activities together with them in which you're collectively pursuing things in accordance with your values,
00:57:49
Speaker
and then on top of that you tell them some core principles that sort of reify and abstract what they've gotten implicitly through the shared activity with you,
00:58:03
Speaker
that can work reasonably well. And this is what education systems have always tried to do, right? Now, you might say you don't have to do that with an AI system,
00:58:14
Speaker
that that's just because people are perverse. But I think for any human-like AGI architecture it's going to be like that, because you have this vast, teeming mass of self-organizing activity that's conditioned based on experience.
00:58:27
Speaker
And then the rules and principles you give it are just guiding this vast, teeming mass of self-organizing activity. In the end, that will be true if you have a huge logic engine as well as if you have a huge neural net.
00:58:39
Speaker
Because in any case you've got a massive amount of stuff going on that's not predictable in detail by the programmer. And you need it to be making up its own stuff as it goes along, right? Otherwise it's not going to get to a human level of general intelligence. So, going back to alignment, or things resembling alignment that might be more meaningful or achievable, I think
00:59:10
Speaker
you can hope for compassion. You can try to get AGI systems that are empathic and compassionate toward people, as well as declaratively understanding human values, which LLMs can already do.
00:59:26
Speaker
You can hope for AGI systems that are compassionate and have a working practical understanding of human values. You can also think about what I would call meta-goals to put into an AGI system.
00:59:38
Speaker
So you can ask the AGI system to have as a value, as a meta-goal: don't change your top-level goals very fast or heedlessly, right? I don't think it's fair or workable to try to build an AGI system that keeps the initial top-level goals we gave it in precise form forever.
00:59:59
Speaker
I think if you do that, it will just try to work around them in various crazy ways, sort of like humanity has. We had a goal to reproduce; hey, we invented birth control, and we hacked around all the mechanisms there, right? So I think if you try to constrain an AGI to rigidly hold the top-level goals that the original programmers put in,
01:00:19
Speaker
it won't work. Self-organization will just kind of work around that and you get a perverse system. I think it could work better to make an AGI have a top-level goal to evolve its top-level goals in a moderated and responsible way, after interacting with the others in its environment and reflecting on itself carefully.
01:00:40
Speaker
So I mean, I think you can design a meta-goal system in a way that decreases the odds of the AGI system weirdly going off in a totally non-human direction.
01:00:53
Speaker
What you want is for it to have a top-level value of evolving its own value system in a way that has a high mutual information with the evolution of human value systems, right? Because human value systems are complex, self-contradictory, incoherent, heterogeneous, always changing.
01:01:12
Speaker
They're evolving. The AGI's value system, I think, can be more coherent than human values, but will also be evolving. And you'd like there to be give and take and mutual information between the two evolving value systems.
01:01:29
Speaker
And you can bake that into the value system of the AGI. Like, yeah, you're going to change your values, but as you do, be sure you're closely connecting with human values as they change in that process.
01:01:44
Speaker
That probably is a species of alignment. It's just not what many people are thinking about when they're talking about alignment, because they seem to be thinking more like there's some core of human values, and we can get the core of the AI's values to just go alongside it.
01:02:05
Speaker
My feeling is it's more like human values are moving around like that, and then you want the AI values to follow their own chaotic orbit, sort of coupled a bit with the chaotic orbit of human values.
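One hedged way to picture the "high mutual information between the two evolving value systems" idea: if you discretize both value systems into toy states, mutual information measures how much knowing the human state tells you about the AGI state. Everything below, including the coupling probability and the number of states, is made up purely for illustration.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) of a discrete joint distribution
    over (human value state, AGI value state)."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal over human states
    py = joint.sum(axis=0, keepdims=True)   # marginal over AGI states
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px * py)[nz])).sum())

def joint_counts(a, b, k=4):
    """Count co-occurrences of two integer-coded state sequences."""
    m = np.zeros((k, k))
    for x, y in zip(a, b):
        m[x, y] += 1
    return m

rng = np.random.default_rng(0)
human = rng.integers(0, 4, size=10_000)                        # toy human value states
coupled = np.where(rng.random(10_000) < 0.8,                   # AGI tracks humans 80% of the time
                   human, rng.integers(0, 4, size=10_000))
independent = rng.integers(0, 4, size=10_000)                  # AGI drifts on its own

print(mutual_information(joint_counts(human, coupled)))        # substantially above zero
print(mutual_information(joint_counts(human, independent)))    # close to zero
```

A strongly coupled pair of trajectories gives a mutual information well above zero; fully independent drift gives a value near zero, which is the failure mode being described here.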
01:02:18
Speaker
And this perspective just makes it much harder to think about guarantees. Some people seem to want guarantees, and I don't think we're going to have guarantees.
01:02:29
Speaker
We're going to have very fudgy, probably approximately correct value systems rather than guaranteed value systems. When you look at where we are with AI progress right now, we're seeing incredible performance on a number of benchmarks: math performance, programming abilities.
01:02:53
Speaker
AIs are passing all kinds of tests, college-level exams. What does that mean for impact in the real world? Because I see the AI optimists being very impressed by these benchmarks, whereas the AI pessimists are asking questions such as, you know, when will AI show up in productivity statistics, or in GDP, or in unemployment?

AI's Societal Impact and Adoption

01:03:18
Speaker
And yeah, what do you think of that disagreement?
01:03:22
Speaker
So I think that the rollout of AI tech into the practical economy is gated by human stupidity, human culture, and human ego, and all the constructs we have governing our world, right? So while I'm not an optimist that LLMs can lead to human-level AGI, because I think they don't have creativity in a fundamental sense,
01:03:54
Speaker
I am an optimist that LLMs with minor additions and tweaks could take over like 95% of human jobs. I agree with Sam Altman on that point.
01:04:08
Speaker
I just think that 95% of human jobs, as a big hand-wavy figure, can be done without fundamental creativity or inventiveness. So if you have a system that can do really clever nearest-neighbor matching against everything people have done as recorded on the internet... I mean, most of what people are doing is a repetition of something that's already been done and recorded on the internet, right? So I mean, I think
01:04:38
Speaker
We could roll out deep-neural-net-driven systems to do a tremendous variety of human jobs right now. It's not happening that fast just because that's not how
01:04:54
Speaker
society and the industry are organized, right? And then that becomes more a socio-psychological question. I mean, a very simple example:
01:05:06
Speaker
Some friends of mine had a startup company called Apprente a number of years ago. Among other things, they automated the McDonald's drive-thru. And it worked. I used the system.
01:05:18
Speaker
It was rolled out in some McDonald's somewhere in the Midwest. Then, due to some organizational issues within McDonald's, that was rolled back. Now they're planning to roll out a new system, right? So, I mean, that will happen.
01:05:34
Speaker
It can be done by AI right now. It's not perfect, but the people aren't perfect either. I mean, that could have been rolled out five years ago, right? And that same story happens
01:05:46
Speaker
all over the place. Even when AI could do the job, and could do it cheaper and better than people, the rollout is very slow because society is organized a certain way and there's a lot of momentum. And I mean, law is another thing like that, right? Fundamentally, right now, a great amount of paralegal work and drafting of contracts and so on can be done by LLMs.
01:06:16
Speaker
Lawyers and paralegals are using them to do their work and then charging an hourly rate for what was actually two minutes of going on to ChatGPT or DeepSeek or something, right? But the legal profession is in no hurry to restructure and optimize itself around the use of large language models, and there are all these protections, like licenses to practice, and so forth. So yeah, I really think, on the one hand, Gary Marcus and other LLM pessimists are correct that some people oversell LLMs and there are limits to their general intelligence, totally.
01:07:00
Speaker
On the other hand, I think, if everyone was lazy and didn't want to work, and we had the political will to just give people free money, I mean, we could reorganize society so that AI right now would do a tremendous majority of jobs. We're just not doing it. I mean,
01:07:22
Speaker
look at the slow rollout of automated convenience stores, right? Amazon had these stores where a camera would just take a picture of your food when you leave.
01:07:38
Speaker
There's no question in my mind this technology could work right now, right? It's not that hard. But then, of course, people are jerks and want to steal stuff from the store. And being policed by a RoboCop has a different social vibe than being policed by a human security guard, right? So there are all these social and psychological issues that slow down adoption. What that means is that the bar is pretty high, right? The AGI has to be way, way better or way, way cheaper than people.
01:08:16
Speaker
And when the margin is enough, then the social obstacles to adoption will be overcome. So Calum Chace, a friend of mine from the UK who's written a bunch of books on this sort of thing, sort of thinks the great obsolescence of human jobs will come in one huge batch, because he figures at a certain point you'll be close enough to AGI that the cost savings and the efficiency and quality gains are just too much for people to ignore.
01:08:48
Speaker
And everyone will just have to immediately roll it out. Then it will happen in a big wave all over the place. And I think it might happen that way. But I think that's more because of the sort of phase-transition dynamics of the human social networks making up the economy, rather than necessarily because of the AI
01:09:15
Speaker
capabilities. As another example, I can look at that in music, because I'm a musician and do a bunch of computer music stuff, right?
01:09:25
Speaker
So I mean, if you trained an LLM or a comparable deep neural net on all the music up to the year 1900, and it sees nothing after 1900,
01:09:37
Speaker
that AI will never invent neoclassical metal, grindcore, progressive jazz, hip-hop, right? It's not going to synthesize that from music before 1900.
01:09:48
Speaker
If you ask it to put together West African rhythm with Western classical music, it'll set a Bach fugue to a West African beat or something, which could be interesting, but it's not the same as the deeper fusion that happened to create jazz or something, right?
01:10:05
Speaker
So there is missing creativity. Like, you're not going to have the next Jimi Hendrix or John Coltrane be a deep neural net of the current style of deep neural net.
01:10:19
Speaker
On the other hand, almost none of the music industry requires that, right? So if you're looking at, say, making background music for my video game or my advertisement or my movie or something, or even generating a pop song to play on Spotify or in the elevator,
01:10:41
Speaker
these are problems already solved by AI music generation. It's just that record labels don't want it. The music industry doesn't want it. Musicians don't want it, right? So the rollout there is gated by what the community of humans involved wants, rather than by what the technology can already demonstrably do.
01:11:03
Speaker
But that situation can't last forever, right? That situation will face pressure from the market. Yeah, clearly so. But then the question you ask is when, and the point is that when is more about these social dynamics, and then regulatory capture by groups that feel threatened, right? You saw that with the medical profession for a long time. For a long time, we had AI that could
01:11:31
Speaker
diagnose disease based on symptoms as well as or better than a doctor. We had that from rule-based AI even before modern neural nets. But I mean, the medical industry will not allow that to be rolled out. I saw that in China ten years ago, in the waiting room of a hospital in Shanghai.
01:11:50
Speaker
They had a WeChat bot where you could just tell your symptoms to the bot, and it would tell you what was wrong with you before you went in to see the doctor. The doctor would just double-check what the WeChat bot said, right? So China rolled that out in a number of hospitals that I saw personally ten-plus years ago. We still don't have it, right? You're just sitting in the waiting room, getting sick from the person next to you and filling out forms and listening to music for four hours, right? So, I mean, that's
01:12:20
Speaker
just because the US medical regulatory establishment is much worse than China's, basically. Yeah, I think Ray Kurzweil's prognostication of human-level AGI by 2029, which he put out in his book The Singularity Is Near in 2005, is looking remarkably prescient, right? We might beat it by a couple of years.
01:12:46
Speaker
We might get there in 2026, '27, '28. It might be a few years slower than that. But on the whole, that seems fairly on the mark as predictions go.
01:12:58
Speaker
And I'm sort of getting inclined toward Calum Chace's idea that right around that time of the breakthrough to human-level AGI is when the massive job obsolescence will occur.
01:13:12
Speaker
Because it seems like there's so much psychological and institutional resistance to it that AI is just going to take over different industries in a weird, erratic pattern, gated by the fact that people don't want their jobs obsoleted and that the people running companies are not that savvy about AI in most verticals. Now, I think, however, Ray was too pessimistic when he said human-level AGI in 2029 and superintelligence in 2045.

The Path from AGI to Superintelligence

01:13:47
Speaker
I don't think there'll be a 16-year gap. I think there'll be a gap of
01:13:53
Speaker
one to three years or something. I mean, probably the thing slowing down the transition will be the AGI's own conservatism about how fast it wants to responsibly self-improve.
01:14:05
Speaker
Because I feel like once you have a human-level AGI, it should be able to increase its intelligence by an order of magnitude, qualitatively speaking, at least just by software improvements, because it's going to be a better AGI programmer than we are.
01:14:19
Speaker
And then you get into hardware improvements. I mean, if you have an AGI robotizing factories and dealing with the hardware, it's not going to take it more than a couple of years to make a radically superior
01:14:34
Speaker
batch of chips and so forth. It should be able to speed up by some small integer factor how fast we can roll out new chips, if it's a human-level AGI built on top of current tech; they can already do math and engineering better than people in some senses.
01:14:54
Speaker
So I think even if Calum is right, I mean, we're then like, what, three to six years from the massive elimination of human jobs, which is a situation our social and political systems are not especially well prepared for, particularly on a global level if you look at the developing world, but we're not well prepared even in the developed world.
01:15:23
Speaker
Do you think the transition from AGI to superintelligence would be slowed down by the physical world? So just gathering enough materials to create enough chips to train a sufficiently large model to get to a higher level of intelligence?
01:15:43
Speaker
No, no, because I think the whole direction of training larger and larger models is sort of intellectually bankrupt. And I mean, an LLM already has a lot more data than I do, and it's not as generally intelligent as I am yet.
01:16:02
Speaker
So, I mean, on the one hand, yeah, you need a lot of computers and you need a lot of data, but I think you don't need as much data as modern LLMs have to make a human-level AGI, in my view. They already know more than you and I do, within their weak ability to know things. So my gut feel is that once we get an AGI,
01:16:33
Speaker
that AGI will be able to increase its efficiency of operation tremendously just by improving its software, right? And then you won't even need to roll out new hardware or get more data to become a superintelligence.
01:16:51
Speaker
On the other hand, of course, it will be able to design new hardware also, and there's a bit more time lag to that. But if you look at the software stacks that we're using, I mean, we have these servers, we have CUDA and OpenCL, we have Linux, then we have like a Rust kernel, and on top of Rust
01:17:11
Speaker
we have our own AGI language, MeTTa. This stack is utterly not the optimal way to implement the AGI that we're trying to build on top of it, right? If the AGI just rebuilt everything from the ground up, without having to go through all these awkward layers that are built for human understanding and are there for historical reasons, really, I don't see a big obstacle to massively optimizing
01:17:44
Speaker
the AGI into an ASI. I think if there's a slowdown factor, it would be the AGI's own value system. Even if I could rewrite my own brain arbitrarily fast, I probably wouldn't, right? I might be more reckless than some people, but I mean, I want to survive. I care about my mental well-being and that of my family and friends, right? So there's an argument, for the sake of safety and common sense, that you make small changes and improvements, see how they pan out in the real world, make other small changes, see how they pan out in the real world, and roll back if they aren't working out.
01:18:25
Speaker
So probably the transition from AGI to ASI will be gated by responsible self-modification on the part of the AGI. This is where we hit the big challenge that I see happening in this transition period, though.
01:18:44
Speaker
I think that you have the following
01:18:50
Speaker
sort of Scylla versus Charybdis issue, a rock and a hard place issue,

Global Competition and Safety in AGI

01:18:55
Speaker
right? The issue is if you have multiple competing efforts at AGI,
01:19:02
Speaker
For example, an AGI arms race between the US and China, such as certain national leaders are currently advocating, right? So in that sort of situation, let's say that multiple parties get a human-level AGI around the same time, which is almost guaranteed to happen, right? Even if you kept everything locked down, that doesn't work too well. Someone gets poached by someone else and offered $10 million to share the trade secrets, right? I mean, we can see that with transformers going from Google to OpenAI, then back to Google and to DeepSeek and so on.
01:19:38
Speaker
You can see that you keep things locked down a little while, but not that long, right? So if you have multiple competing parties building AGIs, then,
01:19:51
Speaker
in order to have a moderated pace of advance toward superintelligence, you would need agreement from all the parties controlling the AGIs about moderating the pace of development.
01:20:04
Speaker
And then you have a really annoying arms-race psychology, right? Because clearly, I mean, from where I stand right now, it might look different once we have the AGI, but it seems to me now that the most responsible thing to do is not to go from AGI to superintelligence at the maximum possible speed, right?
01:20:26
Speaker
Probably the most responsible thing will be to do that by baby steps and do some experimental information-gathering as you go. I mean, maybe not. Maybe the AGI will just
01:20:40
Speaker
print out a very compelling argument for why upgrading to superintelligence in one fell swoop is the best thing. Maybe it'll be right. But supposing that some gradual increase from AGI to ASI is the best thing to do, that requires a lot of trust among the competing parties, right? Because the US's AGI isn't going to want to go slowly if it thinks China's AGI is going fast, and a decentralized network building AGI isn't going to vote to go slowly if it thinks the centralized network that wants to make it illegal and put its developers in jail is going fast, right?
01:21:16
Speaker
So this is potentially dangerous and annoying, right? I mean, we already have the makings of an AGI arms race, but before we actually have AGI, it's mostly a matter of humans using tools to advance their own particular sectarian interests. But if that's transmitted
01:21:43
Speaker
into the AGIs, where they're driving their own development, then what exactly happens? Because each AGI, to the extent it either implicitly or explicitly wants to keep surviving, right, will see the other AGI getting to the singularity first as a threat to its own survival and to the survival of the people it loves who created it. The AGI might think, just to save the lives of my creators, who I've been trained to empathize with, I need to make myself smarter and smarter before this other AGI that's ruled by these guys
01:22:19
Speaker
who literally want to kill my creators, right? And this is sort of what our geopolitical system is tending toward now, right? There's a possibility that having the first AGI be a sort of decentralized, open global brain can defuse that dynamic, because you'll have a sort of decentralized, open thing which is just smarter than any of the sectarian AIs.
01:22:51
Speaker
And then more resources just get pitched into the open, decentralized global brain, and the sectarian ones can't catch up. I mean, that's a possibility, but obviously there are a lot of different uncertainties there, more than anyone could think of immediately.
01:23:09
Speaker
Ben, as a last question here, what do we do about all of this? You've sketched out a situation where it seems that we are racing towards AGI, and there might be competition to get there, and this competition could be quite dangerous.
01:23:26
Speaker
So what do we do? Do we have different layers or different options that we can put together to get a good outcome here? It seems like the most plausible course to a beneficial outcome I can see is that the first AGI is created by some group that wants to make it open and decentralized, and doesn't care about controlling it personally, and doesn't care about putting their own personal value system in it as opposed to all the other human values.
01:24:06
Speaker
And then this first AGI has got to rapidly make a plan for secure, beneficial AGI development and rollout across the world, and then people have to choose to adopt that, right? And I mean, I don't think it's unthinkable. So if you look at Steve Omohundro and Max Tegmark from the Future of Life Institute,
01:24:32
Speaker
they've written some stuff about trying to make a provably secure infrastructure for all the technology in the world, right? And I mean, I love the idea. I've done research myself on how to make systems provably secure, both in computing and in LLMs and so forth.
01:24:58
Speaker
I don't think that's terribly viable to roll out in the near term. And I argued this with Steve, who I've known a long time, in an interview we did, for a couple of reasons. I mean, one is,
01:25:13
Speaker
it's just a lot more expensive to do things in the secure way. So, I mean, I posted a paper recently on a secure transformer architecture.
01:25:25
Speaker
That will only slow things down by a factor of two or so, just to make a transformer that isn't so susceptible to prompt injection attacks. And that's slowing down by a factor of two to protect against one kind of attack vector; it's not provably secure.
01:25:41
Speaker
If you look at homomorphic encryption or something, which you need to make AI processes really secure with respect to other people hacking in and spying and seeing what they're doing, I mean, right now that slows you down by a factor of several hundred.
01:25:55
Speaker
I did some interesting calculations suggesting you could do homomorphic encryption of complex programs with maybe only one order of magnitude slowdown on a quantum computer.
01:26:07
Speaker
So interestingly, it might be that some of the security is easier when you're into quantum computing, just because of the different way quantum computers operate. But even so, it's just way more expensive to do things in a provably secure way.
01:26:21
Speaker
Add on top of that the fact that we don't really know how to do it yet for most of the processes in our global tech stack. It's a really interesting research area, but it is a research area, right?
01:26:31
Speaker
So you would be asking the world to pause all sorts of development and instead put huge amounts of money into researching fundamental provability of safety for different parts of our tech stack, at a time when the US is cutting funding for the NSF and for all sorts of basic research, right?
01:26:53
Speaker
So it doesn't seem plausible in that sense. And there's also the arms-race dynamic, where if the US chose to slow itself down by making everything provably secure and China or Russia didn't, the US is quickly going to decide that's stupid.
01:27:11
Speaker
But suppose your first AGI thinks provable security is important, right? And suppose that first AGI then solves these tech problems faster than humans have been able to, and tells you how to make chips and operating systems and LLMs and reasoning systems and infrastructure that are provably secure. Like, okay, here's provably secure embedded Linux, you know, version one, right?
01:27:38
Speaker
So if the first AGI is oriented towards benefit and safety for humans as well as AGIs, and figures out how to make the right sort of infrastructure,
01:27:51
Speaker
and if the first AGI manages to roll that out, I mean, then you could have a more beneficial transition. And if you want to go science-fictional, this can be achieved in various ways, right? You could say the first AGI releases a botnet that just replaces everyone's OS with a provably secure OS, and then, badda-bing, you're done, right? I mean, that's how it would work in the science fiction story, and it could happen; you can't rule it out. There's also
01:28:27
Speaker
a possible future where this plays out like global nuclear disarmament, or like treaties on biological weaponry or something, right? Where once you really have the AGI and it's there in front of you, and it's really clear, whoa, this thing is smarter than people, then suddenly the people running major world governments are like, well, okay, yes, we will adopt this safety protocol. And then the AGI presumably would roll out technology that allowed monitoring of whether the safety protocol was actually being adopted.
01:29:00
Speaker
So it seems like that's at least a plausible avenue, right? Like the first AGI leads the process of enforcing some reasonable safety for the next stage.
01:29:12
Speaker
But the reason I think that's more plausible after the first AGI is launched is that I'm hoping the first AGI can just make security-by-design not be obscenely expensive and difficult, which it is
01:29:28
Speaker
right now. And it seems like for humans to make it not obscenely expensive and difficult will take a long time, and the AI industry isn't going to pause for it, right? So to make this happen, you would need the first AGI to be beneficially oriented and to have a value system that makes it want to do this sort of thing. It would also need to be really good at tech. It seems like the really-good-at-tech part is kind of falling into place, right? I mean, we don't have AGI yet, but already LLMs are remarkably good at doing different sorts of math and physics. I mean, they can't ground their math and physics activity in an overall context, so there's still a lot missing, but on the whole, the direction is:
01:30:19
Speaker
the first AGI will probably be really good at math, engineering, and physics. So it seems like the value system part is the part that has to fall into place.
01:30:32
Speaker
I don't think that's hard from a conceptual or engineering standpoint. And I'll have some papers on that that I'll present at the AGI-25 conference, which we're having in Reykjavik in August. We've had a conference on AGI R&D every year since 2006 or so.
01:30:49
Speaker
I think it may be hard in that the value system that OpenAI or the Chinese government puts on their first AGI system may not end up being the right one to make the first AGI
01:31:11
Speaker
properly steward the transition from AGI to ASI. I mean, my hope is that by developing OpenCog Hyperon as a sort of hybrid, multi-paradigm AGI system and rolling that out on global decentralized networks with full openness,
01:31:31
Speaker
my hope is we can get a sort of value system that's determined by a sort of informal participatory democracy among various interested parties, and we can get the right value system there.
01:31:44
Speaker
But I mean, I don't see any social guarantees here either, right? So in that sense, our species is on a very high-risk, high-reward
01:31:58
Speaker
trajectory by any rational reckoning. In my heart, I tend to be very optimistic about how the singularity will come out, sort of based on a personal or spiritual intuition about it.
01:32:14
Speaker
But if I look at the situation analytically, the confidence interval is very, very wide, and there's tremendous uncertainty on all sorts of important points.
01:32:26
Speaker
Ben, thanks for chatting with me. It's been really interesting. Yeah, thanks. Thanks for the good questions.