Introduction and Podcast Details
00:00:04
Speaker
Welcome to the Future of Life Institute Podcast. I'm Lucas Perry. Today's episode is with Joscha Bach and Anthony Aguirre, where we explore what truth is, how the universe ultimately may be computational,
00:00:20
Speaker
what purpose and meaning are relative to life's drive towards complexity and order, collective coordination problems, incentive structures, and how these both affect civilizational longevity, and whether having a single, multiple, or many AGI systems existing simultaneously is more likely to lead to beneficial outcomes for humanity.
00:00:43
Speaker
If you enjoy this podcast, you can subscribe on your preferred podcasting platform by searching for the Future of Life Institute Podcast, or you can help out by leaving us a review on iTunes. These reviews are a big help in getting the podcast to more people. Before we get into the episode, the Future of Life Institute has three new job postings for full-time equivalent remote policy-focused positions.
00:01:10
Speaker
We're looking for a director of European policy, a policy advocate, and a policy researcher. These openings will mainly be focused on AI policy and governance. Additional policy areas of interest may include lethal autonomous weapons, synthetic biology, nuclear weapons policy, and the management of existential and global catastrophic risk.
00:01:35
Speaker
You can find more details about these positions at futureoflife.org slash job dash postings. Link in the description. The deadline for submission is April 4th. And if you have any questions about any of these positions, feel free to reach out to jobsadmin at futureoflife.org.
Meet the Guests: Joscha Bach and Anthony Aguirre
00:01:54
Speaker
Joscha Bach is a cognitive scientist and AI researcher working as a principal AI engineer at Intel Labs. He was previously a VP of research for AI Foundation and a research scientist at both the MIT Media Lab as well as the Harvard Program for Evolutionary Dynamics.
00:02:14
Speaker
Joscha earned his PhD in cognitive science and has built computational models of motivated decision making, perception, categorization, and concept formation.
00:02:26
Speaker
Anthony Aguirre is a physicist that studies the formation, nature, and evolution of the universe, focusing primarily on the model of eternal inflation, the idea that inflation goes on forever in some regions of the universe, and what it may mean for the ultimate beginning of the universe and time. He is the co-founder and associate scientific director of the Foundational Questions Institute, and is also a co-founder of the Future of Life Institute.
00:02:55
Speaker
He also co-founded Metaculus, which is an effort to optimally aggregate predictions about scientific discoveries, technological breakthroughs, and other interesting issues. And with that, let's get into our conversation with Joscha Bach and Anthony Aguirre.
What is Truth in Formal Languages?
00:03:17
Speaker
All right, let's get into it. Let's start off with a simple question here. What is truth? I don't think that we can define truth outside of the context of languages. And so truth is typically formally defined in formal languages, which means in the domain of mathematics. And in philosophy, we have an informal notion of truth.
00:03:38
Speaker
and a vast array of them, a number of truth theories. And typically, I think, when we talk about any kind of external reality, we cannot know truth in the same sense as we can know it in a formal system; what we can describe is whether a model is predictive. And so we end up basically looking at a space of theories and we try to determine the shape of that space.
00:04:03
Speaker
Do you think that there are kinds of knowledge that exist outside of that space? For example, the knowledge that I am aware?
00:04:12
Speaker
I think that knowledge exists only inside of a mind, that is, inside of an agent that observes things. Knowledge is not arbitrary, of course, because a lot of knowledge is formally derived, which means it has a similar structure to a fractal, for instance the Mandelbrot fractal, where you start out with some initial conditions and some rules, and then you derive additional information from that. But essentially, knowledge is conditional, because it depends on some axiomatic presupposition that you need to introduce.
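Bach's fractal analogy can be made concrete with a toy sketch (my illustration, not code from the conversation): the Mandelbrot set is fully determined by one rule, z → z² + c, plus the initial condition z = 0, and everything else about it is derived information.

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Iterate the rule z -> z*z + c starting from z = 0.

    If |z| ever exceeds 2 the orbit escapes, so c is outside the set;
    max_iter bounds how much 'derived knowledge' we bother to compute.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))       # the origin never escapes
print(in_mandelbrot(1 + 0j))  # 0 -> 1 -> 2 -> 5 ... escapes
```

The presupposition here is the rule and the starting point; the conclusions (which points belong to the set) then follow inevitably, exactly in the sense Bach describes.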
00:04:42
Speaker
Of course, that doesn't make it arbitrary. It means that if you make that presupposition, you get to certain types of conclusions that are somewhat inevitable. And there is a limited space of formal theories that you can make. So there's a set of formal languages that we can explore that is limited.
00:04:58
Speaker
So for instance, when we define numbers, a number is, I think, best understood via a successor labeling scheme. And there are a number of such labeling schemes that we can use, and it turns out that they have similar properties and we can map them onto each other. So we have several definitions for the natural numbers, and then we can build the other types of numbers based on this initial number theory.
00:05:21
Speaker
And it turns out that other parts of mathematics are equivalent to theorems that we discover in number theory, or many of them are. So it turns out that a lot of mathematics is looking at the same fractals in different contexts. In a way, mathematics is about studying simple games, and all languages are. I think mathematics is the set of all languages.
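As an illustrative aside (a sketch of the idea, not something shown in the episode), the "labeling scheme" view of numbers fits in a few lines: define the naturals purely by a zero label and a successor rule, then map that scheme onto Python's built-in integers to check that the two definitions agree.

```python
ZERO = ()                      # the empty label plays the role of 0

def succ(n):
    """Successor: wrap the previous label in one more layer."""
    return (n,)

def to_int(n):
    """Map the successor-labeling scheme onto Python's integers."""
    count = 0
    while n:
        n = n[0]
        count += 1
    return count

def add(m, n):
    """Peano-style addition: m + 0 = m, and m + S(k) = S(m) + k."""
    while n:
        m = succ(m)
        n = n[0]
    return m

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # both schemes agree that 2 + 3 = 5
```

The point of the exercise is Bach's: the two labeling schemes have the same structure, so a mapping between them preserves everything we can say in either.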
00:05:41
Speaker
You made an interesting statement that knowledge is something that only exists in the mind, well, in a mind. So I suppose you're contrasting that with information. So how would you think differently about knowledge or information? How would you draw that distinction if you do?
00:05:57
Speaker
I think that knowledge is applied to a particular domain. So it's regularities inside of a domain, and this domain needs to be given. And the thing that gives it is the stance of the agent that is looking at the domain. To have knowledge, you need to have a domain that the knowledge is about, and the domain is given by an observer that is defining the domain.
00:06:20
Speaker
And it's not necessarily a very hard condition in terms of the power of the mind that is involved here. So I don't require a subject with first
How Do Models Represent Reality?
00:06:29
Speaker
person consciousness and so on. What I do require is some system that is defining the domain.
00:06:36
Speaker
So by a mind, you mean something that is defining the terms or the domain, or in some way sort of translating some piece of information into something with more semantic meaning? I think a mind is a system that is able to predict information by coming up with a global model, thereby making the information explainable.
00:06:58
Speaker
Okay. So predict information, meaning there's some information stream of input and a mind is able to create models for that stream of input to predict, you know, further elements of that stream.
00:07:14
Speaker
At the moment, I understand a model to be a set of variances, which are parameters that can change their value. Each of them has a range of admissible values. And then you have invariances, which are computable relationships between these values that constrain the values with respect to each other. And the model can be connected to variances that the model itself is not generating,
00:07:38
Speaker
which can be parameters that enter the system from the universe, so to speak, from whatever pattern generator is outside of that system. And the system can create a model to make the next set of values that are being set by the outside predictable. And I think this is how our own perception works.
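A minimal sketch of this picture (my toy illustration, not code from the episode): a "model" holds a variance (a parameter with a range of admissible values) and an invariance (a computable relationship), and uses them to predict the next values that are set from outside.

```python
class ProportionalModel:
    """Toy model: one variance `a` constrained by the invariance y = a * x.

    The x and y values arrive from outside ('the universe'); the model
    only infers the constraint and uses it to predict future values.
    """

    def __init__(self, admissible=(0.0, 10.0)):
        self.admissible = admissible  # range of admissible values for a
        self.a = None                 # the variance, initially unknown

    def observe(self, x, y):
        a = y / x                     # infer the invariant ratio
        lo, hi = self.admissible
        if not lo <= a <= hi:
            raise ValueError("observation violates the admissible range")
        self.a = a

    def predict(self, x):
        return self.a * x             # apply the invariance to new input

m = ProportionalModel()
m.observe(2.0, 6.0)                   # externally supplied pattern
print(m.predict(4.0))                 # predicts 12.0
```

This is of course far simpler than perception, but it has the three ingredients Bach names: a variance, an invariance constraining it, and external values that the model makes predictable.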
00:07:57
Speaker
So in terms of human beings, truth is purely mind dependent. It requires a human being, for example, a brain, which is a substrate that can encode information, which generates models. It's not mind dependent because it's going to be the same for every mind with the same properties in terms of information processing and modeling capacity. You mean it's not brain dependent? It is not subjective. It's not subjective. Yes.
00:08:26
Speaker
So for instance, the knowledge that we derive while studying mathematics is not subjective knowledge. It's knowledge that flows out of the constraints of the domains and our ability to recognize these constraints.
00:08:39
Speaker
So how does this stand in relation to accounts of truth as being third-person, out-there, floating-in-the-universe truths, when we bring in the dependence on models and minds? The first person is a construct to begin with, right? The first person is not immediately given. It's introduced into the system, in our case as a set of priors. Then you're stuck with it until you deconstruct it again.
00:09:05
Speaker
But once we have understood what we are, belief is no longer a verb, right? There is no relationship between the self and the set of beliefs anymore. You don't change by changing your beliefs about a certain domain. You just recognize yourself as a system that is able to perform certain operations. And every other system that is able to perform the same operations is going to be equivalent with respect to modeling the domain.
00:09:31
Speaker
I think Lucas is maybe getting at a sort of claim that there's a more direct access to knowledge or information that we as a subjective agent are directly perceiving, right? There are things that you infer about the world and there are things that you directly experience about the world as an observer in it. So I have some sensory stream, I have access to the contents of my mind at some level, and this is very direct knowledge.
00:09:57
Speaker
Well, there's a feeling that I don't have to do a bunch of modeling to understand what's going on. And there's an interesting question of whether that's true or not, like to what degree am I modeling things all the time? And that's just happening at a subconscious level.
00:10:11
Speaker
Yeah. So a primitive version of that would be like how in Descartes epistemology, he starts off with awareness. You start off with being aware of being aware. So that might include some kind of subconscious modeling, though in terms of direct experience, it doesn't really seem like it requires a model. It's kind of like a direct apprehension, which may be structured by some kind of modeling.
00:10:31
Speaker
Of course, Descartes is telling a story when he writes all this. He is much, much further along when he writes down the story, right? This is not something that he started writing as a two-year-old, working his way forward along that line. Instead, he starts out in a highly aware state and then backtracks and tries to retrace the steps that he had possibly taken to get where he was.
00:10:53
Speaker
When we have the impression that parts of our modeling is immediately given, it means that we don't have any knowledge about how it works. In other words, we don't have a model that we can talk about or that we unconsciously access before we can talk about it. And when we become aware of the way in which we construct representation, we get closer to what many Eastern philosophers would call enlightenment, that our ideas become
00:11:20
Speaker
visible in their nature as representations. That we understand that everything that we perceive as real is in fact a representation happening inside of our mind, and we become aware of the mechanisms that construct these representations and that allow us to arrive at different representations if we parameterize these processes differently.
00:11:40
Speaker
Yeah, so I'm interested in this statement in comparison to an earlier statement you made about what is not subjective. When you talk about the construction of, so there are a whole bunch of things that I experience as an aware agent. The claim that you're making, which I agree with, is that pretty much all of that is something that I've constructed as something to experience based on some level of sensory input, but heavily processed and modeled.
00:12:05
Speaker
And it's essentially something that I'm almost dreaming in my mind to create this representation of the world that I'm then experiencing. But then there's also a supposition that we have that there is something that is causing the feeling of objectivity. There's something underlying our individual agents sharing similar perceptions, using similar models, and so on. So there's, of course, some meaning that we ascribe to an external world based on that shared
00:12:35
Speaker
concepts and ideas, sort of things that we attribute this existence to, that are underlying the model that we construct in our mind. So I guess the question is where we draw the line between the subjective thing that we're sort of inventing and modeling in our internal representation and the objective world. Is there such a thing as a purely objective world? Is it all representation at some level? Is there an objective world with lots of interesting things in it that we more or less understand correctly as it is? Where do you sit on that spectrum?
00:13:05
Speaker
Everything that looks real, everything that has the property of being experiential at some level, that can be a subject of experience, is going to be constructed inside of your mind. The issue is that it's not you that is constructing it, because you are a story inside of that mind about that agent; this "I" is not the agent. The "I" is the representation of the agent inside of the system.
00:13:29
Speaker
And what we can say about the universe,
The Role of Mathematics and Computation
00:13:32
Speaker
it seems to be generating patterns. These patterns seem to be independent of what we want them to be, right? So it's not that all of these patterns seem to be originating in our mind, at least this is not a model that makes very good predictions. And there seem to be other minds that seem to be able to perceive similar patterns, but in a different projection.
00:13:52
Speaker
And the projection is being done by the mind that is trying to explain the patterns. So the universe is projecting some patterns at your systemic interface and at your thalamus or at your retina and body surface or wherever you draw the line. And then you have a nervous system behind this or some kind of other information processing system. And our best model is that it's a nervous system that is identifying regularities in these patterns. And part of these regularities are, for instance, things.
00:14:21
Speaker
things don't exist independently of our decomposition of the universe into things. This doesn't mean that the decomposition is completely arbitrary because our mind makes it based on what's salient and what compresses well, but ultimately there is just this unified state vector of the universe that is progressing and you make it predictable by splitting it up into things, into separate entities that you model independently of each other in your mind and they
00:14:46
Speaker
exchange information, thereby changing each other's evolution. And this is how we get to causality and so on. But for instance, causality doesn't exist independently of the decomposition of the universe into things, which has to be done by a mind. Well, I'm curious as to why you say that what there really is, is this unified evolving state vector. I mean, how do you know?
00:15:08
Speaker
I don't. This is the easiest explanation; I have difficulty making it simpler than that. But the separation of the universe into objects seems to be performed by the observing mind. The alternatives don't make sense.
00:15:24
Speaker
What you said seemed to attribute sort of a greater reality to this idea of an evolving state vector or something, than to other descriptions of the same thing. There are lots of ways we can describe whatever it is that we call this system that we're inhabiting, right? We can describe it at the level of objects,
00:15:43
Speaker
at the everyday level, we can describe it at the level of fields and particles. But maybe there's a string theory description, maybe there's a cellular automaton cranking along. Do you feel that one of these descriptions is more true than the others? Or do you feel like these are different descriptions of the same thing? Which is more basic?
00:16:09
Speaker
So the description of the universe as a set of states that changes is the most basic one. And then I can say: it seems that I can take the set of states and organize it as if it were a space, right? It's a certain way to project the state vector.
00:16:26
Speaker
You get to a space, to something that is multi-dimensional, by taking your number line and folding it up, for instance, into a lattice. And you can make this with a number of operators, and then you can have information traveling in this lattice, and then you usually get to something like particles. And if the particles don't have a distinct location and you just describe the tendency of particles, then you have fields. But they all require additional language that you need to introduce, and additional assumptions. And also we find that they're all not true,
00:16:56
Speaker
in the sense that there is a level of description where reality and description fall apart from each other. So, for instance, the idea of describing the universe as a Minkowski space works relatively well at certain scales, but if you go below these scales, or in certain regions of the universe, you get singularities, and you realize you're not living in a space; you can just describe the dynamics at a certain scale, usually, as a space. The same thing is true for particles,
00:17:23
Speaker
where particles are a way to talk about how information travels between the locations that you can discern in the state vector.
00:17:31
Speaker
No, but you should challenge me. So the thing that sometimes frustrates me with philosophers is that they somehow think that the notions that they got in high school physics are objectively given, that they know what electrons are and physicists know what electrons are, and there's a good understanding, and we touch them every day, and then we build information processing on top of the electrons. And this is just not true. It's the other way around, right?
00:17:55
Speaker
I mean, all of these concepts, the more you analyze them and the harder you think about them, the less you understand them, not the more. Take particles: if you ask what an electron is, that is a surprisingly hard answer to give as a physicist, right? An electron is an excitation of the electron field. And what is the electron field? It's a field that has the ability to create electrons.
00:18:18
Speaker
Yes, but when you don't know this, it means you have never understood it, right? The people that came up with these notions either pulled them out of thin air or they understood something that you don't. Right? They saw something in their mind that they tried to translate into a language that people could work with and that allowed them to build machinery, but that did not necessarily convey the same understanding.
00:18:39
Speaker
And that to me is totally fascinating, right? You learn in high school how to make a radio, but you don't get the ideas that were necessary to get somebody the idea, oh, this is what I should be doing to get a radio in the first place, right? This is not enough information to let you invent a radio because you have no idea why the universe would allow you to build radios.
00:18:59
Speaker
And it's not that people experimented and they suddenly got radios and then they tried to come up with a fancy theory that would project electrons into the universe and magnetic fields or electromagnetic fields to explain why that radio would work. They did have an idea on how the universe must be set up and then they experimented and came up with the radio.
00:19:19
Speaker
and verified that their intuition must be correct at some level. And the idea that Wolfram has, for instance, that the universe is a cellular automaton, is given by the idea that the cellular automaton is a way to write down a Turing machine. It's equivalent to a Turing machine. Did you say cellular automata? Yes.
00:19:39
Speaker
Okay. It's a way to write down a Turing machine. Yes. So there are a number of cellular automata which are Turing complete, and the cellular automaton is a general way to describe computational systems. And since the constructivist turn, I think most agree that every evolving system can be described as some automaton, right? As a finite state machine, or in the unbounded case as a Turing machine.
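To make the cellular-automaton point concrete, here is a sketch (mine, added for illustration) of one update step of an elementary cellular automaton. Rule 110 in particular is known to be Turing complete, so this handful of lines already specifies a universal computational system.

```python
def step(row, rule=110):
    """One update of an elementary cellular automaton.

    Each new cell is looked up in the 8-bit rule table using its
    left/center/right neighborhood; the boundaries are held at 0.
    """
    padded = [0] + list(row) + [0]
    out = []
    for i in range(1, len(padded) - 1):
        idx = padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1]
        out.append((rule >> idx) & 1)
    return out

row = [0, 0, 0, 1, 0, 0, 0]   # a single live cell
print(step(row))              # -> [0, 0, 1, 1, 0, 0, 0]
```

Running `step` repeatedly evolves the row; with Rule 110 the resulting patterns can, with suitable encodings, simulate any Turing machine.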
00:20:11
Speaker
Well, that I think is not clear, right? I think it's certainly not the way that physics as it actually operates works, right? We do all this stuff with real numbers and calculus and all kinds of continuous quantities. And it's always been of interest to me. If we suppose that the universe is fundamentally discrete or digital in some way, it sure disguises it quite well in the sense that the physics that we've invented based on so many smooth quantities and
00:20:38
Speaker
differentials and all these things, is incredibly effective. And even quantum mechanics, which has this fundamental discreteness to it (it is literally quantum mechanics), is still chock-full of real numbers and smooth evolutions and ingredients that
00:20:53
Speaker
there's no obvious way to take quantum mechanics, or any of the fundamental physics theories that we know, and make a nice digital version of them that turns into the analog, continuous version that we know and love, which is an interesting thing. You would think that that would be simpler if the world was fundamentally discrete and quantum.
00:21:17
Speaker
So I don't know if that's telling us anything; it may just be historical, that these are the first tools that we developed and they worked really well. And so that became kind of the foundation for our way of thinking about how the world works, with all sorts of smooth quantities. And certainly
00:21:33
Speaker
discrete quantities when you have a large enough number of them can act smoothly. But it's nonetheless, I think, a little bit surprising how basic and effective all of that mathematics appears to be if it's totally wrong, if it's essentially, fundamentally, the wrong sort of mathematics to describe how reality is actually working at the baseline.
00:21:53
Speaker
It's almost correct. You remember these stories that Pythagoras was very opposed to introducing irrational numbers into the code base of mathematics. There are even stories, which are probably not true, that he committed murder to prevent that from happening. And I think that he was really onto something. The world was not ready yet.
00:22:12
Speaker
And the issue, I think, that he had with irrational numbers is that they're not constructive. And until quite recently, the languages of mathematics didn't have the distinction between value and function. In classical semantics, a value and a function are basically equivalent, because if you can compute the function, you have a value. And if the function is sufficiently defined, you can compute it, right? So, for instance, there is no difference between pi as a function, as an algorithm that you would need to compute,
00:22:41
Speaker
and pi as a value in this classical semantics. But in practice, there is. You cannot determine arbitrarily many digits of pi before your sun burns out. There's a finite number of digits of pi that every observer in our universe will ever know. And that also means that there can be no system that we can build, or that anything with similar limitations in this regard can ever build, that relies on knowing the last digit of pi when it performs an operation.
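The "pi as a function, not a value" point can be illustrated with a short sketch (my illustration, not something from the episode) using Machin's formula with plain integer arithmetic: the function hands you as many digits as you ask for, but there is never a last one.

```python
def arctan_inv(x, one):
    """Fixed-point arctan(1/x) via the series 1/x - 1/(3x^3) + 1/(5x^5) - ..."""
    power = one // x
    total = power
    x2 = x * x
    n, sign = 3, -1
    while power:
        power //= x2
        total += sign * (power // n)
        sign, n = -sign, n + 2
    return total

def pi_digits(n):
    """pi to n decimal places, as the integer floor(pi * 10**n).

    Uses Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239).
    """
    one = 10 ** (n + 10)          # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return pi // 10 ** 10         # drop the guard digits

print(pi_digits(10))  # 31415926535
```

Each call terminates and yields a definite prefix of pi; knowing "the last digit" would require the loop that can never finish, which is exactly the constructivist objection.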
00:23:09
Speaker
So if you use a code base that assumes that some things in physics knew the last digit of pi before the universe went on into its next state, you are contradicting this idea of constructive mathematics. And there was some hope that this could be somehow recovered, because mathematicians before this constructive turn just postulated this infinite number generator, or this infinite operation performer, as a black box.
00:23:37
Speaker
You would introduce it in your axiomatic system, for instance, via the axiom of choice. And so you postulate your infinite number calculator, the thing that is able to perform infinitely many operations in a single step, or read infinitely many arguments into your function in a single step instead of in infinitely many. And if you assume that this is possible, for instance, via the axiom of choice,
00:24:02
Speaker
you get nice properties, but some of them are very paradoxical. You probably know Hilbert's hotel, right? A hotel that has infinitely many rooms and is fully booked. And then a new person arrives, and you just ask everybody to move one room to the right, and now you have an empty room. And if you have infinitely many buses come, you just ask everybody to move into the room that has twice their current number.
00:24:24
Speaker
And in practice, this is not going to work, because in order to calculate twice your number, you need to store it somehow, and if the number gets too large, you are unable to do that. But it should also occur to you that if you have such a thing as Hilbert's hotel, you're not looking at a feature; you're probably looking at a bug.
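Bach's storage point can be sketched in code (a toy illustration, not from the episode): the two Hilbert-hotel reassignments are pure functions on unbounded integers, but a machine with fixed-width registers for room numbers eventually fails to compute 2n.

```python
def shift(n):
    return n + 1          # guest in room n moves right: room 1 is freed

def double(n):
    return 2 * n          # guest in room n moves to room 2n: odd rooms freed

rooms = range(1, 1001)    # any finite prefix of the hotel we care to check
shifted = {shift(n) for n in rooms}
doubled = {double(n) for n in rooms}

print(1 in shifted)                       # False: room 1 is now free
print(all(r % 2 == 0 for r in doubled))   # True: every odd room is free

# The catch Bach points at: to move to room 2n you must *store* 2n.
# With 64-bit room registers, doubling wraps around once n >= 2**63,
# so the trick only works on an idealized, unbounded machine.
```

Python's arbitrary-precision integers let us simulate any finite prefix, but no physically realizable system carries the scheme through for all rooms at once.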
00:24:41
Speaker
You have just invented a cornucopia: you get something from nothing, or you get more from little, and that should be concerning. And so when you look into this in detail, you get to Gödel's proof and to the halting problem, which are two different ways of looking at the issue that when you state that you can make a proof without actually performing the algorithm of that proof, you run into contradictions.
00:25:06
Speaker
That means you cannot build a machine in mathematics that runs the semantics of classical mathematics without crashing. That was Gödel's discovery. And it was a big shock to Gödel, because he strongly personally believed in the semantics of classical mathematics. And it's confusing to people like Penrose, who thinks: I can do classical mathematics, computers cannot, so my mind must be able to do something that computers cannot do. Right? And this is obviously not true. So what's going on here? I think what happens is that we are looking at too many parts to count.
00:25:36
Speaker
When you look at the world of too many parts to count that are very, very large numbers, you sometimes get dynamics that are convergent,
Understanding Reality: Classical vs Computational Mathematics
00:25:44
Speaker
right? That means that if you perform an operation on a trillion objects or on a billion objects in this domain, the result is somewhat similar. And the domain where you have these operators and the way these operators converge, this is geometry by and large.
00:25:59
Speaker
And this is what we are using to describe the reality that we are embedded in, because the reality that you and me are embedded in is necessarily composed of too many parts to count for us, because we are very, very large with respect to our constituent parts; otherwise we wouldn't work.
00:26:16
Speaker
Yeah. Again, if there are enough of something and you're trying to model it, then real numbers will work well. But it's not real numbers; it's just very, very large numbers. That's all. Right, but the model is real numbers. I mean, that's what we use, right? When we do physics.
00:26:31
Speaker
Yes, but it's totally fine when you do classical mechanics. It's not fine when you do foundational physics, because at this point you are checking out a code base from the mathematicians that has some comments by now. And the comments say, this is actually not infinite. When you want to perform operations in there and when you want to prove the consistency of the language that you're using, you need to make the following changes, which means that pi is now a function.
00:26:58
Speaker
And you can plug this function into your nearest sun and get numbers, but you don't get the last digit. So you cannot implement your physics, or the library of physics that you rely on, in such a way that it requires having known the last digit of pi. It just means that for sufficiently many elements that you're counting, this is going to converge to pi. That's all there is.
00:27:19
Speaker
Right. No, I understand the motivation for thinking that way. And it's interesting that that's not the way people actually think in terms of fundamental physics, in terms of quantum mechanics or quantum gravity or anything else. Yeah, but this is their fault. It's not our fault. We don't have to worry about them. Fair enough. So I think the claim that you're making would be that, you know, there's a question: if a hundred years from now the final theory of physics is discovered and they have it available on a t-shirt,
00:27:49
Speaker
is that theory going to essentially contain a bunch of natural numbers and counting of discrete elements, or is it going to look... Yes, of course. Integers are all that is computable, right? The universe is made from integers. So this is a strong claim about what's ultimately going to happen in physics, I think, that you're making, which is an interesting one
00:28:11
Speaker
and is very different from what most practicing physicists thinking about the foundations of physics are taking as their approach. Yes, but there is a growing number of physicists who seem to agree with this. So, very informally, when I was in Banff at a foundational physics conference, I made a small survey among the foundational physicists. I got something like 200 responses.
00:28:35
Speaker
And I think 6% of the assembled physicists were amenable to the idea of digital physics or were proponents of digital physics. But I think it's really showing that this is gaining traction, because it's a relatively new idea in many ways.
00:28:51
Speaker
So if I were trying to summarize this in a few statements, and correct me if I got any of this wrong: because you have a computationalist metaphysics, Joscha, if reality contains continuous numbers, then the states of that large state vector can't be computed. Is that the central claim?
00:29:13
Speaker
So the problem is that when you define a mathematical language, any kind of formal language, that assumes that you can perform computations with infinite accuracy, you run into contradictions. This is the conclusion of Gödel. And that's why I'm stuck with computation. I cannot go beyond computation. Hypercomputation doesn't work because it cannot be defined constructively. I cannot build a hypercomputer that runs hypercomputational physics.
00:29:41
Speaker
So I will be stuck with computation. But it's not a problem because there is nothing that I cannot do with it that I could do before. Everything that worked so far is still working, right? Everything that I could compute in physics until now is still computable.
00:29:56
Speaker
Okay, so to say "digital" is to say... what is that to say? It means that when you look at physics and you make observations, you necessarily have to make observations with a finite resolution. And when you describe the transitions between the observables, you will have to have functions that are computable.
00:30:17
Speaker
And now there's the question: is this a limitation that only exists on the side of the observer? Or is this also a limitation that would exist on the side of the implementation? And now the question is how God, or whatever hypothetical circumstance, could perform this. So the answer is: if the universe is implemented in any kind of language, then it's subject to the same limitations.
00:30:41
Speaker
The only question is: could it be possible that the universe is not implemented in some kind of language? But then, what does it mean to be implemented? We can no longer talk about it. It doesn't really make sense to talk about such a thing.
00:30:56
Speaker
We would have to talk about it in a language that is inconsistent. So when we want to talk about a universe that is continuous, we would have to define what continuity means in a way that would explain how the universe is performing continuous operations, at least at some level. And the assumption that the universe can do those leads into contradictions.
00:31:16
Speaker
So we would have to basically deal with the fact that we have to cheat at some point. And I think what happened before constructive mathematics was that people were cheating. They were pretending that they could feed infinitely many arguments into their functions, but they were never actually doing it. It was just giving them elegant notations.
00:31:33
Speaker
And these elegant notations are specifications for things that cannot be implemented. In practice that was not a problem, because people just found workarounds, right? They would do something slightly different from what the notation said. And this is something that I found very irritating as a kid when I did mathematics, because sometimes the notation that I was using in mathematics was so unlike what I had to do in my own mind,
00:31:56
Speaker
or what I had to do on my machine to perform a certain operation. And I didn't really understand the difference. I didn't understand the difference between classical and computational mathematics, and why it existed. And in practice, the difference is that classical mathematics is stateless: it assumes that everything can be computed without going from state to state.
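The gap between classical notation and what a machine actually does can be seen in a few lines of Python; this is only an editorial illustration, not anything from the conversation. The notation "0.1 + 0.2" looks like exact real-number addition, but the machine performs finite-precision binary arithmetic; to recover exactness, one has to move to a constructive representation such as rationals.

```python
# The notation "0.1 + 0.2" reads as exact real arithmetic,
# but the machine does finite-precision binary floating point.
a = 0.1 + 0.2
print(a)          # 0.30000000000000004
print(a == 0.3)   # False

# A constructive alternative: exact rational arithmetic,
# where every operation is a concrete state-to-state step.
from fractions import Fraction
b = Fraction(1, 10) + Fraction(2, 10)
print(b == Fraction(3, 10))  # True
```

The float result is not "wrong"; it is the computable approximation that the elegant notation quietly stands in for.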
00:32:16
Speaker
Yeah, but it's beguiling, because you think there's this number pi and you feel like it exists, and whether or not anybody computes the 10-to-the-50th digit of it, it has some value. If you could compute it, it would have some value, and
00:32:30
Speaker
there's no sort of freedom in what that value was going to be. That value is defined as soon as you make the definition of pi. But go back to this moment when you had this idea that pi is there for you in all its infinite glory. It's certainly not an idea that you were born with, right? You were hypnotized into it by your teachers, and they by their teachers. And so somebody at some point made this decision, and it's not obvious.
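The claim that pi's digits are fixed as soon as pi is defined can be made constructive: a finite program yields any finite prefix of the decimal expansion, even though no computation ever produces the completed infinite object. A sketch using Gibbons' unbounded spigot algorithm (the function name is illustrative):

```python
def pi_digits(n):
    """First n decimal digits of pi, via Gibbons' unbounded spigot algorithm.
    Uses only integer arithmetic: each digit is forced by the definition,
    no infinite-precision real numbers required."""
    out = []
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while len(out) < n:
        if 4 * q + r - t < m * t:
            # The next digit is now determined; emit it and rescale.
            out.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # Consume one more term of the underlying series.
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return out

print(pi_digits(10))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Every prefix is computable; "all of pi" is only ever the specification, never a completed output.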
00:32:58
Speaker
No, it's sort of a relic of Plato, I guess. Mathematicians are obviously split on this in all kinds of ways, but there are a lot of mathematicians who are Platonists and really very much think that pi meaningfully exists in all its glory, in that way. And I think most physicists... well, I don't know, actually. There are probably surveys that we could consult, but I haven't, about the degree of Platonism among physicists. I suspect it varies pretty widely, actually.
00:33:26
Speaker
Now take a little step back from all these ideas of mathematics that you may or may not have gotten in school, and your teachers may or may not have gotten them in school, but look back at what you know, your computer, and how your computer would make sense of the universe if you could implement AI on it.
00:33:43
Speaker
This is something that we can see quite clearly. Your computer has a discrete set of sensory states that it could make sense out of, and it would be able to construct models, and it can make these models at a resolution that is, if you want, higher than the resolution of our own mind. It can make observations at a higher resolution than our senses.
00:34:06
Speaker
And there is basically no limit in the sense of it not being able to surpass the kind of models that we are building. And yet everything in the mind of that system would necessarily be discrete, right? So if the system conceives of something that is continuous, if that system conceives of a language that is continuous, and we can show that this language doesn't work, that it is not consistent, then the system must be mistaken, right? It's pointing indexically at some functions that perform some kind of effective geometry.
00:34:36
Speaker
And an effective geometry is one like the one in a computer game, right? It looks continuous, but it certainly is not; it's just continuous enough. Relative to conventional physics, what does a computational view of physics change? Nothing. If the conventional physicist doesn't make that mistake, it's exactly the same. It's just that conventional physics invites making certain kinds of mistakes that might lead to confusion.
00:35:05
Speaker
But in practice, it doesn't change anything. It just means that the universe is computable. And of course, it should be because for it to be computable just means that you can come up with a model that describes how adjacent states in the universe are correlated. OK, so this leads into the next question. So we're going to have to move along a little bit faster and more mindfully here. You've mentioned that it's tempting to think of the universe as part of a fractal. Can you explain this view and what is physics according to this view?
00:35:31
Speaker
So I think that mathematics is, in some sense, something like a fractal. The natural numbers are similar to the generator function of the Mandelbrot fractal, right? You take Peano's axioms and, as a result, you get all these dynamics, if you introduce a few suitable operators. And you can define the space of natural numbers, for instance, using addition, or you can do it using multiplication.
00:35:56
Speaker
In the case of multiplication, you will need to introduce the prime numbers, because you cannot derive all the natural numbers just by multiplication with 1 and 0. You need more elements: the basic elements of this multiplication space would be the prime numbers. And now there is the question: is there an elegant translation between the space of multiplication and the space of addition? It maybe has to do with the Riemann zeta function, right? We don't know that.
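The two "generator sets" described here can be shown in a small sketch: under addition, the number 1 generates all the naturals; under multiplication, you need the primes as basic elements, and every natural number greater than 1 factors uniquely into them. A minimal illustration (helper names are mine):

```python
from math import prod

def primes_upto(n):
    """Primes <= n by trial division: the basic elements of the
    multiplicative space of natural numbers."""
    ps = []
    for c in range(2, n + 1):
        if all(c % p for p in ps):  # divisible by no smaller prime
            ps.append(c)
    return ps

def factor(n):
    """Express n > 1 as a product of the multiplicative generators
    (unique factorization)."""
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

print(primes_upto(20))  # [2, 3, 5, 7, 11, 13, 17, 19]
print(factor(360))      # [2, 2, 2, 3, 3, 5]
assert prod(factor(360)) == 360
```

In the additive space a single generator (1) suffices; in the multiplicative space the generator set is infinite, which is part of what makes the translation between the two spaces hard.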
00:36:22
Speaker
But essentially we are exploring a certain fractal here, right? Now, with respect to physics, there is the question: why is there something rather than nothing? And why does this something look so regular? We can come up with anthropic arguments, right? If nothing existed, then nobody would be asking the question. Also, if that something were irregular, we probably couldn't exist. The type of mind that we are, and so on, requires large-scale regularity. But it's still very unsatisfying, because
00:36:51
Speaker
this fact that something exists rather than nothing should have an explanation that we want to make as simple as possible. And the easiest explanation is that maybe existing is the default. So everything that can be implemented would exist, and maybe everything that exists is the superposition of all the finite automata. And as a result, you get something like a fractal, and in this fractal there is us.
00:37:14
Speaker
This idea that the universe can be described as an evolving cellular automaton is quite old. I think it goes back to Zuse's Rechnender Raum, and it's been formulated in different ways by, for instance, Fredkin and Wolfram and a few others.
00:37:29
Speaker
But again, it's equivalent to saying that the universe is some kind of automaton, some kind of machine that takes a starting value or starting configuration, or any kind of configuration, and then moves to the next configuration based on a stable transition function.
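The transition-function picture can be made concrete with a toy 1D "elementary" cellular automaton in Wolfram's rule numbering; this is an editorial sketch of the concept, not anything physically serious:

```python
def step(cells, rule=110):
    """One update of a 1D elementary cellular automaton (periodic boundary).
    Each cell's next state depends only on its local neighborhood, via the
    same fixed ("stable") transition function, encoded in the rule number."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from one live cell and iterate the same rule forever:
# configuration -> next configuration -> next configuration ...
state = [0] * 31
state[15] = 1
for _ in range(5):
    print("".join(".#"[c] for c in state))
    state = step(state)
```

The point of the analogy is only this shape: a configuration, plus one unchanging local rule, applied over and over.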
00:37:46
Speaker
But there's a difference between saying that the universe is a cellular automaton and saying that the universe is all possible cellular automata. That view is held, sort of, by Max Tegmark and, I think, Jürgen Schmidhuber, and I think I'm forgetting a name. But you're talking more about that version, aren't you?
00:38:05
Speaker
I'm not sure. It just means that Wolfram's project of enumerating all the cellular automata and taking the first one that produces a universe that looks like ours might not work.
Exploring Consciousness and Perception
00:38:16
Speaker
It could be that you need all of them in a certain enumeration, but it still is a cellular automaton. It's just one that is much longer than Wolfram would hope.
00:38:26
Speaker
You're saying all possible cellular automata is the same as one cellular automaton? Yes, it would be one cellular automaton that is the result of the superposition of all of them, which probably means that long cellular automata only act relatively rarely, so to speak. And I have no idea how to talk about this in detail or how to model this, so this is a very vague intuition. Okay, so could you describe the default human state of consciousness?
00:38:55
Speaker
Consciousness, I think, is the control model of our attention, and human consciousness is characterized by attention to features that are taken to be currently the case, which we keep stable in this way so that we can vary other parameters, make conditional variance in our perception and so on, and form indexed memories for the purpose of learning. Then we have access consciousness, which is a representation of the attention relationship that we have to these objects.
00:39:25
Speaker
So we know that we are paying attention to them, and we know in which way we are paying attention to them: for instance, whether we single out sensory features, or high-level interpretations of sensory features, or hypotheses, or memories, and so on. And this is also part of the attention representation. And third, we have reflexive consciousness. And this is necessary because the processes in our neocortex are self-organizing to some degree.
00:39:49
Speaker
And this means that this attentional process needs to know that it is the attentional process. So it's going to make a perceptual confirmation of the fact that it is indeed the attentional process. And this makes consciousness reflexive: we basically check back, am I spacing out, or am I still paying attention?
00:40:08
Speaker
Am I the thing that pays attention? And this loop of going back between the content and the reflection is what makes our consciousness almost always reflexive. When it stops being reflexive, we drift off and often fall asleep, like literally.
00:40:25
Speaker
Would you say that that sort of self-reflexive consciousness is experienced by non-human animals? Do you consider a mouse, which probably does not have that level of awareness, not conscious? Or are you talking about different aspects of consciousness, some of which humans have and some of which other creatures have?
00:40:47
Speaker
I suspect that consciousness is not that complicated. What's complicated is perception. Setting up a system that is able to make a real-time adaptive model of the world, that predicts all sensory features in real time, makes a global universe model, figures out which portion of the universe model it's looking at right now, and swaps this in and out of working memory as needed: that's the hard part.
00:41:09
Speaker
And once you have that and you have a problem that is hard enough to model, then the system is going to model its own relationship to the universe and its own nature, right? So you get a self-model at that level. And I think that attentional learning is necessary because just correlative learning is not working very well.
00:41:29
Speaker
The machine learning algorithms that we are currently using largely rely on massive backpropagation over many layers, and the brain is not such a neatly layered structure; it has links all over the place. And also, we know that the brain is more efficient: it needs far fewer instances of observing a certain thing before it makes a connection and is able to make inferences on that connection. So you need to have a system that is able to
00:41:57
Speaker
basically single out the parts of the architecture that need to be changed. And this is what we call attention. When you are learning, you have a phase where you do simple associative learning, usually right after you're born and before, when you form your body map and so on.
00:42:12
Speaker
And after the initial layers are formed, after the brain areas are somewhat laid out and initialized and connected to each other, you do attentional learning. And this attentional learning requires that you decide what to pay attention to in which context and what to change and why and when to reinforce it and when to undo it.
00:42:31
Speaker
And this is something that we are starting to do now in AI, especially with the transformer, where attention suddenly plays a role and we make models of the system that we are learning over. So in some sense, the attention agent is an agent that lives inside of a learning agent. And this learning agent lives inside of a control agent.
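The attention mechanism alluded to here can be sketched, in heavily simplified form, as scaled dot-product attention, the core operation of the transformer; the shapes and names below are illustrative, not a description of any specific model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query softly selects which inputs to 'pay attention' to:
    a softmax over query-key similarities weights a mixture of values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax rows
    return weights @ V, weights                       # weighted mixture of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries, dimension 8 (illustrative sizes)
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.shape)     # (4, 8) (4, 6)
```

Each row of `w` is a probability distribution: a graded decision about which parts of the input to attend to, which is the loose parallel to the attentional selection discussed above.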
00:42:52
Speaker
And the control agent is directing our relationship to the universe, right? You notice that you're not in charge of your own motivation. You notice that you're not directly in charge of your own control, but what you can do is you can pay attention to things. And the models that you generate while paying attention are informing the behavior of the overall agent. And the more we become aware of that, the more this can influence our control.
00:43:16
Speaker
So Lucas was asking about the default state of consciousness, which is: we have this apparatus, which we've developed for a very particular reason, survival, and modeling the universe, or modeling the world well enough to predict how it's going to work and how we should take action to enhance our survival, or whatever kind of auxiliary goals we might have.
00:43:38
Speaker
But it's not obvious that that mode of consciousness or mental activity is optimal for all purposes. It's certainly not clear that, if I want to do a bunch of mathematics, the type of mind that enables me to survive well in the jungle is the right type of mind to do really good mathematics. It's sort of what we have, and we've
00:44:01
Speaker
shunted it over to the purpose of doing mathematics. But it's easy to imagine that the mind we've developed, in sort of its default state, is pretty bad at doing a lot of things that we might like to do once we choose to do things other than survive in the jungle. So I'm curious what you think about how different a machine neural architecture could be if it's trying to do very different things than what we've developed to do.
00:44:29
Speaker
I suspect that, given a certain entanglement, that is, the ability to resolve the universe at a certain spatial and temporal resolution, and given sufficient processing resources and speed, there is going to be something like an optimal model that we might end up with. And humans are not at the level of the optimal model. Our brain is still a compromise, I think.
00:44:52
Speaker
So we might benefit from having a slightly larger brain. I think that the differences in human performance are differences in innate attention. So talents are basically differences in innate attention. And you know how they say you recognize an extroverted mathematician: he looks at your shoes.
00:45:11
Speaker
And what this alludes to is that good mathematicians tend to have Asperger's, which means that they are using parts of their brain that normal humans use for social cognition and for worrying about what other people think about them, for thinking about abstract problems.
00:45:27
Speaker
And often I think that the focus on analytic reasoning is a prosthesis for the absence of normal regulation. That is, if you are a nerd and you have as a result a slightly different interface to the social environment than the normies, you as a child don't get a manual on how to deal with this.
00:45:45
Speaker
And so you will act on your impulses, on your instincts. And unless you are surrounded by other nerds, you tend to fail in your social interactions and you will experience guilt, shame, and despair. And this will motivate you to understand what's going on. And because as a child, you typically don't have any tools to reverse analyze yourself or even notice that there is something to reverse engineer, you will turn around and reverse engineer the universe.
00:46:11
Speaker
Yeah, I suspect if we were able to have the experience and maybe someday we will with some sort of mind to mind interface of sort of seeing, actually experiencing someone else's worldview and processing reality through their apparatus. I suspect we would find that it's much more different than we suppose on an everyday level. We sort of assume that people are understanding and seeing the world more or less as we are with some variations.
00:46:39
Speaker
And I would love to know to what degree that's true. You read these, or I read these, articles about some people who cannot imagine visual images. And there are people, I compare with my wife: when she imagines a visual image in her mind, it is right there in detail. She can experience it.
00:46:58
Speaker
Sort of like I would experience a dream or physical reality when I think of something in my mind, it's quite vague. And then there are other people who there's sort of nothing there. And that's just one aspect of cognition. I think there would be a fun project to really map out.
00:47:14
Speaker
what that sort of range in qualia really is among human minds. I suspect that it's much, much bigger than we really appreciate and might cause us to have a little more understanding for other people if we understood how literally differently they experience the world than we do.
00:47:34
Speaker
Or it could be that it's very similar. That would be fascinating too. The degree, I mean, we do all function fairly well in this world. So there has to be some sort of limit on how differently we can perceive and process it. But yeah, that fascinates me how big that range is.
00:47:52
Speaker
I have a similar observation with my own wife, who's an artist. She is very good at visualizing things, but she doesn't notice external perception as much. And compared with a highly perceptive person, I think I have only about 20 percent of their sensory experience, because I'm mostly looking at the world through a conceptual reflection. I have aphantasia, so outside of dreams or hypnagogic states I cannot visualize,
00:48:19
Speaker
or don't get conscious access to my visualizations. I can draw them, so I can make designs. But for the most part, I cannot make art.
00:48:29
Speaker
because I cannot see these things. I can only do this in a state where I'm slightly dreaming. So it's also interesting to notice the difference. It's not that I focus on this and it gets more and more plastic and at some point I see it, it's more like it's being shifted in from the side. It's similar to when you observe yourself falling asleep and you manage to keep a part of yourself online to track this.
00:48:52
Speaker
without preventing you from going to sleep, which is sometimes hard, right? But if you pull it off, you might notice how suddenly the images are there and they more or less come from the side, right? It's something is shifting in and it's taking over.
00:49:08
Speaker
I think that there is a limit, because Anthony brought this up, to how much it makes sense to understand the world in the way that you operate. It's just that we often tend not to be completely aware of it during the first 40 years or so.
00:49:23
Speaker
So it takes time to reverse engineer the way in which you operate. For instance, it took me a long time to notice that the things that I care about are largely not about me as an individual. So, I mean, of course you all know this because it's a moral prerogative that you don't only care about yourself, but also about the greater whole and future generations. But what does that actually mean? And you are basically an agent that is modeling itself as part of a larger agency.
00:49:49
Speaker
And so you get a compound agent and a similar thing is true for the behaviors in your own mind. They are not seeing themselves as all there is, but they at some level model that they are part of a greater whole of some kind of processing hierarchy and control hierarchy in which they can temporarily take over and run the show.
00:50:08
Speaker
But then they will be taken over by other processes, and they need to coordinate with them, and they're part of that coordination. And this goes down to the lowest levels of the way in which this is implemented. And ultimately you realize that there are information processing systems on this planet
00:50:24
Speaker
which coordinate, for instance, an organism, or the cooperation between organisms, or the cooperation within ecosystems. And this is the purpose of our cognition in some sense. It's the maintenance of complexity, so we can basically shift the bridgehead of order into the domain of chaos. And this means that we can harvest negentropy in regions where dumber control systems cannot do that.
00:50:48
Speaker
That's the purpose of our cognition. And to understand our own role in it and the things that matter is the task that we have. And there are some priors that we are born with, so ideas about the social roles that we should be having, the things that we should care about and should not. And over time, we replace these priors, these initial reflexes with concrete models. And once we have a model in place, we turn off the reflex.
00:51:11
Speaker
You don't think that the expression of proxies in evolution means that harvesting negentropy isn't actually what life is all about? Because it becomes about other things once we gain access to proxy rewards and rational spaces and self-reflection.
00:51:28
Speaker
I think it's an intermediate phase where you think that the purpose of existence is, for instance, to amass knowledge, or to have insight, or to have sex, or to love and be loved, and so on. And this is just before you understand what this is instrumental to. Right. But, like, you start doing those things and they don't contribute to negentropy.
00:51:49
Speaker
They do. If they are useful for your performance as a living being, or for the performance of the species that you're part of, or the civilization, or the ecosystem, then they serve a certain role. If not, then they might still be statistically useful. If not, then they might be a dead end.
00:52:05
Speaker
Yeah, that makes sense. I mean, life is quite good at extracting negentropy, or information, from its hidden troves supplied by the sun.
Life's Role in Information Processing
00:52:16
Speaker
I don't think we're ever going to compete with plankton, at least not anytime soon, maybe in the distant future. So it seems like a little bit of an empty
00:52:25
Speaker
If you have a pool, you are competing with plankton, right? We are able to settle on surfaces that simpler organisms cannot settle on. And we're not yet interested in all the surfaces; we are just one species on this planet, right? I don't know, I guess if we beat the plankton at some point, I'm just not that excited about that. I mean, I think...
00:52:45
Speaker
It's true that to sustain ourselves as life, we have to be able to harvest that, transform that information from the form that it's in into an ability to maintain our own homeostasis, fight the second law of thermodynamics, essentially, and maintain a structure that is able to do that. But that's metabolism,
00:53:05
Speaker
and is, I think, sort of the defining quality that life has. But it's hard for me to see that as more than a means to an end, in the sense that it enables all of the activities that we do. But if you told me that someday our species will process this much entropy instead of that much entropy, that's fine, but that doesn't seem terribly exciting to me. Just like discovering a gigantic ball of plankton in space: I wouldn't be that impressed with it. To me, I think there's something
00:53:34
Speaker
more interesting in our ability to create information hierarchies, where there is not just the amount of information, but information structures built on other information structures, which biological organisms do, and which we do mentally with all sorts of civilizational artifacts that we've created. The level of hierarchical complexity that we're able to achieve seems to me more interesting than just the ability to effectively metabolize a lot.
00:54:03
Speaker
I think that as individuals, we don't need to know that. As an individual, we are relatively small and inconsequential. We just need to know what we need to know to fulfill our role. But if the civilization that we are part of doesn't understand that it's in a long-term battle about entropy, then it is in a dire situation, I think. There's so much fuel around. I just think we have to figure out how to unlock it a little bit better, but we are not going to run out of negentropy.
00:54:32
Speaker
No, life is not going to run out of negentropy. It could be that we are running out of the types of food that we require, and the climates that we require, and the atmosphere that we require to survive as a species. But that is not a disaster for life itself, right? Cells are very hard to eradicate once they settle a planet. Even if you have a supervolcano erupting or a meteor hitting the Earth, Earth is probably not going to turn sterile anytime soon.
00:54:59
Speaker
And so from the perspective of life, humanity is probably just an episode that exists to get the fossilized carbon back into the atmosphere.
Aligning AGI with Human Values
00:55:09
Speaker
So let's pivot into AGI, which might serve as a way of locally optimizing the increase of entropy for the Milky Way and, hopefully, our local galactic cluster. So, Yoshua, how do you view the path to aligned AGI?
00:55:29
Speaker
If you build something that is smarter than you, the question is, is it already aligned with you or not? If it's not aligned with you, you will probably not be able to align it at least not in the long run.
00:55:40
Speaker
The right alignment is, in some sense, about what you should be doing under the circumstances that you are in. When we talk about alignment in a human society, there are objective criteria for what you should be doing if you are part of a human society, right? It's given by a certain set of permissible ethics, and an ethic is a set of agreements that might emerge over your initial preferences. And there are a number of local optima, maybe there's even a global optimum, in the way the species can organize itself at scale.
00:56:09
Speaker
And if you build an AI, the question is: would it be able to, would it be interested in, would it be in the interest of the AI to align itself with this global optimum, or with one of the existing local optima?
00:56:25
Speaker
And what would this look like? It would probably look different from our intuitions about what society should be like, right? We clearly lost the plot in our society. Every generation since the last war has had fewer intuitions about how the society actually functions and what conditions need to be met to make it sustainable.
00:56:44
Speaker
Sustainability is largely a slogan that exists for PR purposes for people in the UN and so on. It's nothing that people are seriously concerned about. We don't necessarily think about how many people can the planet support and how can we reach that number of people. How would we make resources be consumed in closed loops?
00:57:04
Speaker
These are all questions that are not really concerning us. The idea that we can eat up the resources that we depend on, especially the ecological resources, is relatively obvious to the people that run the machines that drive our society.
00:57:20
Speaker
So we are in some sense in a locust mode, in a swarming mode, and the swarming mode is going to be a temporary phenomenon. And it's something that we cannot stop. We are smart enough to understand that we are in the swarming mode, and that's why we know it's not going to end well. We are obviously not smart enough to be able to stop ourselves.
00:57:37
Speaker
So it seems like you have quite a pessimistic view on what's going to happen over the next 100 to 300 years, maybe even sooner. I think I've heard you say somewhere else that we're so intelligent that we burned a hundred million years of trees to get plumbing and Tylenol. Yes. This is in some sense the situation that you're in, right? It's extremely comfortable.
00:58:00
Speaker
Yeah, it's pretty nice. There has never been a time in probably the history of the entire planet where a species was so comfortable as us. But why so pessimistic?
00:58:12
Speaker
I'm not pessimistic, I think. It's just, if I look at the possible trajectories that we are on: is there a trajectory in which we can enter a trajectory of perpetual growth? Very unlikely, right? Is there a trajectory on which we can make the thing that we are in sustainable without going through a population bottleneck first, which will be unpleasant? Unlikely.
00:58:34
Speaker
It's more likely than the idea that we can have eternal growth. But it's also not impossible that we can have something that is close to eternal growth; it's just unlikely. Fair. What I find frustrating is that it's pretty clear that there are no serious technological impediments to having a high-quality, sustainable, steadily growing civilization. The impediments are all of our own making. It's fairly clear what we could do.
00:59:02
Speaker
There are some problems that are quite hard. There are some that are fairly easy. Global warming, we really could solve. If we could coordinate to just do the things that are fairly clearly necessary to do, that's a solvable problem. Running out of easily accessible natural resources,
00:59:18
Speaker
is harder, because you have to get closer and closer to 100% recycling if you want to keep growing the material economy while staying within the same resources. But it's also not so clear how much we have to keep growing the material economy. When I look at my kids,
00:59:36
Speaker
they, compared to when I grew up, barely have any things. They're not that interested in things. There are some that they really like. But even five years ago, when I would buy my younger son a toy, he'd play with the toy for a little while, and then he'd be done with it. And I'd be like, why did I buy that toy? Much more of their life than ours is digital,
01:00:00
Speaker
and it is consuming media. That seems to be the trajectory of humanity, at some level, as a whole. And so it's unclear. It may be that there are ways in which we can continue economic growth, and growth in quality of life, even while the actual amount of material goods that we require is not exponentially increasing in the way that you would think.
01:00:24
Speaker
So I share your pessimism as to what's actually going to happen, but I may be more frustrated, because it wouldn't be that hard to make it work if we could just get our act together. But this getting our act together is the actual difficulty. I know. Right, because getting your act together as a species means that you as an individual need to make decisions that somehow influence the incentives of everybody else.
01:00:50
Speaker
Because everybody else is going to keep doing what they're doing, right? This argument, "if everybody would do as I say, therefore you should do it," is not a valid argument. No, no. Right. This is not how it works. So in some sense, everybody is acting based on their current incentives. And what you would need to do is to change incentives. But the means that you as an individual have to influence everybody else, to bring about this difference in the global incentive structure, are limited.
01:01:17
Speaker
So it comes down, in some sense, to implementing a world government that would subdue everything under some kind of eco-fascist guideline. And the question is, would AGI be doing that? It's very uncomfortable to think about. It's certainly not an argument that would be interesting to entertain for most people, and it's definitely nothing that would get a majority. It's just that it probably would need to be done to make everything sustainable.
01:01:45
Speaker
That's true. I do think that there are within-reach technological advances that are genuinely better along every axis. As has already happened, photovoltaic energy sources are just cheaper now than fossil fuel ones. And once you cross that threshold, there's no point
01:02:05
Speaker
maybe five or 10 years from now, there simply won't be any point at all in building fossil fuel energy sources. So there are thresholds where the right thing actually just becomes the default and the cheapest, and there isn't any tension between doing what we should do as a civilization and doing what people want to do as individuals. Now, it may be that there are always showstoppers that are important, where that hasn't happened or isn't possible.
01:02:33
Speaker
But it's not terribly obvious. There seem to me to be a lot of systems where, with some careful architecting, the incentives can be aligned well enough that people can do more or less what they selfishly want to do and nonetheless contribute to a more sustainable civilization. But it seems to require a level of planning and creative intelligence that we haven't quite gotten together yet.
01:03:01
Speaker
Do you think that there's an optimal number of trees on the planet? Probably, right? An optimal number of trees? Yes.
01:03:09
Speaker
One that would be the most beneficial for the ecosystems on Earth and everything else which happens on Earth, right? If there are too few trees, there is some negentropy which you are leaving on the table. If there are too many, then something will go out of whack and some of the trees will eventually die. And so in some sense, I think there is going to be an optimal number of trees, right? No.
01:03:33
Speaker
Well, I think there are lots of different systems with preferences that will not all agree on what the optimal number of trees is. And I don't think there's a solved problem for how to aggregate those preferences. Of course. No, I don't mean that we know this number, or that it's easily discernible, or that it's even stable; it's going to change over time. I don't think there is an answer to how to aggregate the preferences of a whole bunch of different agents.
01:04:00
Speaker
Oh, it's not about preferences, ultimately. The preferences are instrumental to complexity, basically to how much life can exist on the planet, how much negentropy you can harvest.
01:04:10
Speaker
Ultimately, there is no meaning, right? The universe doesn't care if something lives inside of it or not. But from the perspective of cellular life, what evolution is going for is, in some sense, an attempt to harvest the negentropy that is available. Yeah, but we don't have to listen to evolution. Of course not. No, we don't have to. But if we don't do this, it probably is going to limit our ability to stay around.
01:04:38
Speaker
because we are part of it. We are not the highest possible life form. If apes are some kind of a success model, there are going to be descendants that are not that human-like. If we indeed manage to leave the gravity well and not just send AIs out there, it may turn out that there can be descendants of us that can live in space, and they will probably not look like Captain Kirk and Lieutenant Uhura.
01:05:01
Speaker
In the far future, the commanders of space fleets will look very, very different from us. They will be very different organisms. They will not have human aesthetics. They will not solve human problems. They will live together in different ways, because they interface in different ways and interact with technology in different ways. It will be a different species de facto.
01:05:22
Speaker
And so if we think about humans as a species, we do depend on a certain way in which we interface with the environment in a healthy, sustainable way.
Global Coordination and AGI Challenges
01:05:31
Speaker
And so I suspect that there is a number of us that should be on this planet if we want to be stewards of its ecosystems. It's conceivable that we can travel to the stars, but probably not in large numbers, and we're not the optimal form.
01:05:45
Speaker
If we want to settle Mars, for instance, we should genetically modify our children into something that can actually live on Mars. So it should be something that doesn't necessarily need to have an atmosphere around it. It should be able to hibernate. It should be super hardy. It should be very fast and be able to feed on all the proteins that are available to it outside of Earth, which are mostly humans. So it should look like the alien in the Alien movies.
01:06:11
Speaker
Again, I think that's a choice. I mean, you can argue that that will be an inevitability. No, it's not inevitable. You can always make a suboptimal choice, and then you can try to prevent everybody else from making a better choice than you did. And if you're successful, then you might prevail. But if you are in an environment where you have free competition, then in principle, your choice is just an instrument of evolution.
01:06:36
Speaker
So can we relate this global coordination problem of beneficial futures to AGI? In particular, Joscha, you mentioned an eco-fascist AGI. A scenario in which that might be possible would be something like a singleton.
01:06:53
Speaker
Yes. So I'm taking this question from Anthony. Anthony asks: if you had to choose, knowing nothing else, a future with one, several, or many AGI systems, which would you choose and why? And how would you relate that to the problem of global coordination and control for beneficial futures?
01:07:11
Speaker
There might be a benefit in having multiple AGIs, which could be the possibility of making the individual AI mortal. One of the benefits of mortality is that whatever mistakes a single human makes, it's over after a certain while. So, for instance, the Mongols allowed their Khans to have absolute power and override all the existing institutions. Right. This was possible for Genghis Khan, but they basically tried to balance this by
01:07:38
Speaker
a rule that if this person died, they would stop everything that they were doing and reconsider. And this is what stopped the Mongol invasions. So it was very lucky; of course, they wouldn't have needed to do this. And a similar thing happened with Stalin, for instance. Stalin burned down Russia in a way, a lot of the existing culture, and it stopped after Stalin. It didn't stop immediately and completely, but Stalinism basically stopped with Stalin.
01:08:05
Speaker
And so in some sense, this mortality is, on a control level, adaptive. Senescence is adaptive as well, right? So we don't outcompete our grandkids. But we see this issue with our institutions: they're not mortal. And as a result, they become senescent, because they basically have mission creep. They're no longer on the same incentives. They become more and more postmodern. Once an institution gets too big to fail, the incentives inside the institution change, especially the incentives for leadership.
01:08:32
Speaker
So something that starts out very benevolent, like the FDA, turns into today's FDA. And if the FDA were mortal, right, we would be in a better situation than with the FDA being an AI that is sentient and just acting on its own incentives forever. Right. The FDA is currently acting on its own incentives, which are increasingly poorly aligned with the interests that we have in terms of what we think the FDA should be doing for us when it governs us, or governs part of us.
01:09:00
Speaker
And that is the same issue with other AGIs. In some sense, the FDA is an AGI with humans in the loop. And if we automate all this and turn it completely into a rule base, especially a self-optimizing rule base, then it's going to align itself more and more with its own incentives. So if we have multiple competing agents, we can basically prevent this thing from calcifying, to some degree.
01:09:24
Speaker
And we have the opportunity to build something like a clock into this that lets the thing die at some point and be replaced by something else. And we can decide, once the thing dies by itself, what this replacement should look like: whether it should be the same thing or something slightly different. So this would be a benefit. But outside of this, it's not obvious, right? From the perspective of a single agent, it doesn't make sense that other agents exist.
01:09:50
Speaker
For a single human being, the existence of other human beings is obviously beneficial, because we don't get that old, our brains are very small, and we cannot be everywhere at the same time. So we depend on interaction with others. But if you lose these constraints, if you could turn yourself into some kind of machine that can be everywhere at the same time, that can be immortal, that has no limit on its information processing, it's not obvious why you should not just be Solaris, a planet-wide intelligence
01:10:17
Speaker
that is controlling the destiny of the entire planet, and why you would be better off if you had competition. Ideally, you want to subdue everything once you are a planet-wide mind, or a solar-system-wide mind, or a galactic mind, if that's conceivable.
01:10:32
Speaker
So there are two elements of that. One is what the AI would want, and the other is. Yes, there are two perspectives. One is: you have a mind that is infinitely scalable. This mind that is infinitely scalable would not want to have any competition. At least, that's not obvious to me; it would be surprising to me if it wanted to.
01:10:49
Speaker
If you have non-scalable minds like ours that coexist with it, then from the perspective of these non-scalable minds, it would probably be beneficial to limit the capacity of the other minds that we are bringing into existence, which means we would want to have multiple of them,
01:11:06
Speaker
so they can compete, and then we can turn them off without everything breaking down. It's tricky, because that competition is a double-edged sword. They can compete in the sense of being able to limit each other, but creating a competition between them seems necessarily to incentivize them to enhance themselves.
01:11:27
Speaker
It depends on how they compete. When you have AMD and Intel competing, we consider this to be a good thing, right? When you have Apple and Microsoft competing, we also consider this to be a good thing. If we had only one of them, they would not be incentivized to innovate anymore. And a similar thing could happen. For instance, instead of having just one FDA, imagine we have two FDAs.
01:11:46
Speaker
So the FDA becomes like an app, and you have some central oversight that makes sure that you always get inside of the box what's written outside of the box. But beyond that, each FDA could decide whether it, for instance, allows you to import a medication from Canada without an additional trial. And people would basically subscribe to the FDA that they like better, and we would have constant innovation in regulation.
01:12:12
Speaker
But that's a very managed competition. So it's less clear that the US-Soviet Union competition was something that was good to encourage in that sense. From the perspective of Central Europe, it was great, right? The CIA helped the West German unions to make sure that the West German workers had a lot to lose. And East Germany was best off among all the Eastern Bloc countries exactly because of this competition.
01:12:35
Speaker
It was great, given that we managed not to have a nuclear war or a land war. The concern that I have is: yes, if you have a managed competition, that tends to be good. You can set the rules; there's an ability to forestall some sort of runaway process. But when the competition is at the highest level, for what has the overall highest level of power in the whole arena, then it's a lot less clear to me that the competition is going to be a positive thing.
01:13:03
Speaker
It seems to me that US governance has become dramatically worse since the end of the Cold War. There seems to be an increased sense that it doesn't really matter what kind of foreign policy we are driving, because it's largely an attempt to gain favor with the press so you can win the next election. So it's about selling points. It's not about whether regime change in Libya is actually good. Who cares?
01:13:29
Speaker
I think there's a decoupling between the government's actions and effectiveness; we don't have to actually be effective as a society. Exactly, because we don't have competition anymore. So once you take competition off the table, you lose the incentives to perform well.
01:13:45
Speaker
Those incentives could be redirected in some way, I think, but we haven't figured out a good way to do that. The only way we've figured out to create the right incentives is the competition structure. I agree with that. If we were a little bit more enlightened, we would understand that we still have to be effective.
01:14:01
Speaker
Oh, I think we understand this. The problem is that if you are trying to run on a platform of being effective against a platform that runs on being popular, and the only thing that matters in the incentive structure is popularity, then the popular platform is going to win. Even worse, once you manage to capture the process by which the candidates are being selected, you don't even need to be popular. You just need to be more popular than the other guy.
01:14:25
Speaker
Is it possible for the incentive structures to come strictly internally from the AI system? It's unclear to me whether it actually has to come from competition in the real world, pushing it to optimize harder. Could you not create similar incentive structures internally?
01:14:42
Speaker
The question is, how can you set the top-level incentive correctly? Ultimately, as long as you have some dynamic in the system, I suspect the control is going to follow the incentives. And when we look at the failures of our governments, they're largely not the result of the government being malicious or extremely incompetent, but mostly just of everybody at every level following their true incentives. And I don't know how you can change that.
01:15:10
Speaker
Well, then do you feel like there's a natural or default or unavoidable set of incentives that a high-level AI system is going to have? Only if it sticks around, right? It doesn't have to stick around. There is no meaning in existing. It's only that the systems which persist tend to be set up in such a way that they act as if they want to persist.
01:15:34
Speaker
Well, I guess my question is, at what level are the incentives that a system has? Normally, the incentives that a system has are set by its context and the system that it's embedded in. If we're talking about a world-spanning AI system, it's sort of in control. So it seems like the normal model of the incentives being set by the other players on the stage isn't going to apply. And it's not clear to me that there's an intrinsic set of incentives that are going to apply to a system like that.
01:16:04
Speaker
So when we act, we act based on incentives, right? This means that we have to make certain commitments. There are certain things that should rather be the case instead of not being the case, right? And these commitments, once you have them, define in which direction you have to go, once you make them consistent with each other and translate them into some kind of global aesthetic, some world state, a world dynamic that is compatible with these preferences.
01:16:29
Speaker
And without such a global model, you probably don't know what you're doing, and you're being outmodeled by other systems that understand in which way you are deluded. So you basically need to find a set of identifications that is sustainable, that models the dynamics that you're part of.
01:16:45
Speaker
So for instance, when you decide to model yourself as part of a sustainable civilization, that only makes sense to the degree that this sustainable civilization can be instantiated by your actions and supported by your identifications, right? A sustainable civilization is probably something that needs to be willing and able to plan many generations into the future and act on the models that it gets, unlike ours, where all the projections somehow stop in 2100.
01:17:12
Speaker
And this is also true for an AGI: the AGI would need to want to stick around, in the same way that you and I want to stick around. Maybe not necessarily as individuals, but in the sense that the actions that we contribute to our species remain, and that the civilization is the thing that matters, that we try to support
01:17:33
Speaker
and keep around, right? So to the degree that our actions are able to serve that goal, of our civilization sticking around, or our species sticking around, or life on Earth sticking around, or intelligence and consciousness sticking around, this is the degree to which we can talk about whether we are effective or not.
01:17:51
Speaker
And if an individual doesn't subscribe to this, they are probably not going to be effective with respect to that long-term goal, in the same way as those who do. Now, how could we bring such a sentient civilization about? And sentience here means that you have an agent that understands what it is, what its role in the environment is, what its relationship to the surrounding universe is, and what the surrounding universe is.
01:18:12
Speaker
It's something that we also struggle with when we try to understand our relationship to the underlying physics, as part of us becoming sentient. When we try to reverse engineer our own mind with the tools of AI, ditto. When we try to get society to work, it's also part of that goal. You want to stick around as a thinking being, as an experiencing being,
01:18:33
Speaker
or to be part of that greater thing that is experiencing things and making sense of them. And now the question would be, what role is AI going to play here? One role is obvious: building AI helps us to understand ourselves. But there is this big danger if we build a non-living intelligence, right? If you teach the rocks how to think, are the interests of the rocks aligned with ours?
Non-Duality and Consciousness
01:18:57
Speaker
Wouldn't it be better for AIs to sterilize the planet and just set up solar cells everywhere?
01:19:03
Speaker
I don't know that, right? It's a big danger. I suspect that technological civilization could turn out to be a dead end, and the correct solution would have been to go all biotech. So the way to make humanity sustainable is to breed some kind of queen-bee organism that we all serve. It lays a little egg into all our brains, and it will look completely adorable to us and sacred, and it's going to live for a very long time, and it's going to depend on us
01:19:27
Speaker
to feed it and obey its commands, right? So it keeps us around as a species. Joscha, we're going to transition to being a hive-mind species. Yeah, maybe. How would you view the role of non-duality, both metaphysically and experientially, here in terms of what you've just talked about?
01:19:48
Speaker
And particularly in the role of potential collective coordination if non-duality were collectively realized. So this is like closer to a hive mind. Is that a possibility?
01:20:02
Speaker
I think that non-duality is a state in which you are mutually recognizing that you have the same interests and the same purposes. And it falls apart in the moment when you realize that you do have different interests, right? So if you're competing for the same partner, then your non-duality is going to break down at some level.
01:20:20
Speaker
Unless you basically share your interests again and you decide that this competition is meaningless and there is an optimal solution for the distribution of relationships, right? And you all agree on this in a mutually beneficial way. So it's basically a stance. It's not something that is fundamentally changing the way that you are relating to reality. Once you get to a certain level of awareness, it's a stance that is available to you.
01:20:47
Speaker
So everybody who is able to understand how their self is being constructed is, I think, able to enter this state that people call non-duality. It's just that when people do this for the first time, it typically happens in a very protected environment, in which there are no conflicting interests and somebody makes sure that nothing goes wrong.
01:21:07
Speaker
Okay. So your view is that people who abide in, for example, stable non-dual states still have their own interests, and they're going to act in conflict with others. No, it depends. While you are in that state, you will have that state with respect to someone in a given context. And it might fill the entirety of your experience within this context. So in that moment, you don't experience the separation between self and other in that state.
01:21:36
Speaker
Yeah. And the question is, is this model correct? And it's correct if it's a mutual thing, right? And to the degree that it is mutual, you basically merge into the same control unit and you can experience being part of the same control.
01:21:51
Speaker
But there's a risk of divergence of interests, which you feel breaks the shared non-dual state. It's not a risk; it might happen, right? The way humans are set up, they might have conflicting interests, and these conflicting interests might emerge at any point. So what you find in practice is that people who practice non-duality will still have conflicts with each other that will get rid of their non-dual experience.
01:22:20
Speaker
Okay, interesting. Is there any one of the questions about consciousness you'd like to start with, Anthony? I have some interest in starting with whether the computationalist worldview actually explains the hard problem of consciousness.
01:22:33
Speaker
So I would love to have something different than the computationalist worldview. If I can get anything else to work, I'll be glad. It's not that I take this worldview because I'm so enamored with it. It's just that all the alternatives that I have read about, or that have occurred to me, don't work. So it's basically what's left over.
01:22:54
Speaker
What difficulties do you see within an idealist worldview that is, in part or in whole, structured and implemented through computation?
01:23:07
Speaker
So the idea of physics is that there is a causally closed description of the universe that exists somewhere, right? It's a causally closed lowest layer. And this hypothesis that we can describe the universe like this is very successful. And the issue that it conflicts with is all the magic that we're experiencing.
01:23:29
Speaker
If something emerges in the world that we cannot explain by low-level causal interactions, that's magic. For instance, if you recognize that there's a relationship between the lines of your hand and your destiny that goes beyond your ability to grasp things, that is very hard to explain using physics, and you would be required to assume that magic exists. And idealist explanations allow you to explain magic.
01:23:59
Speaker
And it could be that consciousness is something that cannot be explained by physics, so it might be magic. And I think this is the main appeal of idealism: it seems to open up a window for magic. I think that idealist views mostly come down to the idea that we live in a dream, that we don't live in a physical universe but in a dream universe. And it turns out, I think, that this is correct, right? We obviously live in a dream universe, and the dream is dreamt by a mind on a higher plane of existence,
01:24:27
Speaker
one that is implemented in the skull of a primate, in the brain of some primate that is walking around in a physical universe. This is the best hypothesis that we have. And so we can explain all the magic that we are experiencing by the fact that, indeed, we live in a dream generated in that skull.
01:24:45
Speaker
And now the question is, how does consciousness come about? How is it possible that a physical system can be experiencing things? And the answer is: it can't. The physical system cannot experience anything. Experience is only possible in a dream. It's a virtual property.
01:25:00
Speaker
Our existence as experiencing beings is entirely virtual. It's not physical, which means we are only experiencing things inside of the model. It's part of the model that we experience something. For the neurons, it doesn't feel like anything to do this. For the brain, it doesn't feel like anything. But it would be very useful for the brain to know what it would be like to be a person. So it generates a story about that person, about the feelings of that person, about the relationship that this person has with the environment. And it acts on that model.
01:25:29
Speaker
And the model gets updated as part of that behavior. You say neurons aren't conscious, but the model is a virtual world, and that is what consciousness is. But it seems like there's a gap there between one neuron, two neurons, n neurons, the model, and then how the model actually is grounded in being, as an actual experience rather than as a philosophical zombie model.
01:25:54
Speaker
The gap is quite similar to the gap that you have between transistors and software. There are certain dynamics that happen inside of your computer, and the software is basically a pattern that you project into your computer to make sense of what the transistors are doing. And the transistors are, in this case, also set up to produce exactly that pattern. The reason why the neurons are set up to produce this kind of pattern, so we can consistently project it into them, is that the utility of the pattern is determined by the regulation tasks of the brain.
01:26:22
Speaker
The brain needs to make certain models in order to be effective, and the neurons are set up to produce this model so you can consistently project it into them. And as a result, the model is reverse engineering its relationship to the environment and itself, and updating this. So it gets written into the story what type of agent you are. And the more you understand what type of agent you are, the more fine-grained the model you can make of your own modeling capabilities, the more self-aware you become as a conscious being.
01:26:52
Speaker
But the update only happens inside of the model. It's only the story that gets updated, and you can never leave this model. You're confined, you're locked into the model. You can never break out into physics, because outside, in physics, it doesn't feel like anything. Nothing can feel anything out there, right?
Intelligence, Consciousness, and Moral Implications
01:27:08
Speaker
You cannot exist outside of the dream. You cannot wake up. You can only dream of waking up, and then you will be in a next-level dream. Anthony, is there anything you'd like to ask as we come closer to the end here?
01:27:20
Speaker
Well, I think the question that has been on my mind a lot in terms of AI, that I'd be curious of your views on, is which things are necessarily coupled in cognitive architectures and which ones aren't, in the particular sense of:
01:27:36
Speaker
why is it that we have a close connection between our intelligence and effectiveness as agents, who can understand the world and act on it and have a world model that is effective, and what we would call consciousness, which we also find very valuable, right? Morally valuable. We think people are sort of
01:27:59
Speaker
morally important because they have consciousness. Most of our value derives from qualia in consciousness, whether they're positive or negative. So there's this very tight connection in us between capability and this thing that we value, even in things that aren't that capable. Even a child who's not particularly capable of doing much, we still very much value the positive qualia that they experience.
01:28:23
Speaker
One question is: why should that be? Are those two inexorably connected with each other? Could we imagine a machine system that was just as effective as an agent, intellectually, at making good decisions and getting things done, but doesn't have the sort of qualia that we care about?
01:28:41
Speaker
A similar question: to what degree does having qualia require that we have positive and negative valence to those qualia? In other words, could we have a mind that is conscious in the same manner that ours is? It introspects; it feels like something to be that thing, in the same way that it does for us,
01:29:02
Speaker
But there's no particular preference for positive or negative things. There's no suffering. There's no joy based on whether you're pushing away or pulling toward different qualia. And so, arguably, there may or may not be this sort of moral status to the preferences of that being. It might have preferences, but maybe nobody has to care about them.
01:29:24
Speaker
And I'm curious as to how you fit these things together in your view, because it seems like a lot of things that come as a package in our mind may or may not be inevitably connected to each other. We can imagine different architectures that have different amounts of them and might be designed to include some of them and not the others.
01:29:41
Speaker
First of all, you have to have some dimension of caring. Once we have no dimension of caring left, I think that even our perception doesn't work anymore. My own experience with enlightenment states, when you cycle through them, is that they are degrees of disentangling yourself from your own motivations.
01:29:58
Speaker
And we have many dimensions of motivation, obviously. And when we turn them off one by one, then it turns out that we still perform mostly the same behaviors, but they are instrumental to fewer and fewer motivations.
01:30:13
Speaker
Right? So for instance, jealousy is a motivation that makes sense from an evolutionary perspective. Jealous agents might have more offspring of their own, so jealousy gets bred into a population. But for the individual, it's not beneficial to be jealous, because you're not going to have more successful relationships due to jealousy.
01:30:32
Speaker
You only have bad emotions as a result. So if you can disentangle yourself from your own jealousy, you might end up statistically with slightly fewer offspring of your own, but individually the benefits far surpass that. So if you behave as if you were jealous, it's going to be only in contexts where there is some other reason that underlies this behavior and makes it meaningful by itself.
01:30:56
Speaker
And if you basically get rid of most of your entanglements, you end up with maybe just being motivated by love, this willingness to connect to others and serve the sacred. And if you give up on that one, then the only thing that's left is aesthetics, trying to find structure in the patterns that you are seeing. And when you give up on that, reality falls apart and becomes white noise, and you fall asleep.
01:31:20
Speaker
So there needs to be a commitment to how things ought to be at some level in your mind, I think, to keep it going. You can become unaware of these commitments, but something needs to keep going. And if you actually are aware of the control that you're exerting, and if you learn to modulate it, you can modulate it to the point where basically everything becomes fuzzy and you stop controlling your mental representations.
01:31:44
Speaker
That would be the lowest level: when you stop representing the world because you no longer care about being able to track it, you will stop existing as any kind of sentient agent. So in this sense, you need to have certain motivations in order to make behavior happen. And the question is, what is the minimal motivation that you would need to put into an agent to make it sustainable if it acts out of motivation? So if it's self-organized and autonomous.
01:32:09
Speaker
And I think that Friston's theory is that the minimization of free energy is ultimately sufficient. So basically, if you are able to integrate the expected reward over a long enough time frame, as evolution in some sense is implicitly doing, then the minimization of free energy is going to give rise to all the other priors that we are born with as a shortcut to improve the convergence of the individual or the species to that global behavior. I don't know whether the theory is correct,
01:32:39
Speaker
It seems to be plausible
The Value of Art and Truth in Society
01:32:40
Speaker
to me. I don't see many good alternatives, but I suspect that in the individual, we need these priors to make behavior happen. So even if you want to build an AI that is learning statistics, I would agree that you would need to put in a number of things, for instance in your language model, that go beyond just minimizing the surprisal and keeping track of the structure in the data, in order to make it really, really efficient. It could be that you
01:33:08
Speaker
need to add additional things that it cares about. For instance, making an operational model of reality allows us to do things. GPT-3, for example, would probably get much better at arithmetic much faster if it was also incentivized to produce certain results, got rewards for producing these results, and cared about these rewards.
01:33:29
Speaker
Maybe one more thing: what you find in people is a range of possible states of mind that they can be in, right? You can be a psychopath who doesn't care about the qualia of others. And the type of qualia that we have are the result of picking out things with our conscious attention, so we can form indexed memories of them and have the stream of consciousness.
01:33:48
Speaker
And what we pick out is part of our perception of the mental stage of the world model that we have, right? So it's always features in a much, much larger conceptual and perceptual graph that we highlight in the binding context. And this binding context also contains a relationship to the relevance that they have for us. So they always have some motivational color, otherwise we wouldn't pick them out with our attention. That makes our qualia observer-specific.
01:34:17
Speaker
Yeah, we certainly do value certain qualia for their own sake, I feel. I think humans are very intrinsically motivated for particular qualia to occur. Yes, but I suspect it's not quite for their own sake. It's either because they're directly connected to appetence and aversion, so for instance to certain foods or body schemas and so on, or because they are aesthetically relevant, which means that they help us to find better representations, or we anticipate that they might.
01:34:48
Speaker
There's this quote of yours that I really, really liked. You said something like: art is an algorithm falling in love with the shape of the loss function itself.
01:34:59
Speaker
Yes, I think that an artist in some sense is confused, because the artist thinks that the mental representation itself is intrinsically meaningful. It's not instrumental to what you can do with it, not instrumental to the status that you get, or the food that you can get by selling it, or the impression that you can make on others, or what you learn from applying it to your everyday life; it's intrinsically important.
01:35:23
Speaker
So it's just the observation itself that you need to capture, and the ability to capture it is the important thing. And I suspect that every functional mind needs to have an art department, in the same way as every society needs to have an academia that only cares about truth, regardless of whether it's beneficial for the individual academics.
01:35:41
Speaker
And if you lose this, you are going to lose an important part of your regulation. But if this truth seeker or this observer is the main thing that motivates a society or a company or an individual, then they're going to have a hard time succeeding.
01:35:58
Speaker
For instance, if you are running a company and you are coming from academia and you are a truth seeker, you might be set up for failure, right? Because a CEO is only going to task somebody with finding truth if there is a truth deficit. Truth is not important for its own sake if you're running a company and trying to use your resources in the best possible way. And the same thing is going to apply to an AI that allocates its own resources to solving problems in its environment.
01:36:29
Speaker
Joscha, I feel like I don't know anything anymore. So I'm just gonna end my day just being bewildered each moment. Do you have anything else you want to say just to wrap up? I enjoyed this conversation very much with you guys. If you try to align yourself with something that is smarter than you and understands the conditions of your existence and its own better than you do, it probably means that you have to ask it what to do. And that is
01:36:55
Speaker
the scary thing: we might end up building something that has ultimately a different relationship to reality than us and doesn't really need us. And that is one of the big dangers, I think, in AGI research, but it's the same thing for building a technological society. You always end up building golems. And then the question is, can you stop this golem, or is it going to keep walking based on the program that you gave it until it's too late? I don't know how to solve that.
01:37:23
Speaker
All right. Maybe you gave me an ontological crisis. I hope not. I don't think so. Okay, I've been in an ontological crisis for 35 years or so, so it's just the normal state of affairs. Maybe mine will go on for decades. Okay. Thank you very much for coming on the podcast. Thanks for having me. Great talking to you. Let's hope we meet again, especially once the pandemic is over and we can do it in person. Thanks, Lucas. Thanks, Anthony.
01:38:16
Speaker
Thanks for joining us.