
AIAP: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson

Future of Life Institute Podcast
In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of light. There, a similar machine recreates your exact atomic structure and composition using locally available resources. Have you just traveled, Parfit asks, or have you committed suicide? Would you step into this machine? Is the person who emerges on Mars really you?

Questions like these –– those that explore the nature of personal identity and challenge our commonly held intuitions about it –– are becoming increasingly important in the face of 21st century technology. Emerging technologies empowered by artificial intelligence will increasingly give us the power to change what it means to be human. AI enabled bio-engineering will allow for human-species divergence via upgrades, and as we arrive at AGI and beyond we may see a world where it is possible to merge with AI directly, upload ourselves, copy and duplicate ourselves arbitrarily, or even manipulate and re-program our sense of identity. Are there ways we can inform and shape human understanding of identity to nudge civilization in the right direction?

Topics discussed in this episode include:
-Identity from epistemic, ontological, and phenomenological perspectives
-Identity formation in biological evolution
-Open, closed, and empty individualism
-The moral relevance of views on identity
-Identity in the world today and on the path to superintelligence and beyond

You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/15/identity-and-the-ai-revolution-with-david-pearce-and-andres-gomez-emilsson/

Timestamps:
0:00 - Intro
6:33 - What is identity?
9:52 - Ontological aspects of identity
12:50 - Epistemological and phenomenological aspects of identity
18:21 - Biological evolution of identity
26:23 - Functionality or arbitrariness of identity / whether or not there are right or wrong answers
31:23 - Moral relevance of identity
34:20 - Religion as codifying views on identity
37:50 - Different views on identity
53:16 - The hard problem and the binding problem
56:52 - The problem of causal efficacy, and the palette problem
1:00:12 - Navigating views of identity towards truth
1:08:34 - The relationship between identity and the self model
1:10:43 - The ethical implications of different views on identity
1:21:11 - The consequences of different views on identity on preference weighting
1:26:34 - Identity and AI alignment
1:37:50 - Nationalism and AI alignment
1:42:09 - Cryonics, species divergence, immortality, uploads, and merging
1:50:28 - Future scenarios from Life 3.0
1:58:35 - The role of identity in the AI itself

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Transcript

Introduction to the Podcast

00:00:12
Speaker
Welcome to the AI Alignment Podcast. I'm Lucas Perry.

Identity and AI Alignment with Guests

00:00:16
Speaker
Today, we have an episode with Andres Gomez-Emilsson and David Pearce on identity.
00:00:23
Speaker
This episode is about identity from the ontological, epistemological, and phenomenological perspectives. In less jargony language, we discuss identity from the fundamental perspective of what actually exists, of how identity arises given functional world models and self-models in biological organisms,
00:00:46
Speaker
and of the subjective or qualitative experience of self or identity as a feature of consciousness. Given these angles on identity, we discuss what identity is, the formation of identity in biological life via evolution, why identity is important to explore and its ethical implications and implications for game theory,
00:01:12
Speaker
And we directly discuss its relevance to the AI alignment problem and the project of creating beneficial AI. I think the question of how this is relevant to AI alignment is a very useful one to explore here in the intro. So I'll go ahead and do that for a little bit.

AI Alignment: Technical and Philosophical Issues

00:01:32
Speaker
The AI alignment problem can be construed in the technical limited sense of the question of how to program AI systems to understand and be aligned with human values, preferences, goals, ethics, and objectives.
00:01:50
Speaker
In a limited sense, this is strictly a technical problem that supervenes upon research in machine learning, AI, computer science, psychology, neuroscience, philosophy, etc.
00:02:05
Speaker
I like to approach the problem of aligning AI systems from a broader and more generalist perspective. So in light of this, I do so through a broader view of AI alignment that takes into account the problems of AI governance, philosophy, AI ethics, and that reflects on the context in which the technical side of the problem will be taking place.
00:02:31
Speaker
the motivations of humanity and the human beings engaged in the AI alignment process, the ingredients required for success, and other civilization-level questions on our way, hopefully, to beneficial superintelligence. It is from both of these perspectives that I feel exploring the question of identity is important,

Impact of Researcher Identity on AI Alignment

00:02:51
Speaker
AI researchers have their own identities and those identities factor into their lived experience of the world, their motivations, and their ethics. In fact, the same is of course true of policymakers and anyone in positions of power to influence the alignment process. So being aware of commonly held identity models and views is important for understanding their consequences and functions in the world as more and more powerful AI systems begin to be developed and deployed.
00:03:21
Speaker
From a macroscopic perspective, identity has evolved over the past 4.5 billion years on Earth, and surely will continue to do so in both AI systems themselves and in the humans who hope to wield that power. Some humans may wish to merge with the AI, others may simply be content with passing away or death, and other humans may wish to be upgraded or uploaded in some way.
00:03:47
Speaker
Questions of identity are also crucial to this process of relating to one another and to AI systems in a rapidly evolving world where what it means to be human is quickly changing, where copies of digital minds or AIs can be made trivially, and the boundary between what we conventionally call the self and the world begins to dissolve and break down in new ways.
00:04:11
Speaker
challenging our commonly held intuitions and demanding new understandings of ourselves and of identity in particular. I also want to highlight an important thought from this podcast that any actions we wish to take with regards to improving or changing understandings of lived experience, of identity, must be sociologically relevant, or such interventions simply risk being irrelevant.
00:04:37
Speaker
This means understanding what is reasonable for human beings to be able to update their minds with and accept over certain periods of time, and also the game theoretic implications of certain views of identity and their functions in society and civilization.
00:04:55
Speaker
This conversation is thus an attempt to broaden the discussion on these issues outside of what is normally discussed, and to flag this area as something worthy of further consideration.

Guests' Backgrounds

00:05:05
Speaker
For those not familiar with David Pearce or Andres Gomez-Emilsson, David is a co-founder of the World Transhumanist Association, rebranded Humanity Plus,
00:05:15
Speaker
and is a prominent figure within the transhumanism movement in general. You might know him from his work on The Hedonistic Imperative, a book which explores our moral obligation to work towards the abolition of suffering in all sentient life through technological intervention. Andres is a consciousness researcher at the Qualia Research Institute and is also the co-founder and president of the Stanford Transhumanist Association. He has a master's in computational psychology from Stanford.
00:05:45
Speaker
The Future of Life Institute is a nonprofit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org slash donate. If you'd like to be a regular supporter, please consider a monthly subscription donation to make sure that we can continue our efforts into the future. These contributions make it possible for us to bring you conversations like these and to develop the podcast further.
00:06:15
Speaker
You can also follow us on your preferred listening platform by searching for us directly or following the links on the page for this podcast found in the description.

Main Conversation: Self-Identity and Technology

00:06:23
Speaker
And with that, here's my conversation with Andres Gomez-Emilsson and David Pearce.
00:06:33
Speaker
So I just want to start off with some quotes here that I think would be useful. So the last podcast that we had was with Yuval Noah Harari and Max Tegmark. And one of the points that Yuval really emphasized was the importance of self-understanding, questions like who am I, what am I in the age of technology. Yuval said, quote, get to know yourself better. It's maybe the most important thing in life. We haven't really progressed much in the last thousands of years. And the reason is that, yes, we keep getting this advice, but we don't really want to do it.
00:07:01
Speaker
He goes on to say that, quote, especially as technology will give us all at least some of us more and more power, the temptations of naive utopias are going to be more and more irresistible. And I think the really most powerful check on these naive utopias is really getting to know yourself better. So in search of getting to know ourselves better, I want to explore this question of identity with both of

Identity: Logical and Personal Aspects

00:07:24
Speaker
you. So to start off, what is identity?
00:07:28
Speaker
One problem is that we have more than one conception of identity. There is the strict logical sense, what philosophers call the indiscernibility of identicals. If A equals B, then anything true of A is true of B. In one sense, that's trivially true. But when it comes to something like personal identity,
00:07:49
Speaker
it just doesn't hold water at all. One is a different person from one's namesake who went to bed last night, and it's very easy to shift between these two different senses of identity. Or one might speak of the United States: in what sense is the United States the same nation in 2020 as it was in 1975? It's interest-relative.
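As a side note for readers, the strict logical sense David mentions, the indiscernibility of identicals (Leibniz's law), is standardly written in second-order notation as:

\forall a \,\forall b \,\bigl( a = b \;\rightarrow\; \forall F \,( F(a) \leftrightarrow F(b) ) \bigr)

That is, if a and b are identical, then every property F true of a is true of b. This formalization is not quoted in the episode; it is the textbook statement of the principle being described.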
00:08:14
Speaker
Yeah, and to go a little bit deeper on that, I would make the distinction as David made between ontological identity, like what fundamentally is actually going on in the physical world, in instantiated reality. But then there's conventional identity, definitely the idea of continuing to exist from one moment to another as a human and also countries and so on. And then there's also phenomenological identity.
00:08:37
Speaker
which is kind of our intuitive common sense view of what we are and basically what are the conditions that will allow us to continue to exist. We can go into more detail, but the phenomenological notion of identity is an incredible can of worms because there are so many different ways of experiencing identity and all of them have their own interesting idiosyncrasies.
00:09:01
Speaker
Most people tend to confuse the two. They tend to confuse ontological and phenomenological identity. And just as a simple example that I'm sure we will revisit in the future, when a person has, let's say, an ego dissolution or a mystical experience and they feel that they merged with the rest of the cosmos and they come out and say, oh, we're all one consciousness.
00:09:22
Speaker
that tends to be interpreted as some kind of grasp of an ontological reality. Whereas we could argue in a sense that that was just a shift in phenomenological identity, that your sense of self got transformed, not necessarily you actually directly merging with the cosmos in a literal sense. Although, of course, it might be very indicative of how conventional our sense of identity is if it can be modified so drastically in other states of consciousness.
00:09:52
Speaker
Right. And so let's just start with the ontological sense. How does one understand or think about identity from the ontological side?

Perspectives on Time and Identity

00:10:00
Speaker
In order to reason about this, you need a shared frame of reference for what actually exists and the nature of a number of things, including the nature of time and space and memory, because in the common sense view of time called presentism, where basically there's just the
00:10:15
Speaker
present moment, the past is a convenient construction and the future is a fiction, useful in a practical sense, but they don't literally exist. In that sense, this notion that A equals B in the sense of like, hey, you could modify what happens to A and that will automatically also modify what happens to B kind of makes sense. And you can perhaps think of identity as moving over time along with everything else.
00:10:40
Speaker
On the other hand, if you have an eternalist point of view, where basically you interpret the whole of space-time as just basically there, in their own coordinates in the multiverse, that kind of provides a different notion of ontological identity because each, in a sense, moment of experience is its own separate piece of reality.
00:11:01
Speaker
In addition, you also need to consider the question of connectivity, like in what way different parts of reality are connected to each other. And in a conventional sense, as you go from one second to the next, you continue to be connected to yourself in an unbroken stream of consciousness. And this has actually led some philosophers to hypothesize that the proper unit of identity is from the moment in which you wake up to the moment in which you go to sleep, because that's an unbroken chain of stream of consciousness.
00:11:31
Speaker
But from a scientific and philosophically rigorous point of view, it's actually difficult to make the case that our stream of consciousness is truly unbroken. And definitely if you have a eternalist point of view on experience and the nature of time, what you will instead see is from the moment you wake up to the moment you go to sleep, there's actually been an extraordinarily large amount of snapshots of discrete moments of experience.
00:12:00
Speaker
And in that sense, each of those individual moments of experience would be its own ontologically separate individual. Now, one of the things that becomes kind of complicated with an eternalist account of time and identity is that you cannot actually change it.
00:12:18
Speaker
There's nothing you can actually do to A. So that reasoning of, if you do anything to A, and A equals B, then the same will happen to B, doesn't even actually apply here, because everything is already there. You cannot modify A any more than you can modify the number five. Yes, it's a rather depressing perspective in many ways, the eternalist view.
00:12:39
Speaker
If one internalizes it too much, it can lead to a sense of fatalism and despair a lot of the time. It's probably actually best to think of the future as open. Okay. So this helps to clarify some of the ontological part of identity. Now you mentioned this phenomenological aspect, and I want to say also the epistemological aspect of identity. Could you unpack those two and maybe clarify this distinction for me if you wouldn't parse it this way? But I guess I would say that.
00:13:08
Speaker
The epistemological one is the models that human beings have about the world and about ourselves. It includes how the world is populated with a lot of different objects that have identity, like humans and planets and galaxies. And then we have our self model, which is the model of our body and our space and social groups and who we think we are.
00:13:29
Speaker
then there's the phenomenological identity, which is that subjective qualitative experience of self or the ego in relation to experience, or where there's an identification with attention and experience. So could you unpack these two latter senses?
00:13:44
Speaker
Yeah, for sure. So I mean, in a sense, you could have like an implicit self model that doesn't actually become part of your consciousness, or it's not necessarily something that you're explicitly rendering. You know, this goes on all the time. I mean, you've definitely, I'm sure had the experience of riding a bicycle and after a little while, you can kind of like almost do it without thinking. Of course, you're engaging with the process in a very embodied fashion, but you're not cognizing very much about it. And
00:14:10
Speaker
Definitely you're not representing, let's say, your body state, or representing exactly what is going on, in a cognitive way. It's all kind of implicit in the way in which you feel. And I would say that paints a little bit of a distinction between a self model, which is ultimately functional. It has to do with, are you processing the information that you require to solve the task that involves modeling what you are in your environment, and distinguishing it from the felt sense.
00:14:36
Speaker
Are you a person? Where are you? How are you located? And so on. The first one is the one that most of robotics and machine learning that have an embodied component are really trying to get at. You just need the appropriate information processing in order to solve the task; they're not very concerned about whether it feels like anything, or whether it feels like a particular entity or a self to be that particular algorithm.
00:15:01
Speaker
Whereas if we're talking about the phenomenological sense of identity, then that's very explicitly about what it feels like. And there are all kinds of ways in which a healthy, so to speak, sense of identity can break down in all sorts of interesting ways. There are many failure modes, we can put it that way.
00:15:22
Speaker
One might argue, I mean, I suspect, for example, David Pierce might say this, which is that our self models, our implicit sense of self, because of the way in which it was brought up through Darwinian selection pressures, it's already extremely ill in some sense, at least from the point of view of it actually telling us something true and actually making us do something ethical. It has all sorts of problems, but it is definitely functional. I mean, you can anticipate being a person tomorrow and plan accordingly.
00:15:51
Speaker
leave messages to yourself by encoding them in memory. And yeah, this convenient sense of conventional identity, it's very natural for most people's experiences.
00:16:01
Speaker
I can briefly mention a couple ways in which it can break down. One of them is depersonalization. It's a particular psychological disorder where one stops feeling like a person and it might have something to do with basically not being able to synchronize with your bodily feelings in such a way that you don't actually feel embodied. You may feel kind of a disincarnate entity or just a witness experiencing a human experience, but not actually being that person.
00:16:29
Speaker
Then you also have things such as an empathogen-induced sense of shared identity with others. If you take MDMA, you may feel that all of humanity is deeply connected, or we're all part of the same essence of humanity, in a very positive sense of identity, but perhaps not in an evolutionarily adaptive sense. Finally, there are people with multiple personality disorder,
00:16:54
Speaker
where in a sense they have like a very unstable sense of who they are and sometimes it can be so extreme that there's kind of epistemological blockages from one sense of self to another.
00:17:05
Speaker
as neuroscientist Donald Hoffman likes to say, fitness trumps truth. Each of us runs a world simulation, but it's not an impartial, accurate, faithful world simulation. I am at the center of a world simulation, the David Pearce-centric world simulation. I'm the hub of reality that follows me around. And of course there are billions upon billions of other analogous examples too. Now this is genetically
00:17:34
Speaker
extremely fitness enhancing, but it's systematically misleading. In that sense, I think Darwinian life is malware.
00:17:45
Speaker
So wrapping up here on these different aspects of identity, I just want to make sure that I have all of them here. Would you say that those are all of the aspects? One could add this distinction between type and token identity, and that it would be possible, let's say, to create from scratch a molecular duplicate of you. Is that person you? It's type identical, but it's not token identical.
00:18:10
Speaker
Oh, right. So I think I've heard this used in some other places as numerical distinction versus qualitative distinction. Is that right? Yeah, that's the same distinction. Yeah. So unpacking here more about what identity is, let's talk about it purely as something that the world has produced. So what can we say about the evolution of identity and biological life?

Evolution of Self-Models

00:18:34
Speaker
What is the efficacy of certain identity models in Darwinian evolution?
00:18:39
Speaker
I would say that self-models most likely have existed, potentially since pretty early on in the evolutionary timeline. You may argue that in some sense even a bacterium has some kind of self-model. But again, a self-model is really just functional. The bacterium does need to know, at least implicitly, its size in order to be able to navigate its environment, follow chemical gradients and so on, not step on itself.
00:19:05
Speaker
But that's not the same again as a phenomenal sense of identity. And that one I would strongly suspect came much later, perhaps with the advent of the first primitive nervous systems. That would be only if actually running that phenomenal model is giving you some kind of fitness advantage.
00:19:24
Speaker
One of the things that you will encounter with David and I is that we think that phenomenally bound experiences have a lot of computational properties. And in a sense, the reason why we're conscious has to do with the fact that unified moments of experience are doing computationally useful legwork.
00:19:42
Speaker
It's when you merge implicit self models, in just the functional sense, together with the computational benefits of actually running a conscious system, that perhaps for the first time in history, you will actually have a phenomenal self model. Now, I would suspect probably in the Cambrian explosion, this was already going on to some extent. All of these interesting evolutionary oddities that happened in the
00:20:10
Speaker
Cambrian explosion probably had some kind of rudimentary sense of self. I would be skeptical that it's going on, for example, in plants. One of the key reasons is that running a real-time world simulation in a conscious framework is very calorically expensive.
00:20:30
Speaker
Yes. It's a scandal. What, evolutionarily speaking, is consciousness for? What can a p-zombie not do? The perspective that Andres and I are articulating is that essentially what makes biological minds special is phenomenal binding,
00:20:51
Speaker
the capacity to run real time phenomenally bound world simulations, i.e. not just to be 86 billion discrete membrane bound pixels of experience, but somehow an entire cross-modally matched real time world simulation made up of individual objects somehow bound into a unitary self, the unity of perception.
00:21:17
Speaker
is extraordinarily computationally powerful and adaptive. Simply saying that it's extremely fitness enhancing doesn't explain it because something like telepathy would be extremely fitness enhancing too, but it's physically impossible. But yes, how a biological mind actually managed to run phenomenally bound world simulations is unknown. It would seem to be classically impossible.
00:21:43
Speaker
One way to appreciate just how advantageous non-psychotic phenomenal binding is, is to look at syndromes where it even partially breaks down. Simultanagnosia, where one can only see one object at a time, or motion blindness, where you can't actually see moving objects,
00:22:04
Speaker
or florid schizophrenia. Just imagine those syndromes combined. Why aren't we just micro-experiential zombies?
00:22:15
Speaker
Do we have any interesting points here to look at in the evolutionary tree for where identity is substantially different from ape consciousness? Like if we look back at human evolution, it seems that it's given the apes and particularly our species a pretty strong sense of self. And that gives rise to much of our ape socialization and politics.
00:22:37
Speaker
So I'm wondering if there's anything else like maybe insects or other creatures that have gone a different direction. And also if you guys might be able to speak a little bit on the formation of ape identity.
00:22:48
Speaker
Definitely, I think the perspective of the selfish gene is pretty illuminating here. Nominally, our sense of identity is kind of the sense of one person, one mind. In practice, however, if you make sense of identity as well in terms of that which you want to defend or that which you consider worth preserving,
00:23:09
Speaker
You will see that people's sense of identity also extends to their family members. And of course, you know, with a neocortex and ability to create more complex associations, then you have crazy things like sense of identity being based on race or country of origin or other, you know, constructs like that, that are building on top of imports from the sense of, hey, the people who are familiar to you feel more like you.
00:23:36
Speaker
It's genetically adaptive to have that, and from the point of view of the selfish gene, genes that can recognize themselves in others, and favor the existence of others that also share the same genes, are more likely to reproduce. That's called inclusive fitness in biology: you're not just trying to survive yourself or make copies of yourself, you're also trying to help those that are very similar to you do the same.
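As an aside, the standard way biologists quantify this inclusive-fitness logic, not spelled out in the conversation, is Hamilton's rule: an altruistic trait is favored when

r B > C

where r is the genetic relatedness of the recipient to the actor, B is the reproductive benefit to the recipient, and C is the reproductive cost to the actor. Full siblings have r of about 0.5, and in haplodiploid insects such as ants, full sisters share roughly 0.75 of their genes, which is one common gloss on why colony members can behave almost as though they were each other.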
00:24:00
Speaker
Almost certainly it's a huge aspect of how we perceive the world. Just anecdotally, from a number of trip reports, there's this interesting thread of how some chemicals like MDMA and 2C-B, for those who don't know, it's kind of an empathogenic psychedelic, give people the strange sense that people they've never met before in their life are as close to them as a cousin or maybe a half brother or half sister.
00:24:28
Speaker
And it's a very comfortable and quite beautiful feeling. And you could imagine that nature was very selective about who you give that feeling to, in order to maximize inclusive fitness.
00:24:42
Speaker
All of this builds up to the overall prediction I would make that the sense of identity of ants and other extremely social insects might be very different. The reason being that they are genetically incentivized to basically treat each other as themselves.
00:24:59
Speaker
Most ants themselves don't produce any offspring. They are genetically sisters. And all of their genetic incentives are into basically helping the queen pass on the genes into other colonies. And in that sense, I would imagine an ant probably sees other ants of the same colony pretty much as themselves.
00:25:21
Speaker
Yes, there was an extraordinary finding a few years ago that members of one species of social ant actually pass the mirror test, which has traditionally been regarded as the gold standard of a concept of self. And it was shocking enough to many people when a small fish was shown to be capable of mirror self-recognition. If ants can pass the mirror test, it suggests some form of metacognitive self-recognition is extraordinarily ancient.
00:25:51
Speaker
So what is it that distinguishes humans from non-human animals? I suspect it's related to something which is still physically unexplained. How is it that a massively parallel brain gives rise to serial, logical, linguistic thought? It's unexplained. But I would say that is what distinguishes us most of all, not the possession of a self-concept.
00:26:22
Speaker
So is there such a thing as a right answer to questions of identity? Or is it fundamentally just something that's functional? Or is it ultimately arbitrary? I think there is a right answer.
00:26:35
Speaker
from a functional perspective, there's just so many different ways of thinking about it. And I mean, as I was describing, perhaps with ants and humans, their sense of identity is probably pretty different. But you know, they both are useful for passing on the genes. So in that sense, they're all equally valid.
00:26:53
Speaker
Imagine in the future some kind of a swarm mind that also has its own distinct functionally adaptive sense of identity. And I mean, in that sense, yeah, there's no ground truth to what it should be from the point of view of functionality. It really just depends on what is the replication unit. Ontologically, though, I think there is a case to be made that either open or empty individualism is true. Maybe it would be good to define those terms first.
00:27:20
Speaker
Before we do that, your answer then is just that yes, you suspect that also ontologically, in terms of fundamental physics, there are answers to questions of identity. Like, identity itself isn't a confused category. Yeah, I don't think it's a leaky reification, as they say.
00:27:35
Speaker
And then from the phenomenological sense, is the self an illusion or not? Is the self a valid category? Is your view also on identity that there is a right answer there? From the phenomenological point of view, no, I would consider it a parameter, mostly, just something you can vary, and there are just trade-offs between different experiences of identity. Okay. How about you, David?
00:27:58
Speaker
I think ultimately, yes, there are right answers, but in practice, life would be unlivable if we didn't maintain these fictions. These fictions in one sense are deeply immoral. Let's say one punishes someone for a deed that their namesake performed 10, 15, 20 years ago. I mean, America executed a murderer for a crime that was done 20 years ago. Now, quite aside from issues of freedom and responsibility and so on.
00:28:28
Speaker
This is just scapegoating. So David, do you feel that in the ontological sense there are right or wrong answers to questions of identity, and in the phenomenological sense and in the functional sense? Yes. Okay, so then I guess you disagree with Andres about the phenomenological sense. I'm not sure. I agree about most things.
00:28:55
Speaker
Are we disagreeing, Andres? I'm not sure. I mean, what I said about the phenomenal aspect of identity was that I think of it as a parameter of a world simulation. And in that sense, there's kind of no true phenomenological sense of identity. They're all useful for different things.
00:29:13
Speaker
The reason I would say this too is that, okay, even if you assume that something like each snapshot of experience is its own separate identity, I'm not even sure you can accurately represent that in a moment of experience itself. This is itself a huge can of worms. It opens up the problem of reference. Can we even actually refer to something from our own vantage point?
00:29:36
Speaker
My intuition here is that whatever sense of identity you have at a phenomenal level, I think of it as a parameter of the world simulation and I don't think it can be an accurate representation of something true. It's just going to be a feeling, so to speak.
00:29:51
Speaker
I could endorse that. We fundamentally misperceive each other, and the Hogan sisters, craniopagus twins, know something that the rest of us don't: the Hogan sisters share a thalamic bridge which enables them, partially and to a limited extent, to mind-meld.
00:30:11
Speaker
The rest of us see each other as, essentially, objects that have feelings. When one thinks of one's ignorance, one might be lamenting one's failures as a mathematician or a physicist or anything else, but an absolutely fundamental form of ignorance that we take for granted
00:30:34
Speaker
is that other people, other non-human animals, are essentially objects with feelings, whereas we individually have first-person experience. Whether it's going to be possible to overcome this in future, I think it's going to be
00:30:49
Speaker
immensely technically challenging, building something like reversible thalamic bridges, and a lot depends on one's theory of phenomenal binding. But let's pretend a future civilization in which partial mind-melding is routine. I think it will lead to a revolution, not just in morality, but in decision-theoretic rationality too.
00:31:13
Speaker
And that, yeah, one will be taking into account, let's say, the desires, the interests and the preferences of what will seem like different aspects of oneself.
00:31:23
Speaker
So why does identity matter morally? I think you guys have made a good case about how it's important functionally historically in terms of biological evolution. And then in terms of like society and culture, identity is clearly extremely important for human social relations, for navigating social hierarchies and understanding one's position of having a concept of self and identity over time. But why does it matter morally here?
00:31:52
Speaker
One interesting story is that you can think of a lot of social movements, and in a sense a lot of ideologies that have existed in human history, as attempts to hack people's sense of identity or make use of it for the purpose of the reproduction of the ideology or the social movement itself.
00:32:11
Speaker
to a large extent, a lot of the things that you see in therapy have a lot to do with expanding your sense of identity to include your future self as well, which is something that a lot of people struggle with when it comes to impulsive decisions or irrationality. There's this interesting point of view of how a two year old or a three year old hasn't yet internalized the fact that they will wake up tomorrow and that the consequences of what they did today will linger on in the following days.
00:32:40
Speaker
It's kind of like a revelation when they finally internalize the fact that, oh my gosh, I will continue to exist for the rest of my life. There's gonna be a point where I'm gonna be 40 years old and also there's gonna be a time where I'm 80 years old, and all of those are real and I should plan ahead for it.
00:32:58
Speaker
Ultimately, I do think that advocating for a very inclusive sense of identity, where the locus of identity is consciousness itself, might have tremendous moral and ethical implications. We want a sense of us that embraces all sentient beings, I think, which is extremely ambitious, but that I think should be the long term goal.
00:33:21
Speaker
Right. So there's a spectrum here and where you fall on the spectrum will lead to different functions and behaviors, solipsism or like extreme egoism on one end, pure selflessness or ego death or pure altruism on the other end. And perhaps there are other degrees and axes on which you can move. But the point is it leads to radically different identifications and relations with other sentient beings and with other instantiations of consciousness.
00:33:51
Speaker
Would our conception of death be different if it was convention to give someone a different name when they woke up each morning? Because after all, it is akin to reincarnation. Why is it that when one is drifting asleep each night, one isn't afraid of death? It's because in some sense, one believes one's going to be reincarnated in the morning.
00:34:16
Speaker
I like that. Okay, so I want to return to this question after we hit on the different views of identity to really unpack the different ethical implications more, but I wanted to sneak that in here for a bit of context. So pivoting back to this sort of historical and contextual analysis of identity, we talked about biological evolution as like instantiating these things.

Religious Influences on Identity

00:34:40
Speaker
How do you guys view religion as codifying an egoist view on identity? Religion codifies the idea of the eternal soul and the soul kind of, I think, maps very strongly onto the phenomenological self. It makes that the thing that is immutable or undying or which transcends this realm.
00:35:04
Speaker
I'm talking obviously specifically here about Abrahamic religions, but then also in Buddhism, there is the self as an illusion, or what David referred to as empty individualism, which we'll get into, where it says that that identification with a phenomenological self is fundamentally a misapprehension of reality and like a confusion, and that that leads to attachment and suffering and fear of death. So do you guys have comments here about religion as codifying views on identity?
00:35:30
Speaker
I mean, I think it's definitely really interesting that there are different views of identity in religion. How I grew up, I always assumed religion was about souls and getting into heaven. But yeah, I mean, it turns out I just didn't know about Eastern religions and cults that also happen to sometimes have like different views of personal identity. I mean, that was definitely a revelation to me.
00:35:54
Speaker
I would actually say that I started questioning the common sense view of personal identity before I learned about Eastern religions. And I was really pretty surprised and very happy when I found out that, let's say, Hinduism actually has a kind of universal consciousness view of identity, a socially sanctioned way of looking at the world that has a very expansive sense of identity.
00:36:16
Speaker
Buddhism is also pretty interesting because as far as I understand it, they consider actually pretty much any view of identity to be a cause for suffering fundamentally. It has to do with a sense of craving either for existence or craving for non-existence, which they also consider a problem.
00:36:34
Speaker
A Buddhist would generally say that even something like universal consciousness, you know, believing that we're all fundamentally Krishna incarnating in many different ways itself will also be a source of suffering to some extent because you may kind of crave further existence which may not be very good from their point of view. It makes me optimistic that there's like other types of religions with other views of identity.
00:36:58
Speaker
Yes, so one of my earliest memories is that my mother belonged to the Order of the Cross, who worshipped Father-Mother, a very obscure, small, vaguely Christian denomination, non-sexist. And I recall being told, age five, that I could be born again. It might be as a little boy, but it might be as a little girl, because gender didn't matter.
00:37:19
Speaker
And I was absolutely appalled at this at the age of five or whatnot, because, yeah, in some sense, girls were, though I couldn't actually express this, defective. And religious conceptions of identity vary immensely. I mean, one thinks of something like original sin in Christianity. So yeah, I mean, I could make a lot of superficial comments about religion, but one would need to actually explore in detail the different religious traditions and the differing conceptions of identity.
00:37:50
Speaker
What are the different views on identity? If you can, why don't you hit on the ontological sense and the phenomenological sense? Or if we just want to stick to the phenomenological sense, then we can. I mean, are you talking about open, empty, closed? Yeah. So that would be the phenomenological sense. Yeah. No, actually I would claim those are attempts at getting at the ontological sense. Okay.
00:38:16
Speaker
If you do truly have a soul ontology, something that implicitly a very large percentage of the human population have, that would be, yeah, in this view called a closed individualist perspective. Common sense, you start existing when you're born, you stop existing when you die. You're just a stream of consciousness. Even perhaps more strongly, you're a soul that has experiences, but experiences maybe are not fundamental to what you are.
00:38:42
Speaker
Then there is the more Buddhist and definitely more generally scientifically minded view, which is empty individualism, which is that you only exist as a moment of experience. And from one moment to the next, you're a completely different entity. And then finally, there is open individualism, which is like Hinduism, claiming that we are all one consciousness fundamentally.
00:39:06
Speaker
There is an ontological way of thinking of these notions of identity. It's possible that a lot of people think of them just phenomenologically, or they may just think there's no further fact beyond the phenomenal, in which case something like closed individualism for most people most of the time is self-evidently true, because you are kind of moving in time and you can notice that you continue to be yourself from one moment to the next.
00:39:32
Speaker
then of course, what would it feel like if you weren't the same person from one moment to the next? Well, each of those moments might completely be under the illusion that it is a continuous self.
00:39:44
Speaker
For most things in philosophy and science, if you want to use something as evidence, it has to agree with one theory and disagree with another one. And the sense of continuity from one second to the next seems to be compatible with all three views. So it's not itself much evidence either way. States of depersonalization are probably much more akin to empty individualism from a phenomenological point of view.
00:40:08
Speaker
And then you have ego death, and definitely some experiences of the psychedelic variety, especially high-dose psychedelics, tend to produce very strong feelings of open individualism. That often comes in the form of noticing that your conventional sense of self is very buggy and doesn't seem to track anything real, but then realizing that you can identify with awareness itself.
00:40:33
Speaker
And if you do that, then in some sense, automatically, you realize that you are every other experience out there since the fundamental ingredient of a witness or awareness is shared with every conscious experience.
00:40:46
Speaker
These views on identity are confusing to me because agents haven't existed for most of the universe. And I don't know why we need to privilege agents in our ideas of identity. They seem to me just emergent patterns of a big ancient old physical universe process that's unfolding. It's confusing to me that just because they're complex self and world modeling patterns in the world, that we need to privilege them with some kind of shared identity across themselves or across the world.
00:41:16
Speaker
Do you see what I mean here? Oh yeah, yeah, definitely. I'm not agent-centric. I mean, in a sense, all of these other exotic feelings of identity often also come with states of low agency; you actually don't feel that you have much of a choice in what you could do.
00:41:34
Speaker
I mean, definitely depersonalization, for example, often comes with a sense of inability to make choices that actually it's not you who's making the choice, they're just unfolding and happening. Of course, in some meditative traditions, that's considered a path to awakening, but in practice for a lot of people, that's a very unpleasant type of experience.
00:41:53
Speaker
It sounds like I might be privileging agents. I would say that's not the case. If you kind of zoom out and you see the bigger worldview, it includes basically this concept David calls non-materialist physicalist idealism, where the laws of physics describe the behavior of the universe, but that which is behaving according to the laws of physics is qualia, is consciousness itself.
00:42:18
Speaker
I take very seriously the idea that a given molecule or a particular atom contains moments of experience. It's just perhaps very fleeting and very dim, or just not very relevant in many ways, but I do think it's there. And the sense of identity, maybe not in a phenomenal sense, I don't think an atom actually feels like an agent over time, but the continuity of its experience and the boundaries of its experience would have strong bearings on the ontological sense of identity.
00:42:47
Speaker
There's a huge, obviously a huge jump between talking about the identity of atoms and then talking about the identity of a moment of experience, which presumably is an emergent effect of 100 billion neurons themselves made of so many different atoms. Crazy as it may be, it is both David Pearce's view and my view that actually each moment of experience does stand as an ontological unit. It's just an ontological unit of a certain kind
00:43:15
Speaker
that usually we don't see in physics, but it is both physical and ontologically closed. Maybe you could unpack this. You know mereological nihilism? Maybe I privilege this view where I just am trying to be as simple as possible and not build up too many concepts on top of each other.
00:43:34
Speaker
Mereological nihilism basically says that there are no entities that have parts. Everything is part-less. All that exists in reality is individual monads, so to speak, things that are fundamentally self-existing. So if you have, let's say, monad A and monad B just put together side by side, that doesn't entail that now there is a monad AB that kind of mixes the two.
00:44:00
Speaker
Or if you put a bunch of fundamental quarks together and it makes something called an atom, you would just say that it's quarks arranged atom-wise. There's the structure and the information there, but it's just made of the monads. Right. And the atom is a wonderful case, basically the same as a molecule, where I would say mereological nihilism, with fundamental particles as the only truly existing beings, does seem to be false when you look at how, for example, molecules behave.
00:44:30
Speaker
Take the building-block account of how chemical bonds happen, which is kind of with these Lewis diagrams: how you can have a single bond or a double bond, and you have the octet rule, and you're trying to build these chains of atoms strung together, and all that matters for those diagrams is what each atom is locally connected to.
00:44:50
Speaker
However, if you just use these in order to predict what molecules are possible and how they behave and their properties, you will see that there's a lot of artifacts that are empirically disproven. And over the years, chemistry has become more and more sophisticated.
00:45:06
Speaker
where eventually it's come to the realization that you need to take into account the entire molecule at once in order to understand what its, quote unquote, dynamically stable configuration is, which involves all of the electrons and all of the nuclei simultaneously interlocking into a particular pattern that self-replicates. And it has new properties over and above the parts.
00:45:31
Speaker
Exactly. That doesn't make any sense to me or my intuition. So maybe my intuitions are just really wrong. Where does the new property or causality come from, given that it essentially has causal efficacy over and above the parts?
00:45:44
Speaker
Yeah, it's tremendously confusing. I mean, I'm currently writing an article about basically how this sense of topological segmentation can in a sense account both for this effect of what we might call weak downward causation, which is like you get a molecule and now the molecule will have effects in the world that you need to take into account all of the electrons and all of the nuclei simultaneously as a unit in order to actually know what the effect is going to be in the world.
00:46:14
Speaker
You cannot just take each of the components separately. That's something that we could call weak downward causation. It's not that fundamentally you're introducing a new law of physics. Everything is still predicted by the Schrodinger equation; it's still governing the behavior of the entire molecule. It's just that the appropriate unit of analysis is not the electron, but it would be the entire molecule.
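To make this concrete with a standard textbook expression, not written out in the episode: the non-relativistic molecular Hamiltonian that enters the Schrodinger equation couples every electron to every other electron and to every nucleus at once,

\hat{H} = -\sum_i \frac{\hbar^2}{2 m_e}\nabla_i^2 - \sum_A \frac{\hbar^2}{2 M_A}\nabla_A^2 + \sum_{i<j}\frac{e^2}{4\pi\varepsilon_0 r_{ij}} - \sum_{i,A}\frac{Z_A e^2}{4\pi\varepsilon_0 r_{iA}} + \sum_{A<B}\frac{Z_A Z_B e^2}{4\pi\varepsilon_0 R_{AB}}

with i, j ranging over electrons and A, B over nuclei. The cross terms are why the dynamically stable configuration is a property of the whole molecule rather than of any atom taken separately, which is the sense in which the molecule, not the electron, is the appropriate unit of analysis.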
00:46:37
Speaker
Now, if you pair this together with a sense of identity that comes from topology, then I think there might be a good case for why the moments of experience are discrete entities.
00:46:51
Speaker
There's an analogy here with topological segmentation. Hopefully I'm not going to lose too many listeners here, but we can make an analogy with, for example, a balloon: if you start out imagining that you are the surface of the balloon, and then you take the balloon by two ends and you twist them in opposite directions, eventually at the middle point you get what's called a pinch point. Basically the balloon kind of collapses in the center and you end up having these two smooth surfaces connected by a pinch point.
00:47:20
Speaker
Each of those twists creates a new topological segment, or in a sense is like segmenting out the balloon.
00:47:27
Speaker
you could basically interpret things such as molecules as new topological segmentations of what are fundamentally the quantum fields implementing them. Usually the segmentations may look like an electron or a proton, but if you assemble them together just right, you can get them to essentially melt with each other and become one topologically continuous unit.
00:47:52
Speaker
The nice thing about this account is that you get everything that you want. You explain, on the one hand, why identity would actually have causal implications, via this weak downward causation effect, at the same time as being able to explain how it is possible that the universe can break down into many different entities.
00:48:12
Speaker
Well, the answer is that the way in which it's breaking down is through topological segmentations. You end up having these kinds of self-contained regions of the wave function that are discommunicated from the rest of it. And each of those might be a different subject of experience.
00:48:28
Speaker
It's very much an open question, the intrinsic nature of the physical. Commonly, materialism and physicalism are equated. But the point of view that Andres and I take seriously, non-materialist physicalism, is actually a form of idealism.
00:48:46
Speaker
Recently, philosopher Philip Goff, of Galileo's Error, who used to be a skeptical critic of non-materialist physicalism because of the binding problem, has published a book defending it. It's very much an open question. I mean, we're making some background assumptions here.
00:49:04
Speaker
A critical background assumption is physicalism: that quantum mechanics is complete, that there's no element of reality that is missing from the equations, or possibly the fundamental equation, of physics. But physics itself seems to be silent on the intrinsic nature of the physical.
00:49:24
Speaker
I mean, intuitively, what is the nature of a quantum field? Intuitively, it's a field of insentience. But this isn't a scientific discovery, it's a very strong philosophical intuition. And if you couple this with the fact that the only part of the world to which one has
00:49:44
Speaker
direct access, i.e. one's own conscious mind, though this is controversial, is conscious, sentient, then the non-materialist physicalist will conjecture that we are typical in one sense: that the fields of your central nervous system aren't ontologically different from the rest of the world. And what makes sentient beings special is the way the fields are organized into subjects of experience, egocentric world simulations.
00:50:15
Speaker
I'm personally fairly confident that each of us individually is running an egocentric world simulation, that direct realism is false. I'm not at all confident, though I certainly explore it, that experience is the intrinsic nature of the physical, the stuff of the world.
00:50:32
Speaker
But this is a tradition that goes back via Russell ultimately to Schopenhauer, Schopenhauer essentially turning Kant on his head. Kant famously said that all we will ever know is phenomenology, appearances. We will never, never know the intrinsic, noumenal nature of the world. But Schopenhauer argued that essentially we do actually know one tiny piece of the noumenal essence of the world, the essence of the physical.
00:51:00
Speaker
And it's experiential. So yes, tentatively at any rate, Andres and I would defend non-materialist or idealistic physicalism.
00:51:11
Speaker
The actual term non-materialist physicalism is due to the late Grover Maxwell. Sorry, could you just define that real quick? I think we haven't. Physicalism is the idea that no element of reality is missing from the equations of physics, presumably some relativistic generalization of the Schrodinger equation. It's a kind of naturalism, too.
00:51:33
Speaker
Oh yes, it is naturalism. There are some forms of idealism and panpsychism that are non-naturalistic, but this is uncompromisingly monist. Non-materialist physicalism isn't claiming that primitive experience is attached in some way to fundamental physical properties. The idea is that the actual intrinsic nature, the essence, of the physical is experiential.
00:51:57
Speaker
Stephen Hawking, for instance, was a wave function monist, a doctrinaire materialist, but he famously said that we have no idea what breathes fire into the equations and makes there a universe for us to describe. Now, intuitively, of course, one assumes that the fire in the equations, the essence of the world, is non-experiential. But if so,
00:52:21
Speaker
we have the binding problem, we have the problem of causal efficacy, a great mess of problems. But if, and it's obviously a huge if, the actual intrinsic nature of the physical is experiential, then we have a theory of reality that is empirically adequate, that has tremendous explanatory and predictive power.

Consciousness and Physical Reality

00:52:44
Speaker
mind-bogglingly implausible, at least to those of us steeped in the conceptual framework of materialism. But yes, by transposing the entire mathematical apparatus of modern physics, quantum field theory, or its generalization onto an idealist ontology, one actually has a complete account of reality that explains the technological successes of science, its predictive power,
00:53:11
Speaker
And doesn't give rise to such insoluble mysteries as the hard problem.
00:53:16
Speaker
I think all this is very clarifying, that there are also background metaphysical views, which people may or may not disagree upon, which are also important for identity. I also want to be careful to define some terms in case some listeners don't know what they mean. I think you hit on four different things, which all had to do with consciousness. The hard problem is why, for different kinds of computation, it's something to be that computation, or why there is consciousness correlated or associated with that experience.
00:53:43
Speaker
Then you also said the binding problem. Is the binding problem about why there is a unitary experience, which you said was cross-modally matched earlier?
00:53:52
Speaker
If one takes the standard view from neuroscience that your brain consists of 86 billion odd discrete, decohered, membrane-bound cells, then phenomenal binding, whether local or global, ought to be impossible. This is the binding problem, this partial structural mismatch: neuroscience apparently, if it scans your brain when you're seeing a particular perceptual object,
00:54:19
Speaker
can pick out distributed feature processors, edge detectors, motion detectors, color mediating neurons, and yet there isn't the perfect structural match that must exist if physicalism is true. And David Chalmers from this partial structural mismatch goes on to argue that
00:54:38
Speaker
dualism must be true. Though I agree with David Chalmers that, yes, phenomenal binding is classically impossible, if one takes the intrinsic nature argument seriously, then phenomenal unity comes built in. The intrinsic nature argument, recall, is that experience or consciousness discloses the intrinsic nature of the physical.
00:55:01
Speaker
Now, one of the reasons why this is so desperately implausible is that it makes the fundamental psychon of consciousness ludicrously small. But there's a neglected corollary of non-materialist physicalism: if experience discloses the intrinsic nature of the physical,
00:55:18
Speaker
essentially, experience must be temporally incredibly fine-grained. And if you probe your nervous system at a temporal resolution of femtoseconds or even attoseconds, what would one find? It's my guess that it would be possible to recover a perfect
00:55:38
Speaker
structural match between what you are experiencing now in your phenomenal world simulation and the underlying physics, in which superpositions, or cat states, are individual entities. Now, if the effective lifetime of neuronal superpositions in the CNS were milliseconds, they would be the obvious candidate for a perfect structural match and would explain the phenomenal unity of consciousness.
00:56:05
Speaker
But physicists, not least Max Tegmark, have done the maths. The effective lifetime of neuronal superpositions in the CNS, assuming unitary-only dynamics, is femtoseconds or less, which is intuitively the reductio ad absurdum of any kind of quantum mind.
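As a rough back-of-the-envelope comparison, using only the order-of-magnitude figures mentioned here (the specific exponents are illustrative assumptions rather than quoted values):

\[
\tau_{\mathrm{dec}} \lesssim 10^{-15}\ \mathrm{s}\ \text{(femtoseconds or less)}, \qquad
\tau_{\mathrm{neural}} \sim 10^{-3}\ \mathrm{s}\ \text{(millisecond neuronal dynamics)},
\]
\[
\frac{\tau_{\mathrm{neural}}}{\tau_{\mathrm{dec}}} \gtrsim 10^{12},
\]

so any neuronal superposition would decohere roughly twelve orders of magnitude faster than the timescale on which neurons actually compute, which is why the estimate is usually read as a reductio of quantum-mind proposals.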
00:56:23
Speaker
But one person's reductio ad absurdum is another person's falsifiable prediction. And I'm guessing, and I'm sounding like a believer, I'm not, but I am guessing that when there is sufficiently sensitive molecular matter-wave interferometry, perhaps using trained-up mini-brains, the non-classical interference signature will disclose a perfect structural match between what you're experiencing right now in your unified world simulation and the underlying physics.
00:56:52
Speaker
So we hit on the hard problem and also the binding problem. There were two other ones that you threw out earlier that I forgot. The problem of causal efficacy: how is it that you and I can discuss consciousness? How is it that the raw feels of consciousness have not merely the causal but also the functional efficacy to inspire discussions of their existence? And then what was the last one?
00:57:20
Speaker
What's been called the palette problem, p-a-l-e-t-t-e, as in the fact that there is a tremendous diversity of different kinds of experience, and yet the fundamental fields recognized by physics are, at least on the normal telling, extremely simple and homogeneous.
00:57:41
Speaker
What explains this extraordinarily rich palette of conscious experience? Physics exhaustively describes the structural-relational properties of the world. What physics doesn't do is deal in essences, in intrinsic natures. Now, it's an extremely plausible assumption that the world's fundamental fields are non-experiential, devoid of any subjective properties, and this may well be the case.
00:58:09
Speaker
But if so, we have the hard problem, the problem of causal efficacy, the binding problem, a whole raft of problems.
00:58:18
Speaker
Okay, so this all serves the purpose of codifying that there are these questions up in the air about these metaphysical views, which inform identity. We got here because we were talking about mereological nihilism, and Andres said that one view that you guys have is that you can divide or cut up or partition consciousness into individual momentary, unitary moments of experience that you claim are ontologically simple. What is your credence on this view?
00:58:48
Speaker
Phenomenological evidence. When you experience your visual field, you don't only experience one point at a time. The contents of your experience are not ones and zeros; it isn't the case that you experience one and then zero and then one again. Rather, you experience many different qualia varieties simultaneously: visual experience and auditory experience and so on. All of that gets presented to you.
00:59:14
Speaker
I take that very seriously. I mean, some other researchers may fundamentally say that that's an illusion, that there's actually never a unified experience. But that view has many more problems than actually taking the unity of consciousness seriously.
00:59:30
Speaker
There are a number of distinct questions here. Are each of us egocentric phenomenal world simulations? A lot of people are implicitly perceptual direct realists, even though they might disavow the label: implicitly they assume that they have some kind of direct access to physical properties, and they will associate experience with some kind of stream of thoughts and feelings behind their forehead.
00:59:56
Speaker
But then there is the question: what is the actual fundamental nature of the world beyond your phenomenal world simulation? Is it experiential or non-experiential? I'm agnostic about that, even though I argue for non-materialist physicalism. So I guess I'm just trying to get a better answer here on how it is that we navigate these views of identity towards truth.
01:00:21
Speaker
An example I thought of, a very big contrast between what you may intuitively imagine is going on versus what's actually happening: if you're very afraid of snakes, for example, and you look at a snake, you feel, oh my gosh, it's intruding into my world and I should get away from it. And you have this representation of it as a very big other. Anything that is very threatening, oftentimes you represent it as an other.
01:00:48
Speaker
Crazily, that's actually just yourself to a large extent, because it's still part of your experience. Within your moment of experience, the whole phenomenal quality of looking at a snake and thinking that it's an other is entirely contained within you. In that sense, these ways of ascribing identity and continuity to the things around us, or a self-other division,
01:01:10
Speaker
are almost kind of psychotic. They start out by assuming that you can segment out a piece of your experience and call it something that belongs to somebody else, even though clearly it is still just part of your own experience. It's you. But the background here is also that you're calling your experience your own experience, which is maybe also a kind of psychosis. Is that the word you used? Yeah, that's right.
01:01:33
Speaker
Maybe the scientific thing is that there's just snake experience, and it's neither yours nor not yours, and there's what we conventionally call a snake. That said, there are ways in which I think you can use experience to gain insight about other experiences.
01:01:48
Speaker
If you're looking at a picture that has two blue dots, I think you can accurately say by paying attention to one of those blue dots, the phenomenal property of my sensation of blue is also in that other part of my visual field. And this is a case where, in a sense, you can, I think, meaningfully refer to some aspect of your experience by pointing at another aspect of your experience.
01:02:12
Speaker
It's still maybe in some sense kind of crazy, but it's still closer to truth than many other things that we think of or imagine. Honest and true statements about the nature of other people's experiences, I think, are very much achievable. Bridging the reference gap, I think, might be possible. And you can probably aim for a true sense of identity, harmonizing the phenomenal and the ontological senses of identity.
01:02:40
Speaker
I mean, I think that part of the motivation, for example, in Buddhism is that you need to always be understanding yourself in reality as it is, or else you will suffer, and that it is through understanding how things are that you'll stop suffering. I like this point that you said about unifying the phenomenal identity and phenomenal self.
01:02:58
Speaker
with what is ontologically true. But that also seems not intrinsically necessary, because there's also this other point here where you can maybe function with, or have the epistemology of, any arbitrary identity view but not identify with it. You don't take it as your ultimate understanding of the nature of the world, or of what it means to be this limited pattern in a giant system.
01:03:22
Speaker
I mean, generally speaking, that's obviously pretty good advice. It does seem to be something that's kind of constrained to the workings of the human mind as it is currently implemented. Definitely all this Buddhist advice, don't identify with it, don't get attached to it, ultimately cashes out in experiencing less craving, for example, or feeling less despair in some cases.
01:03:47
Speaker
Useful advice, but not universally applicable. For many people, their problem might indeed be something like desire, craving, attachment, in which case these Buddhist practices will actually be very helpful. But if your problem is something like melancholic depression, then lack of desire doesn't actually seem very appealing; that is the default state, and it's not a good one. I'd be mindful of universalizing this advice.
01:04:14
Speaker
Yes, other things being equal, the happiest people tend to have the most desires. Of course, tremendous desire can also bring tremendous suffering, but there are a very large number of people in the world who are essentially unmotivated. Nothing really excites them. In some cases, they're just waiting to die: melancholic depression.

Desire, Happiness, and Conflict

01:04:34
Speaker
Desire can be harnessed. The big problem, of course, is that in a Darwinian world many of our desires are mutually inconsistent. To use what
01:04:43
Speaker
to me at least would be a trivial example, though not to everyone: if you have 50 different football teams with all their supporters, there is simply logically no way that the preferences of these fanatical football supporters can be reconciled. But nonetheless, by raising hedonic set points, one can allow football supporters to enjoy information-sensitive gradients of bliss, even though there is simply no way to reconcile their preferences.
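A toy numerical illustration of that point. All the numbers and the simple additive well-being model below are invented for the example; nothing here is drawn from the conversation beyond the idea that the win itself is zero-sum while hedonic set points are not.

```python
# Invented numbers: 50 fanatical fan bases, exactly one team can win.
# Satisfying the "my team wins" preference is zero-sum, but raising the
# hedonic set point lifts everyone's well-being regardless of who wins.

N_TEAMS = 50
FANS_PER_TEAM = 1_000

def total_wellbeing(set_point, win_bonus=2.0):
    winners = FANS_PER_TEAM * (set_point + win_bonus)    # one fan base gets its wish
    losers = (N_TEAMS - 1) * FANS_PER_TEAM * set_point   # the other 49 do not
    return winners + losers

print(total_wellbeing(set_point=0.0))   # 2000.0   -- only the winners are up
print(total_wellbeing(set_point=5.0))   # 252000.0 -- everyone is lifted
```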
01:05:14
Speaker
There's part of me that does want to do some universalization here, and maybe that is wrong or unskillful to do. But I seem to be able to imagine a future where, say, we get aligned superintelligence and there's some kind of rapid expansion, some kind of optimization bubble. And maybe there are the worker AIs and then there are the exploiter AIs, and the exploiter AIs just get blissed out. And imagine if some of the exploiter AIs are egomaniacs
01:05:44
Speaker
in their hedonistic simulations and some of them are hive minds, and they all have different views on open individualism or closed individualism. Some of the views on identity just seem more deluded to me than others. I seem to have a problem with the self-identification and reification of self as something.
01:06:04
Speaker
It seems to me to take something that is conventional and make it an ultimate truth, which is confusing to the agent. And that to me seems bad or wrong; your world model is wrong. Part of me wants to say it is always better to know the truth, but I'm having a hard time being able to say how to navigate views of identity in a true way. And then another part of me feels like actually it doesn't really matter, except insofar as it affects the flavor of that consciousness.
01:06:35
Speaker
If we find the chemical or genetic levers for different notions of identity, we could presumably imagine a lot of different ecosystems of approaches to identity in the future, some of them perhaps being much more adaptive than others.
01:06:49
Speaker
I do think I grasp a little bit of the intuition pump, and I think that's actually something that resonates quite a bit with us, which is that it is an instrumental value for sure to always be truth-seeking, especially when you're talking about general intelligence. It's very weird, and it sounds like it's going to fail, if you say, hey, I'm going to be truth-seeking in every domain except
01:07:12
Speaker
here, where here might be identity or your value function or your model of physics or something like that. Perhaps actual superintelligence in some sense really entails having an open-ended model for everything, including, ultimately, who you are. If you don't have those open-ended models that can be revised with further evidence and reasoning, you are not a superintelligence.
01:07:36
Speaker
That intuition pump may suggest that if intelligence turns out to be extremely adaptive and powerful, then presumably the super intelligences of the future will have true models of what's actually going on in the world, not just convenient fictions.
01:07:54
Speaker
Yes. In some sense, I think I would hope our long-term goal is ignorance of the entire Darwinian era and its horrors, but it would be extremely dangerous if we were to give up prematurely. We need to understand reality and the theoretical upper bounds of rational moral agency in the cosmos.
01:08:17
Speaker
Once we have done literally everything that it is possible to do to minimize and prevent suffering, I think in some sense we want to forget about it altogether. But I would stress the risks of premature defeatism.
01:08:34
Speaker
Of course, we're always going to need a self model, a model of the cognitive architecture in which the self model is embedded. It needs to understand the directly adjacent computations which are integrated into it. But it seems like the views of identity go beyond just this self model. Is that the solution to identity? What do open, closed, or empty individualism have to say about something like that?
01:09:00
Speaker
Open, empty, and closed individualism are ontological claims. Yeah, I mean, they are separable from the functional uses of a self model. It does, however, have bearing on the decision-theoretic rationality of an intelligence, because when it comes to planning ahead,
01:09:17
Speaker
if you have the intent, let's say the objective, of being as happy as you can, and somebody offers you a cloning machine and says, hey, you can trade one year of your life for a completely new copy of yourself, do you press the button to make that happen? For making that decision, you actually do require an ontological notion of identity, unless you just care about replication.
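A minimal sketch of that decision calculus. The linear utility model and the numbers are hypothetical, introduced only to make the contrast between the views concrete; they are not drawn from the conversation.

```python
# Hypothetical illustration: how the cloning-machine trade might be scored
# under different ontological views of identity. The utility model and the
# numbers are illustrative assumptions, not anything stated by the speakers.

def value_of_trade(years_remaining, wellbeing_per_year=1.0):
    """Trade on offer: give up one year of your own life, gain one full copy."""
    your_loss = 1 * wellbeing_per_year                # the year you give up
    copy_gain = years_remaining * wellbeing_per_year  # the copy's whole life

    return {
        # Closed individualism: only the original stream of experience is "you",
        # so the copy's well-being doesn't enter the calculation.
        "closed": -your_loss,
        # Empty individualism: every experience-moment counts on its own,
        # so the copy's moments weigh exactly as much as yours.
        "empty": copy_gain - your_loss,
        # Open individualism: all experience is "you"; the total here coincides
        # with the empty-individualist sum, though the views differ elsewhere.
        "open": copy_gain - your_loss,
    }

print(value_of_trade(years_remaining=40))
# {'closed': -1.0, 'empty': 39.0, 'open': 39.0}
```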
01:09:43
Speaker
So I think the problem there is that identity, at least in us apes, is caught up in ethics. If you could have an agent where identity was not factored into ethics, then I think it would make a better decision.
01:09:57
Speaker
It's definitely a question too, whether you can bootstrap an impartial God's-eye view on the well-being of all sentient beings without first having developed a sense of your own identity, then wanting to preserve it, and finally updating it with more information, you know, philosophy, reasoning, physics.
01:10:15
Speaker
I do wonder if you can start out without caring about identity and finally conclude with kind of an impartial God's-eye view. I think in practice a lot of those transitions do happen because a person is first concerned with themselves and then they update the model of who they are based on more evidence. I could be wrong. It might be possible to completely sidestep Darwinian identities and jump straight into impartial care for all sentient beings. I don't know.
01:10:44
Speaker
So we're getting into the ethics of identity here and like why it matters. The question for this portion of the discussion is what are the ethical implications of different views on identity? So Andres, I think you can sort of kick this conversation off by talking a little bit about the game theory.
01:11:02
Speaker
Right. Well, yeah, the game theory is surprisingly complicated. Just consider within a given person, in fact, the different quote unquote sub-agents of an individual. Let's say you're drinking with your friends on a Friday evening, but you know you have to wake up early at 8 AM for whatever reason, and you're deciding whether to have another drink or not. Your intoxicated self says, yes, of course.
01:11:27
Speaker
Tonight is all that matters, you know, whereas your cautious self might try to persuade you that no, you will also exist tomorrow in the morning. Within a given person, there's all kinds of complex game theory that happens between alternative views of identity, even implicitly.
01:11:45
Speaker
It becomes obviously much more tricky when you expand it outwards, how like some social movements in a sense are trying to hack people's view of identity, whether the unit is your political party or the country or the whole ecosystem or whatever it may be.
01:12:00
Speaker
A key thing to consider here is the existence of legible Schelling points, also called focal points: in the absence of communication between entities, what are some guiding principles that they can use in order to effectively coordinate and move towards a certain goal? I would say that having something like open individualism can itself be a powerful Schelling point for coordination, especially because if you can be convinced that somebody
01:12:30
Speaker
is an open individualist, you have reasons to trust them. There's all of this research on how high-trust social environments are so much more conducive to productivity and long-term sustainability than low-trust environments, and expansive notions of identity are very trust-building. On the other hand, from a game-theoretical point of view, you also have the problem of defection.
01:12:54
Speaker
Within an open individualist society, you could have a small group of people who fake the test of open individualism. They can take over from within and instantiate some kind of dictatorship, some type of closed individualist takeover of what was a really good society, good for everybody.
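A toy illustration of that defection worry. The payoffs and the "identity expansion" weight w are invented for the example; it is just one simple way to make the game-theoretic point concrete.

```python
# Invented payoffs: a one-shot trust game where an agent weights the other
# party's payoff by an "identity expansion" factor w (0 = fully closed
# individualist, 1 = fully open individualist). A defector can *claim* w = 1
# while acting on w = 0, which is the faking problem raised here.

def subjective_payoff(own, other, w):
    return own + w * other

# Raw payoffs (your_move, their_move) -> (your_payoff, their_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_move(w, their_move):
    return max(("cooperate", "defect"),
               key=lambda m: subjective_payoff(*PAYOFFS[(m, their_move)], w))

# A genuine open individualist (w = 1) cooperates with a cooperator...
print(best_move(w=1.0, their_move="cooperate"))   # cooperate
# ...while a covert closed individualist (w = 0) exploits that trust.
print(best_move(w=0.0, their_move="cooperate"))   # defect
```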
01:13:16
Speaker
This is a serious problem, even when it comes to, for example, forming groups of people all of whom share a certain experience, for example MDMA or 5-MeO-DMT or, let's say, deep stages of meditation. Even then, you've got to be careful, because people who are resistant to those states may pretend that they have an expanded notion of identity but actually covertly work towards a much more reduced sense of identity.
01:13:46
Speaker
I've yet to see a credible, game-theoretically aware solution to how to make this work. If you could clarify the knobs in a person, whether it be altruism or selfishness or other things, that the different views on identity turn, and how that affects the game theory, then I think that would be helpful.
01:14:08
Speaker
I mean, I think the biggest knob is fundamentally which experiences count, from the point of view of what you expect to, in a sense, be there, or expect to be real in as real a way as your current experience is. It's also contingent on theories of consciousness, because you could be an open individualist and still believe that higher-order cognition is necessary for consciousness and that non-human animals are not conscious.
01:14:37
Speaker
That gives rise to all sorts of other problems. The person presumably is altruistic and cares about others, but they still don't include non-human animals, for a completely different reason in that case. Definitely another knob is how you consider what you will be in the future, whether you consider that to be part of the universe or the entirety of the universe.
01:15:00
Speaker
I guess I used to think that personal identity was very tied to a hedonic tone. I think of them as much more dissociated now. There is a general pattern. People who are very low mood may have kind of a bias towards empty individualism. People who become open individualists often experience a huge surge in positive feelings for a while because they feel that they're never going to die. The fear of death greatly diminishes.
01:15:30
Speaker
But I don't actually think it's a surefire or foolproof way of increasing well-being, because if you take open individualism seriously, it also comes with terrible implications, like that, hey, we are also the pigs in factory farms. It's not a very pleasant view. Yeah, I take that seriously.
01:15:47
Speaker
I used to believe for a while the best thing we could possibly do in the world was to just write a lot of essays and books about why open individualism is true. And now I think it's important to combine it with consciousness technologies so that, hey, once we do want to upgrade our sense of identity to a greater circle of compassion, that we also have the enhanced happiness and mental stability to be able to actually engage with that without going crazy.
01:16:15
Speaker
And this has me thinking about one point that I think is very motivating for me for the ethical case of veganism. Take the common sense, normal consciousness like most people have and that I have. You just feel like a self that's having an experience. You just feel like you are fortunate enough to be born as you and to be having the Andres experience or the Lucas experience.
01:16:37
Speaker
and that your life runs from birth to death or whatever, and when you die, you will be annihilated and no longer have experience. Then who is it that is experiencing the cow consciousness? Who is it that is experiencing the chicken and the pig consciousness? There are so many instantiations of that, billions. Even if this is based on an irrational view, it still feels motivating to me: yeah, I could just die and wake up as a cow 10 billion times. That's kind of the experience that is going on right now,
01:17:04
Speaker
a sudden confused awakening into cow consciousness plus factory farming conditions. I'm not sure if you find that completely irrational or motivating or what. No, I mean, I think it makes sense. We have a common friend as well, Magnus Vinding. He wrote a pro-veganism book with essentially this line of reasoning. It's called You Are Them, about how a post-theoretical science of consciousness and identity is itself a strong case for an ethical lifestyle.
01:17:32
Speaker
So just touching here on the ethical implications, some other points I want to add are about when one is identified with one's phenomenal identity. In particular, I want to talk about the experience of self where you feel like a closed individualist, where your life is from when you were born up until when you die, and that's you.
01:17:51
Speaker
I think that that breeds a very strong duality in terms of your relationship to your own personal phenomenal consciousness. The suffering and joy which you have direct access to are categorized as mine or not mine, and those which are mine take high moral and ethical priority over the suffering of others.
01:18:11
Speaker
You're not mind-melded with all of the other brains, right? So there's an epistemological limitation there, in that you're not directly experiencing the suffering of other people. But the closed individualist view goes a step further: it isn't just saying that there's an epistemological limitation, it's also saying that this consciousness is mine and that consciousness is yours. This is the distinction between self and other, and given selfishness, self-consciousness will take moral priority over other-consciousness. That, I think, obviously has massive ethical implications with regards to the greed of people.
01:18:41
Speaker
I view the ethical implications here as important because, at least in the way that human beings function, if one is able to fully rid oneself of the ultimate identification with one's personal consciousness as the content of self, then one can move beyond the duality of self and other and care about all instances of well-being and suffering much more equally than I currently do.
01:19:04
Speaker
That to me seems harder to do, at least with human brains, if we have a strong reification of and identification with our instances of suffering or well-being as our own.
01:19:16
Speaker
Part of the problem is that the existence of other subjects of experience is metaphysical speculation. It's metaphysical speculation that one should take extremely seriously. I'm not a solipsist. I believe that other subjects of experience, human and non-human, are as real as mine. But nonetheless, it is still speculative and theoretical. One cannot
01:19:38
Speaker
feel their experiences. There is simply no way, constituted as we are, that one can behave with impartial, godlike benevolence.
01:19:52
Speaker
I guess I would question a little bit the idea that we only care about future suffering within our own experience because this is me, this is mine, that is other. In a sense, I think we care about those more largely because they're more intense. You do see examples of, for instance, mirror-touch synesthesia:
01:20:09
Speaker
people who, if they see somebody else get hurt, they also experience pain. And I don't mean a fleeting sense of discomfort, but like perhaps even actual, you know, strong pain, because they're able to kind of reflect that for whatever reason.
01:20:25
Speaker
People like that are generally very motivated to help others as well. And in a sense, their implicit self model includes others, or at least weights others more than most people do. I mean, in some sense, you can perhaps make sense of selfishness in this context as the coincidence that what is within our self model is experienced as more intense.
01:20:45
Speaker
But there's plenty of counter examples to that, including sense of depersonalization or ego death, where you could experience the feeling of God, for example, as being this eternal and impersonal force that is infinitely more intense than you, and therefore it matters more, even though you don't experience it as you. Perhaps the core issue is what gets the highest amount of intensity within your world simulation.
01:21:11
Speaker
Okay, so I also just want to touch a little bit on preferences here before we move on to how this is relevant to AI alignment and the creation of beneficial AI. From the moral realist perspective, if you take the metaphysical existence of consciousness seriously and you view it as the ground of morality, then different views on identity will shift how you weight the preferences of other creatures.
01:21:35
Speaker
And so from a moral perspective, whatever kinds of views of identity end up broadening your moral circle of compassion, closer and closer to the end goal of impartial benevolence for all sentient beings according to their degree and kinds of worth, I would view as a good thing.
01:21:55
Speaker
But now there's this other way to think about identity, because if you're listening to this and you're a moral anti-realist, there is just the arbitrary evolutionary and historical set of preferences that exist across all creatures on the planet. Then the views on identity, I think, are also obviously, again, going to weigh into your moral considerations about how much to just respect different preferences, right?
01:22:18
Speaker
One might want to go beyond hedonic consequentialism here and just be a preference consequentialist, or a deontological ethicist or a virtue ethicist. We could also consider how different views on identity, as lived experiences, would affect what it means to become virtuous, if being virtuous means moving beyond the self.
01:22:42
Speaker
I think I understand what you're getting at. I mean, really, there's kind of two components to ontology. One is what exists, and then the other one is what is valuable. You can arrive at something like open individualism just from the point of view of what exists, but still have disagreements with other open individualists about what is valuable. Alternatively, you could agree on what is valuable with somebody, but completely disagree on what exists.
01:23:10
Speaker
To get the power of cooperation from open individualism as a Schelling point, there also needs to be some level of agreement on what is valuable, not just what exists.
01:23:22
Speaker
It definitely sounds arrogant, but I do think that by the same principle by which you arrive at open individualism or empty individualism, basically non-standard views of identities, you can also arrive at hedonistic utilitarianism. And that is, again, like the principle of really caring about knowing who or what you are fundamentally.
01:23:43
Speaker
to know yourself more deeply also entails understanding from second to second how your preferences impact your state of consciousness.
01:23:53
Speaker
It is my view that just as open individualism can be thought of as the implication of taking a very systematic approach to making sense of identity, likewise philosophical hedonism is an implication of taking a very systematic approach to figuring out what is valuable. How do we know that pleasure is good?
01:24:15
Speaker
Yeah, and does the pleasure-pain axis disclose the world's intrinsic metric of value and disvalue? There is something completely coercive about pleasure and pain; one can't transcend the pleasure-pain axis. Whether it's taking heroin or undergoing enhanced interrogation, there is no one with an inverted pleasure-pain axis.
01:24:40
Speaker
Supposed counterexamples, like sadomasochists, in fact just validate the primacy of the pleasure-pain axis.
01:24:47
Speaker
What follows from the primacy of the pleasure-pain axis? Ought we to be aiming, as classical utilitarians urge, to maximize the abundance of positive subjective value in the universe, or at least our forward light cone? Because if we are classical utilitarians, there is this latently apocalyptic
01:25:10
Speaker
implication of classical utilitarianism: that we ought to be aiming to launch something like a utilitronium or hedonium shockwave, where utilitronium or hedonium is matter and energy optimized for pure bliss. And so, rather than any kind of notion of personal identity as we currently understand it, if one is a classical utilitarian,
01:25:35
Speaker
or if one is programming a computer or a robot with the utility function of classical utilitarianism, should one essentially therefore be aiming to launch an apocalyptic utilitronium shockwave?
01:25:50
Speaker
Or alternatively, should one accept that the abundance of positive value within our cosmological horizon will be suboptimal by utilitarian criteria? I don't personally advocate a utilitronium shockwave. I don't think it's sociologically realistic. Much more sociologically realistic, I think, is to aim for a world based on gradients of intelligent bliss,
01:26:18
Speaker
because that way people's existing values and preferences can for the most part be conserved. But nonetheless, if one is a classical utilitarian, it's not clear what licenses this kind of messy compromise.
01:26:34
Speaker
All right, so now that we're getting into the juicy, hedonistic-imperative-type stuff, let's talk here about how this is relevant to AI alignment and the creation of beneficial AI.

Identity in AI Alignment and Human Values

01:26:45
Speaker
I think that this is clear based off of the conversations we've had already about the ethical implications and just how prevalent identity is in our world for the functioning of society and sociology, just civilization in general.
01:26:58
Speaker
So let's limit the conversation for the moment just to AI alignment. And for this initial discussion of AI alignment, I want to limit it to the definition of AI alignment as developing the technical process by which AIs can learn human preferences and help further express and idealize humanity. Exploring how identity is important and meaningful for that process, there are two points I think it's relevant for.
01:27:26
Speaker
Who are we making the AI for? Different views on identity, I think, would matter because if we assume that sufficiently powerful and integrated AI systems are likely to have consciousness or to have qualia, they're moral agents in themselves. So who are we making the AI for?
01:27:42
Speaker
We're making new patients or subjects of morality, if we ground morality on consciousness. So from a purely egoistic point of view, the alignment process is just for humans; it's just to get the AI to serve us.
01:27:57
Speaker
But if we care about all sentient beings impartially, and we just want to maximize conscious bliss in the world, and we don't have these dualistic distinction of consciousness being self or other, we could make the AI alignment process something that is more purely altruistic, that we recognize that we're creating something that is fundamentally more morally relevant than we are, given that it may have more profound capacities for experience or not.
01:28:23
Speaker
And David, I'm also holding in mind, I know that you're skeptical of the ability of AGI or superintelligence to be conscious. I agree that that's not solved yet, but I'm just working here with the idea: okay, maybe if they are. So I think it can change the altruism versus selfishness motivations around who we're training the AIs for. And then the second part is: why are we making the AI?
01:28:44
Speaker
Are we making it for ourselves or are we making it for the world? If we take a view from nowhere, what Andres called a God's eye view, is this ultimately something that is for humanity or is it something ultimately for just making a better world?
01:28:58
Speaker
Personally, I feel that if the end goal is ultimate loving kindness and impartial ethical commitment to the wellbeing of all sentient creatures in all directions, then ideally the process is something that we're doing like for the world and that we recognize the intrinsic moral worth of the AGI and super intelligence as ultimately more morally relevant descendants of ours. So I wonder if you guys have any reactions to this. Yeah, definitely. So many.
01:29:27
Speaker
Tongue in cheek, but you just made me chuckle when you said, why are we making the AI to begin with? I think there's a case to be made that the actual reason why we're making AI is as a kind of an impressive display of fitness in order to signal our intellectual fortitude and superiority.
01:29:44
Speaker
I mean, sociologically speaking, actually getting an AI to do something really well is a way in which you can signal your own intelligence. And I guess I worry to some extent that this is a bit of a tragedy of the commons, as is the case with weapons development: you're so concerned with whether you can, especially because of the social incentives that you're going to gain status and be looked at as somebody who's really competent and smart, that you don't really stop and wonder whether you should be building this in the first place.
01:30:14
Speaker
Leaving that aside just from a purely ethically motivated point of view, I do remember thinking and having a lot of discussions many years ago about if we can make a supercomputer experience what it is like for a human to be on MDMA, then all of a sudden that supercomputer becomes a moral patient. It actually matters. You probably shouldn't turn it off. Maybe in fact, you should make more of that.
01:30:38
Speaker
A very important thing I'd like to say here is I think it's really important to distinguish the notion of intelligence on the one hand as causal power over your environment and on the other hand as the capacity for self-insight and introspection and understanding reality.
01:30:56
Speaker
And I would say that we tend to confuse these quite a bit. I mean, especially in circles that don't take consciousness very seriously, it's usually implicitly assumed that having a superhuman ability to control your environment entails that you also have, in a sense, kind of a superhuman sense of self or a superhuman broad sense of intelligence. Whereas even if you are a functionalist, I mean, even if you believe that a digital computer can be conscious,
01:31:25
Speaker
you can make a pretty strong case that even then it is not automatic. It's not just that if you program the appropriate behavior, it will automatically also be conscious. A super straightforward example here is the Chinese room: if it's just a giant lookup table, clearly it's not a subject of experience, even though the input-output mapping might be very persuasive.
01:31:48
Speaker
There's definitely still problems there. And I think if we aim instead towards maximizing intelligence in the broad sense, that does entail also the ability to actually understand the nature and scope of other states of consciousness.
01:32:04
Speaker
And in that sense, I think a superintelligence of that sort would be intrinsically aligned with the intrinsic values of consciousness. But there are just so many ways of making partial superintelligences that are maybe superintelligent in many ways, but not in that one in particular. And I worry about that. I sometimes give this kind of simplistic trichotomy of three conceptions of superintelligence. One is the intelligence explosion,
01:32:31
Speaker
recursively self-improving, software-based AI. Then there is the Kurzweilian scenario, a complete fusion of humans and our machines. And then there is
01:32:43
Speaker
what very crudely one can call biological superintelligence: not just rewriting our genetic source code, but also, as Neuralink is prefiguring, essentially narrow superintelligence on a chip, so that anything that a digital computer can do, a human or a transhuman can do. And so, yes, I see full-spectrum superintelligence as our biological descendants,
01:33:11
Speaker
super-sentient, able to navigate radically alien states of consciousness. And so, is the question you're asking about why we are developing narrow AI, possibly narrow AGI: the purely non-biological, machine attempt at superintelligence?
01:33:32
Speaker
I'm speaking specifically from the AI alignment perspective: how you align current-day systems and future systems, through to superintelligence and beyond, with human values and preferences. And so the question born of that, in the context of these questions of identity, is: who are we making that AI for, and why are we making the AI? If you've got Buddha's "I teach one thing and one thing only: suffering and the end of suffering," Buddha would press the off button.
01:33:56
Speaker
I would press the off button. What's the off button? Sorry, the notional initiation of a vacuum phase transition or something that obliterates Darwinian life. But when people talk about AI alignment, most people working in the field are not talking about a Buddhist ethic. They have something else in mind. And in practical terms, this is not a fruitful line, the kind of Buddhist, Benatarian, negative-utilitarian, suffering-focused ethics.

AI and Buddhist Ethics

01:34:26
Speaker
Essentially, one wants to be ratcheting up hedonic set points in a way that conserves people's existing preferences, even though their existing preferences and values are in many cases in conflict with each other. How one actually implements this in a classical digital computer, or a classical connectionist system, or some kind of hybrid, I don't know precisely.
01:34:52
Speaker
At least one pretty famous cognitive scientist and AI theorist does propose the Buddhist ethic of pressing the off button of the universe: Thomas Metzinger and his benevolent artificial anti-natalism.
01:35:07
Speaker
That's pretty interesting, because he explores the idea of an AI that truly extrapolates human values and what's good for us as subjects of experience, and the AI concludes what we are psychologically unable to: that the ethical choice is non-existence.
01:35:24
Speaker
But yeah, I mean, I think that's, as David pointed out, impossible in practice. I think it's much better to put our efforts into creating a super-cooperator cluster that tries to recalibrate the hedonic set point so that we are animated by creating bliss. Sociological constraints are really, really important here; otherwise you risk being irrelevant. And being irrelevant is one thing; the other is unleashing an ineffective or failed attempt at sterilizing the world, which would be much, much worse.
01:35:55
Speaker
I don't agree with this view, David. Generally, I think that Darwinian history has probably been net negative, but I'm extremely optimistic about how good the future can be. And so I think it's an open question, at the end of time, how much misery and suffering and positive experience there will have been. So I guess I would say I'm agnostic on this question, but if we get AI alignment right and these other things, then I think it can be extremely good. And I just want to tether this back to identity and AI alignment.
01:36:24
Speaker
I do have the strong intuition that if empty individualism is correct at an ontological level, then actually negative utilitarianism can be pretty strongly defended on the grounds that when you have a moment of intense suffering, that's the entirety of that entity's existence. And especially with eternalism, once it happened, there's nothing you can do to prevent it.
01:36:49
Speaker
There's something that seems particularly awful about allowing inherently negative experiences to just exist. That said, I think open individualism actually may, to some extent, weaken that because even if the suffering was very intense, you can still imagine that if you identify with consciousness as a whole, you may be willing to undergo some bad suffering as a trade-off for something much, much better in the future.
01:37:17
Speaker
This sounds completely insane if you're currently experiencing a cluster headache or something astronomically painful. But maybe from the point of view of eternity, it actually makes sense. Those

Quantifying Consciousness in AI

01:37:29
Speaker
are still tiny specks of experience relative to the bliss that is going to exist in the future. You can imagine Jupiter brains and Dyson spheres just in a constant ecstatic state. So I think open individualism might counterbalance some of the
01:37:43
Speaker
negative utilitarian worries, and would be something that an AI would have to contemplate and that might push it one way or the other. So let's go ahead and expand the definition of AI alignment. A broader way to look at the AI alignment problem, or the problem of generating beneficial AI and making future AI go well, is as the project of making sure that the technical, political, social, and moral consequences of AI, from short-term systems through superintelligence and beyond, add up to a beneficial process.
01:38:13
Speaker
Thinking about identity in that process, we were talking about how strong nationalism or strong identity or identification with regards to a nation state is a form of identity construction that people do. The nation or the country becomes part of self. One of the problems of the AI alignment problem is arms racing between countries and so taking shortcuts on safety.
01:38:35
Speaker
I'm not trying to propose clear answers or solutions here. It's unclear how successful an intervention here could even be. But these views on identity and how much nationalism shifts or not I think feed into how difficult or not the problem will be.
01:38:50
Speaker
The point about game theory becomes very, very important in that, yes, you do want to help other people who are also trying to improve the well-being of all consciousness. On the other hand, if there is a way to fake caring about the entirety of consciousness, that is a problem, because then you would be spending resources on people who would hoard them, or even worse, wrest power away from you so that they can focus on their narrow sense of identity.
01:39:17
Speaker
In that sense, I think having technologies to set particular phenomenal experiences of identity, as well as to detect them, might be super important. But above all, and this is definitely my area of research, having a way of objectively quantifying how good or bad a state of consciousness is, based on the activity of a nervous system, seems to me like an extraordinarily key component of any serious AI alignment.
01:39:46
Speaker
If you're actually trying to prevent bad scenarios in the future, you've got to have a principal way of knowing whether the outcome is bad or at the very least knowing whether the outcome is terrible.

AI, Identity, and Immortality

01:39:57
Speaker
The aligned AI should be able to grasp that a certain state of consciousness, even if nobody has experienced it before, will be really bad and it should be avoided. That tends to be the lens through which I see this.
01:40:08
Speaker
In terms of improving people's internal self-consistency, as David pointed out, I think it's kind of pointless to try to satisfy a lot of people's preferences, such as having their favorite sports team win because there's really just no way of satisfying everybody's preferences.
01:40:25
Speaker
In the realm of psychology is where a lot of these interventions would happen. You can't expect an AI to be aligned with you if you yourself are not aligned with yourself, right? If you have all these strange psychotic competing sub-agents. So it seems like part of the process is going to be developing techniques to become more consistent so that we can actually be helped.
01:40:48
Speaker
In terms of risks this century, nationalism has been responsible for most of the wars of the past two centuries, and nationalism is highly likely to lead to catastrophic war this century. And the underlying global catastrophic risk, I don't think, is AI. It's male human primates doing what male human primates do, what they are designed by evolution to do:
01:41:12
Speaker
to fight, to compete, to wage war. And even vegan pacifists like me, how do we spend our leisure time? Playing violent video games. There are technical ways one can envisage mitigating the risk. It's perhaps unduly optimistic aiming for all-female governance, or aiming for a democratically accountable world state under the auspices of the United Nations. But I think unless one actually does have somebody with a monopoly on the use of
01:41:41
Speaker
force, then essentially we are going to have cataclysmic nuclear war this century; it's highly likely. I think we're sleepwalking our way towards disaster. It's more intellectually exciting and interesting to discuss exotic risks from AI that goes foom or something like that, but there are much more mundane catastrophes that I suspect are going to unfold this century.
01:42:08
Speaker
All right, so getting into this other part here about AI alignment and beneficial AI throughout this next century, there's a lot of different things that increased intelligence and capacity and power over the world is going to enable. There's going to be human biological species divergence via AI enabled bioengineering.
01:42:29
Speaker
There is this fundamental desire for immortality in many people, and the drive towards superintelligence and beyond for some people promises immortality. I think that in terms of closed individualism here, closed individualism is extremely motivating for this extreme self-concern and desire for immortality.

Identity Views and Technological Evolution

01:42:48
Speaker
There are people currently today who are investing in, say, like, cryonics.
01:42:52
Speaker
because they want to freeze themselves and make it long enough so that they can somehow become immortal. Very clearly influenced by their ideas of identity. As Yuval Noah Harari was saying on our last podcast, this subverts many of the classic liberal myths that we have about the same intrinsic worth across all people. And if you add humans 2.0 or 3.0 or 4.0 into the mixture, it's going to subvert that even more. So there are important questions of identity there, I think.
01:43:21
Speaker
With sufficiently advanced superintelligence, people flirt with the idea of being uploaded. The identity questions here which are relevant are, if we scan the information architecture, the neural architecture of your brain and upload it, will people feel like that is them? Is it not them? What does it mean to be you? Also, of course, in scenarios where people want to merge with the AI, what is it that you would want to be kept in the merging process?
01:43:46
Speaker
what is superfluous to you, what is non-essential to your identity or to what it means to be you, such that you would be okay or not with the merging. And then, I think most importantly here, I'm very interested in the descendants scenario, where we just view AI as our evolutionary descendants.
01:44:02
Speaker
There's this tendency in humanity to not be okay with this descendants scenario because of closed individualist views on identity. They won't see that consciousness as the same kind of thing, or they won't see it as their own consciousness; they see that well-being through the lens of self and other. So that makes people less interested in there being descendant, superintelligent, conscious AIs. Maybe there's also a bit of speciesism in there.
01:44:26
Speaker
I wonder if you guys have any reactions to identity in any of these processes. Again, they are: human biological species divergence via AI-enabled bioengineering, immortality, uploads, merging, or the descendants scenario.
01:44:41
Speaker
In spite of thinking that Darwinian life is sentient malware, I think cryonics should be opt-out and cryothanasia should be opt-in, as a way of defanging death. And so long as someone is suspended in optimal conditions, it ought to be possible for advanced intelligence to reanimate that person.
01:45:03
Speaker
And sure, if one is an empty individualist, or you're the kind of person who wakes up in the morning troubled that you're not the person who went to sleep last night, this may not really be you. But if you're more normal, yes, I think it should be possible to reanimate you if you are suspended.
01:45:21
Speaker
In terms of mind uploads, this is back to the binding problem. Even assuming that you can be scanned with a moderate degree of fidelity, I don't think your notional digital counterpart is a subject of experience.
01:45:38
Speaker
Even if I am completely wrong here and somehow subjects of experience inexplicably emerge in classical digital computers, there's no guarantee that the qualia would be the same. After all, you can replay a game of chess with perfect fidelity, but there's no guarantee that incidentals like the textures of the pieces will be the same. Why expect the texture of the
01:46:03
Speaker
qualia to be the same? But that isn't really my objection; it's that a digital computer cannot support phenomenally bound subjects of experience. I also think cryonics is really good, even though with a nonstandard view of personal identity it's kind of puzzling why you would care about it. There are lots of practical considerations. I like what's been said about defanging death; I think that's a good idea, but so is giving people skin in the game for the future.
01:46:33
Speaker
You know, people who enact policy and become politically successful often tend to be 50-plus, and there are a lot of things that they weigh in on that they will not actually get to experience. That probably biases politicians and people enacting policy to focus especially on short-term gains, as opposed to genuinely trying to improve the long term. And I think cryonics would be helpful in giving people skin in the game.
01:46:58
Speaker
More broadly speaking, it does seem to be the case that which aspect of transhumanism a person is likely to focus on depends a lot on their theory of identity. I mean, if we break down transhumanism into the three supers of super happiness, super longevity, and super intelligence, the longevity branch is pretty large. There are a lot of people looking for ways of rejuvenating, preventing aging, and reviving ourselves, or even uploading ourselves.
01:47:24
Speaker
Then there are people who are very interested in superintelligence. I think that's probably the most popular type of transhumanism nowadays. That one, I think, does rely to some extent on people having a functionalist, information-theoretic account of their own identity.
01:47:41
Speaker
There are all of these tropes of, hey, if you leave a large enough digital footprint online, a superintelligence will be able to reverse-engineer your brain just from that and maybe reanimate you in the future, or something of that nature. And then there are people like David and I, and the Qualia Research Institute as well, who care primarily about super happiness.
01:48:03
Speaker
We think of it as kind of a requirement for a future that is actually worth living. You can have all the longevity and all the intelligence you want, but if you're not happy, I don't really see the point. A lot of the concerns with longevity, fear of death and so on in retrospect, I think will be probably considered some kind of a neurosis, you know, obviously a genetically adaptive neurosis, but something that can be cured with mood enhancing technologies.
01:48:32
Speaker
Leveraging human selfishness, or leveraging how most people are closed individualists, seems like one way of getting good AI alignment. To some extent, though, I find the immortality pursuits through cryonics to be pretty elitist.

Future AI Scenarios and Identity

01:48:48
Speaker
I think it's a really good point that giving the policymakers and the older generation and people in power more skin in the game over the future is both potentially very good and also very scary. It's very scary to the extent to which they could get absolute power.
01:49:04
Speaker
But also very good if you're able to mitigate the risk of them developing absolute power. And again, as you said, it motivates them towards more deeply and profoundly weighing future considerations, being less myopic, being less selfish, so that getting the AI alignment process right and doing the necessary technical work is not done for short-term nationalistic gain. Again, with the asterisk here that the risk is unilaterally getting more and more power.
01:49:33
Speaker
Yeah, yeah, yeah. Also, even without cryonics, another way to increase skin in the game may be more straightforwardly positive: bliss technologies do that. A lot of people who are depressed or nihilistic or vengeful or misanthropic don't really care about destroying the world or watching it burn, so to speak, because they don't have anything to lose.
01:49:53
Speaker
But if you have a really reliable MDMA-like technological device that reliably produces wonderful states of consciousness, I think people will be much more careful about preserving their own health and also not watch the world burn, because they know I could be back home and actually experiencing this, rather than just trying to satisfy my misanthropic desires. So the happiest people I know work in the field of existential risk.
01:50:20
Speaker
And rather than great happiness making people reckless, it can also make them more inclined to conserve and protect.
01:50:29
Speaker
Awesome. I guess one more thing that I wanted to hit on, among these different ways that technology is going to change society: in my heart, the ideal is the vow to liberate all sentient beings in all directions from suffering. The closed individualist view seems generally fairly antithetical to that, but there's also this desire of mine to be realistic about leveraging human selfishness towards that ethic.
01:50:53
Speaker
The capacity here for conversations on identity going forward: if we can at least give people more information to subvert or challenge the common-sense closed individualist view, or give them information about why it might be wrong, I think it would have a ton of implications for how people end up viewing human species divergence or immortality or uploads or merging or the descendants scenario.
01:51:15
Speaker
In Max's book, Life 3.0, he describes a bunch of different scenarios for how you might want the world to be as the impact of AI grows, if we're lucky enough to reach superintelligent AI. The scenarios that he gives are, for example, an egalitarian utopia, where humans, cyborgs, and uploads coexist peacefully thanks to property abolition and guaranteed income.
01:51:39
Speaker
There's a libertarian utopia, where humans, cyborgs, uploads, and superintelligences coexist peacefully thanks to property rights. There is a protector god scenario, where an essentially omniscient and omnipotent AI maximizes human happiness by intervening only in ways that preserve our feeling of control over our own destiny, and hides well enough that many humans even doubt the AI's existence.
01:52:02
Speaker
There's an enslaved god scenario, which is kind of self-evident: the AI is a slave to our will. And there's the descendants scenario, which I described earlier, where AIs replace human beings but give us a graceful exit, making us view them as our worthy descendants, much as parents feel happy and proud to have a child who's smarter than them, who learns from them and then accomplishes what they could only dream of, even if they can't live to see it.

Identity Education and Societal Evolution

01:52:26
Speaker
So after the book was released, Max did a survey of which ideal societies people were most excited about. Basically, most people wanted either the egalitarian utopia or the libertarian utopia. These are very human-centric, of course, because I think most people are closed individualists, so, okay, they're going to pick that. Then they wanted a protector god next. The fourth most popular was an enslaved god, and the fifth most popular was descendants.
01:52:55
Speaker
I'm a very big fan of the descendant scenario. Maybe it's because of my empty individualism.
01:53:01
Speaker
I just feel here that views on identity are quite uninformed for most people; most people don't really question it, and closed individualism just seems intuitively true from the beginning, because a very strong sense of self seems to have been selected for by Darwinian evolution. I just think that challenging conventional views on identity will very much shift the kinds of worlds that people are okay with, or the kinds of worlds that people want.
01:53:29
Speaker
If we had a big, massive public education campaign about the philosophy of identity and then took the same survey later, I think the numbers would be quite different. That seems like a necessary part of the education of humanity in the process of beneficial AI and AI alignment. To me, the descendants scenario just seems best, because it's more clearly in line with this ethic of being impartially devoted to maximizing the wellbeing of sentience everywhere.
01:53:58
Speaker
I'm curious to know your reactions to these different scenarios, and how you feel views on identity, as they shift, will inform the kinds of worlds that humanity finds beautiful or meaningful or worthy of pursuit through and with AI.

Expanded Consciousness and Future Society

01:54:13
Speaker
Starting with the most obvious point, which is what I focus on: if today's hedonic range is minus 10 to 0 to plus 10, then building a civilization with a hedonic range of plus 70 to plus 100, or plus 90 to plus 100 if one wants more contrast, involves multiple phase changes in consciousness that are just completely inconceivable to humans.
01:54:36
Speaker
But in terms of full-spectrum superintelligence, what we don't know are the radically alien state spaces of consciousness, far more different than, let's say, dreaming consciousness is from waking consciousness, that I suspect we are going to explore. And currently we just do not have the language or the concepts to conceptualize what these alien state spaces are like. I suspect
01:55:02
Speaker
millions, billions of years of exploration lie ahead. I assume that a central element will be the pleasure axis, that they will be generically wonderful, but they will be completely alien. And so talk of identity with primitive Darwinian malware like us is quite fanciful.
01:55:23
Speaker
Consider the following thought experiment: you have a chimpanzee right next to a person, who is right next to another person, where the third one is currently on a high dose of DMT combined with ketamine and salvia.
01:55:40
Speaker
If you consider those three entities, I think very likely actually the experience of the chimpanzee and the experience of the sober person are very much alike compared to the person who's on DMT, ketamine, salvia, who is in a completely different alien state space of consciousness and in some sense biologically unrelatable from the point of view of the qualia and the sense of self and time and space and all of those things.
01:56:07
Speaker
Personally, I think having intimations of alien state spaces of consciousness is helpful quite apart from changes like the feeling that you become one with the universe; merely having experience with really different states of consciousness makes it easier for you to identify with consciousness as a whole. You realize, okay, my DMT self, so to speak, cannot exist naturally, and it's so much different from who I am normally, perhaps even more different than being a chimpanzee.

Ethics of AI Identity

01:56:38
Speaker
Given that, you could imagine caring as well about alien state spaces of consciousness that are completely non-human. And I think that can be pretty helpful.
01:56:46
Speaker
The other reason why I give a lot of credence to open individualism being a winning strategy, even just from a purely political and sociological point of view, is that open individualists are not afraid of changing their own state of consciousness because they realize that it will be them either way.
01:57:08
Speaker
Whereas closed individualists can actually be pretty scared of, for example, taking DMT or something like that. They tend to have at least the suspicion that, oh my gosh, is the person who's going to be on DMT me? Am I going to be there? Or maybe I'm just being possessed by a different entity with completely different values and consciousness.
01:57:30
Speaker
For the open individualist, no matter what type of consciousness your brain generates, it's going to be you. That massively amplifies the degrees of freedom for coordination. Plus, you're not afraid of tuning your consciousness for particular new computational uses. Again, this could be extremely powerful as a cooperation and coordination tool.
01:57:51
Speaker
To summarize, I think a plausible and very nice future scenario is going to be a mixture of, first, open individualism; second, generically enhanced hedonic tone, so that everything is amazing; and third, an expanded range of possible experiences, such that we will have the tools to experience pretty much arbitrary state spaces of consciousness and consider them our own.
01:58:15
Speaker
In the descendants scenario, I think it's much easier to imagine thinking of the new entities as your offspring if you can at least know what they feel like. You know, if you can take a drug or something and know, okay, this is what it's like to be a post-human android. I like it. This is wonderful. It's better than being a human. That would make it possible.
01:58:35
Speaker
Wonderful. So this last question is just the role of identity in the AI itself or the super intelligence itself as it experiences the world, the ethical implications of those identity models, et cetera.
01:58:48
Speaker
There is the question of identity now, and, if we get aligned superintelligence and post-human superintelligence, and we have Jupiter brains or Dyson spheres or whatever, there's a question of identity evolving in that system. We are very much creating Life 3.0, and there is a substantive question of what kinds of identity views it will take, and what phenomenal experience of self, or lack thereof, it will have.
01:59:12
Speaker
This all is relevant and important because, if we're concerned with maximizing conscious wellbeing, then these are flavors of consciousness whose valence properties would require a sufficiently rigorous science of consciousness to understand.
01:59:25
Speaker
I mean, I think it's a really, really good thing to think about. The overall frame I tend to utilize to analyze these kinds of questions is from an article I wrote, which you can find on Qualia Computing, called Consciousness vs. Replicators. I think that is a pretty good overarching ethical framework, where basically I describe how different kinds of ethics
01:59:47
Speaker
can give different worldviews, but also they depend on your philosophical sophistication. At the very beginning you have ethics such as the battle between good and evil, but then you start introspecting and like, okay, what is evil exactly? And you realize that nobody sets out to do evil from the very beginning. Usually they actually have motivations that make sense within their own experience. Then you shift towards this other theory that's called the balance between good and evil, super common in Eastern religions,
02:00:17
Speaker
Also, people who take a lot of psychedelics or meditate a lot tend to arrive at that view, as in, oh, don't be too concerned about suffering in the universe, it's all a huge yin and yang, the evil part makes the good part better, or weird things like that. Then you have, a little bit more developed, what I call gradients of wisdom. I would say Sam Harris and definitely a lot of people in our community think that way, which is that they come to the realization that, you know, there are societies that don't help human flourishing.
02:00:47
Speaker
And there are, you know, ideologies that do, and it's really important to be discerning. We can't just say, hey, everything is equally good. But finally, I would say the fourth level would be consciousness versus replicators, which involves, first, taking open individualism seriously, and second, realizing that anything that matters, matters because it influences experiences.
02:01:10
Speaker
If you have that as your underlying ethical principle, and you recognize the danger of replicators hijacking our motivational architecture in order to pursue their own replication independent of the well-being of sentience, and you guard against that, then I think you're in a pretty good space to actually do a lot of good. I would say perhaps that is the sort of ethics or morality we should think about how to instantiate in artificial intelligence.
02:01:36
Speaker
In the extreme, you have what I call a pure replicator, and a pure replicator essentially is a system or an entity that uses all of its resources exclusively to make copies of itself independently of whether that causes good or bad experiences elsewhere. It just doesn't care. I would argue that humans are not pure replicators, that in fact, we do care about consciousness, at the very least our own consciousness,
02:02:05
Speaker
And evolution is recruiting the fact that we care about consciousness in order to, as a side effect, increase the inclusive fitness of our genes. But these discussions we're having right now, the possibility of a post-human ethic, this is the genie getting out of the bottle, in the sense that consciousness is kind of taking on its own values and trying to transcend the selfish genetic process that gave rise to it. Ooh, I like that. That's good. Anything to add, David?
02:02:34
Speaker
No, I simply hope we have a Buddhist AI. I agree. All right, so I've really enjoyed this conversation. I feel more confused now than when I came in, which is very good. So yeah, thank you both so much for coming on.
02:02:56
Speaker
If you enjoyed this podcast, please subscribe, give it a like or share it on your preferred social media platform. We'll be back again soon with another episode in the AI alignment series.