
Transhumanism and Stoicism (Episode 123)

Stoa Conversations: Stoicism Applied

“Although we don’t often recognise it, the 21st century is a transhuman era, where everything that currently makes us human, from our brains and bodies, to our values and ways of life, is poised to be transformed or superseded.”

In this episode, Caleb and Michael consider a contemporary philosophical issue through the lens of Stoicism.

Transhumanism is the idea that humans should use technology to radically enhance human capabilities, lifespan, and experience. So, what do Stoics have to say about it?

Caleb argues that they should be largely in favor of it – with some serious qualifications.

Classical Transhumanism, essay from Caleb

Future Superhuman by Elise Bohan

(02:12) Future Superhuman

(12:54) Objections to Transhumanism

(19:35) Human Nature And Stoicism

(32:22) Warning From The Stoics

(38:53) Moderation Is A Virtue

(40:01) Transcendence

***

Subscribe to The Stoa Letter for weekly meditations, actions, and links to the best Stoic resources: www.stoaletter.com/subscribe

Download the Stoa app (it’s a free download): stoameditation.com/pod

If you try the Stoa app and find it useful, but truly cannot afford it, email us and we'll set you up with a free account.

Listen to more episodes and learn more here: https://stoameditation.com/blog/stoa-conversations/

Thanks to Michael Levy for graciously letting us use his music in the conversations: https://ancientlyre.com/

Transcript

Foundations of Stoicism and Transhumanism

00:00:00
Speaker
What makes the Stoic picture appealing is that you are not trying to align yourself to some external possibility, whether that's an external God or some possibility of technological transformation.
00:00:21
Speaker
but are aiming to perfect what you have right now, you know, what's right in front of you, what's concrete, your ability to manage

Introductions and Key Concepts

00:00:31
Speaker
impressions. Welcome to Stoa Conversations. My name is Caleb Ontiveros. And I'm Michael Tremblay. And today we're going to be talking about transhumanism and Stoicism. So it's going to be quite the combo, quite the double feature. I think it's a good one because
00:00:51
Speaker
This issue of transhumanism, which we'll get into, these issues around technology, human nature, and so on are clearly philosophical. And it raises the question, you know, what can Stoicism add to some of our contemporary debates about the place of technology and, of course, human nature as well? So it's a huge topic, of course, but we're going to dive into it and see what comes out.
00:01:21
Speaker
And Caleb, this is also something that you're pretty interested in,

Literature and Transhuman Era

00:01:24
Speaker
right? So it's like kind of something that you know quite a bit about, not just something that we're both learning about, but something you're pretty into, right?
00:01:35
Speaker
Yeah, I mean, I've certainly read a number of authors who are interested in questions about human enhancement, transhumanism broadly. And that's something I've written about in a variety of different venues. So it's not something I'm coming to completely fresh by any means. What about yourself? Well, I know I've never thought about this before, ever.
00:02:02
Speaker
So I'm excited to learn, have a conversation with you about it, question some of these ideas, and both of us put it through the lens of Stoicism at the same time. All right, cool, cool. Well, I thought we should open with a passage from a book called Future Superhuman by Elise Bohan, which I think helps set the stage.
00:02:25
Speaker
Although we don't recognize it, the 21st century is a transhuman era where everything that currently makes us human from our brains and bodies to our values and ways of life is poised to be transformed or superseded. In our lifetime, we could merge with forms of artificial intelligence that are radically smarter than us, rewrite our biology to conquer aging, disease, and involuntary death.
00:02:51
Speaker
leave behind the crudest and cruelest vestiges of our evolutionary programming and embrace a new mode of being that is so much more than human that we would have to define it as post-human. In its best incarnations, you might call this kind of future superhuman. So I like to open with that
00:03:13
Speaker
passage just because it's an excellent framing for what I think is a radical challenge to many different philosophical positions, many different common sense beliefs about the world.

Defining Transhumanism

00:03:28
Speaker
Kind of gives you a sense of what transhumanism is, which I would define as extending human lifespan, intelligence, and experience through technological innovation so that humans can become superhuman or posthuman.
00:03:48
Speaker
It's really a project of almost technological transcendence that, as Elise Bohan says, aims to make a future superhuman. So, of course, there are a whole range of questions that opens up. Is this truly a transhuman era? Is transhumanism possible? Is it even possible to do these things that Bohan mentions, merging with
00:04:18
Speaker
artificial intelligence, conquering aging and disease? And if it's possible, you know, would that be good? If any of these things are possible, would that even be a good thing? So we'll consider what we can say from a Stoic perspective on these thoughts, but maybe it's a good idea to pause there and get some initial reactions. What do you think, Michael?
00:04:43
Speaker
Well, so my two questions for this are, okay, so just so I understand, right? Like post-human, it's this idea of a before and after, right? There were humans, and now this is

Transhumanism Versus Stoic Philosophy

00:04:56
Speaker
a type difference. This is not a matter of degree. So something like curing cancer is not transhumanism, even though there would be some sort of technological innovation here.
00:05:07
Speaker
something like prosthetic limbs, to a limited degree, would not be either. We're talking about some sort of major type difference, you know, something where somebody's living to 300 years old, or somebody who has superhuman strength with a kind of prosthetic, modified body, or something like this. I guess I wanted to clarify that. Is that the extent we're talking about?
00:05:31
Speaker
Yeah, so I think that all of these things are going to be on a continuum, but I think if you think about extending lifespan that's on the transhuman scale, it's not at the level of a few years, but many decades a century and so on. And I think perhaps one way to put it is the transhuman
00:05:55
Speaker
project is pushing the limit as far as one can to the extent that whatever remains may not be human just as you have past evolutionary processes where
00:06:12
Speaker
different animal species changed and evolved and so on. Even if there were no radical breaks at the moment, there's a sense that the transhuman project is aiming, over the span of years, decades, whatever it is,
00:06:31
Speaker
to develop a being that would be properly considered post-human because their lifespan, intelligence, and way of life were so different from the human animal that preceded them.
00:06:46
Speaker
Yeah, well, that was going to be my follow-up question. I've never thought of humans in terms of types before. This is post-human. This is human one, human two, human three. I've never thought of it like that before. So I was thinking, are there different kinds of humans already? I don't know. When we discovered antibiotics, was that a different jump? Were there any jumps in the past? But I guess the point here would be something like, no, because maybe if we can read something like the Stoics,
00:07:16
Speaker
And we can empathize with them and go, hey, they're just like us. They're experiencing the same things. They're wrestling with the same questions. There are so many parallels between our lives and Seneca, Marcus Aurelius, Epictetus. That speaks to the fact that transhumanism hasn't occurred before, I would say. Because if you had this kind of posthuman, kind of sci-fi picture, they wouldn't be able to read our philosophy, our existential philosophy, and empathize: I don't know what it's like to
00:07:46
Speaker
feel that kind of fear of death, or I don't know what it's like to have to wrestle with your limited intelligence. There would be this kind of empathy break. Does that sound accurate? I don't think so. I think, you could imagine, the vision is almost that those people would read our philosophy and understand it better than we would.
00:08:07
Speaker
That needn't be the case, but that's, I think, closer to what the movement is aiming for. There might be a break of some sort, but that's not needed by the philosophy.

Pathways to Transhumanism

00:08:25
Speaker
ideally with jumps in intelligence. There's a philosopher named Nick Bostrom. Instead of using the word augmenting experience, he just says augmenting emotion. These beings would have a wider range of possible experiences and a deeper one at that. So maybe they would be better at empathizing than we are. Maybe they would be better at stepping in.
00:08:53
Speaker
the sandals of an ancient Egyptian than we are, what have you. So, I mean, it could be the case that that's not true, but I think that at least gets closer to the picture. And I think what you're touching on is
00:09:10
Speaker
there's a lack of clarity, which is necessary, but there's a lack of clarity on, you know, well, what are these beings, exactly? And I guess the thought is you can add a little bit more detail by saying, well, perhaps through
00:09:26
Speaker
decades of, you know, genetic modification, introducing things like artificial wombs, you can have human beings that are selected for to such an extent that, you know, they make past geniuses, whether von Neumann, Einstein, Socrates, and so on, look rudimentary. And of course, they would also happen to be exceptionally healthy physically, and then
00:09:58
Speaker
be formidable beings able to act in the world just as humans can, in that sense. So there's that genetic, purely biological path.
00:10:08
Speaker
And then there's a path where humans become cyborgs, where in some sense you merge with either artificial intelligences or forms of biotechnology to the extent that, as is depicted in different science fiction films or books, you end up with a being that is almost half human, to the point where, yeah, okay, you could say that
00:10:38
Speaker
Uh, the amount of difference between them and us is significant enough to call them, call them a superhuman. Yeah. I mean, it sounds like, I mean, I'll let you, um, keep going.
00:10:53
Speaker
keep talking a bit about transhumanism because it sounds like there's these philosophical, once we have a picture, we'll be able to open up this philosophical can of worms about what it even means to be human, what it means to be more human, less human. So why don't we keep going on what transhumanism means, how it could be possible, maybe some objections to it.
00:11:17
Speaker
Yeah, so as you mentioned earlier, I think we've made some steps already in disrupting human
00:11:27
Speaker
processes or natural processes. So there are physical and cognitive enhancing drugs that change our natural levels of health. We have reproductive technology, existing ways to create new beings or prevent other beings from coming into existence. And of course we have all of this digital technology that people, some have argued, serve almost as an extended mind of some sorts.
00:11:56
Speaker
where someone with access to Google is in some sense more knowledgeable than someone without that access, if they're able to use it well.
00:12:07
Speaker
And then the thought is that transhumanism would realize these sorts of gains at a whole other scale. It would essentially continue that trajectory such that we end up as different. And as you say, that does bring up a lot of philosophical questions and many empirical ones, too. There's a question of: can human beings really become cyborgs? Is merging with artificial intelligence possible?

Challenges and Societal Impact

00:12:32
Speaker
What about these purely biological routes? Are there going to be steep
00:12:36
Speaker
diminishing marginal returns to selecting for specific traits, and so on? You have those descriptive-type questions. And then in addition to that, you have different value judgments. Are any of these possibilities, whether they're realistic or not, are they even a good idea? So I think people break this down as different objections to transhumanism.
00:13:02
Speaker
And I think it's useful to go through those and then think through which of these objections the Stoics would reject and which they would accept. So one objection is just this possibility worry, just this thought that it can't be done. There are going to be limits on these processes.
00:13:29
Speaker
A related idea is it's too difficult, too costly. That's a cost objection. You can imagine that the gains are too uncertain, too far on the horizons for people to care about transhumanism.
00:13:49
Speaker
So those are the sort of empirical objections to the project.
00:14:00
Speaker
If you think about this, of course, you always have this distinction between the way things are and the way things ought to be. That latter category includes questions of values, what's good, what's bad, and that's where people come in with other objections to transhumanism. There is the idea that it would have negative social effects. In some ways, it would be bad for society.
00:14:26
Speaker
Perhaps it would increase inequality, which someone might think is bad, or result

Benefits and Stoic Critique

00:14:33
Speaker
in other downstream consequences that would be negative.
00:14:39
Speaker
You also have this questioning, you know, what's the value of post-humanity as such? You know, would post-human lives be worse than human lives? There's that skepticism about, you know, is it really better to radically increase one's health, intelligence, and emotional range?
00:15:03
Speaker
And then there's one other objection, which is related but subtle, which is the thought that, whether or not, you know, post-human lives would be good, the value of post-humanity for humans is nil. We couldn't benefit from it. And I think the idea would be, we would be so different that
00:15:30
Speaker
there would be no sense in which it would be good for us to become transhuman. So one way to think about this and make it a little bit more concrete is one transhuman scenario that I haven't talked about too much is scanning the neural patterns in one's brain and uploading it into some computer simulation and so on.
00:15:57
Speaker
And one objection a lot of people have to this scenario is that you are not the kind of thing that can be run in a computer simulation. What you are is, in some sense, embodied in a biological being. So whether or not that simulation has a good life, it wouldn't be you. So it wouldn't be valuable for you to become uploaded in this way. So that's where this other objection comes into play.
00:16:25
Speaker
So then we've got these five objections. It's impossible. It's too costly. The social effects would be terrible. These posthuman lives wouldn't be that good. So this is the value of posthumanity as such. And then whether or not these lives would be good, they wouldn't be good for us. We couldn't benefit. So the value of posthumanity for humans is low. Those are those five objections. What's your initial take on this?
00:16:54
Speaker
Well, the thing about transhumanism that's really interesting is, Stoicism, or at least the way I interpret Stoicism, grounds the value of human lives in teleology, right? Like there's this almost functionalist account. And so there becomes this question of, like,
00:17:14
Speaker
This is a question about actually changing your telos, changing your nature, right? Like Stoicism says you're supposed to live in accordance with your nature. This is a question of actually changing your nature in a foundational way, not achieving that nature, not becoming more rational, not improving your characteristics to be a better human, but to change into something that's not human, that's post-human. And so that question of, you know,
00:17:40
Speaker
What benefit could post-humanity have for humans? I mean, I can think of some counter-examples. It might be beneficial for humans to live around kinds of people who are more intelligent and ethical and maybe need less resources. You can come up with all kinds of counter-examples, but there is this
00:18:02
Speaker
idea at the core about how we treat the technological possibility of changing our nature at a fundamental level. Do we think we would change our nature, or are we getting confused about what our nature really is? But it brings to a head those kinds of questions, which I think are, yeah, they're Stoic questions, but they're also core questions of ancient Greek philosophy more generally. Right, right. Yeah, I think that's exactly right, which is that this point about
00:18:31
Speaker
The amount of philosophical questions that transhumanism shines a light on is quite high, and they're important ones, ultimately around these questions of the function of human life, the telos, but also around parts of our lives.

Biological Modifications and Stoic Reasoning

00:18:53
Speaker
We're thinking about some of the earlier examples of the
00:19:00
Speaker
more incremental technological improvements around digital technology, more ability to extend one's life, more ability to take control of the means of reproduction. All of these are
00:19:19
Speaker
initially quite good, but then they also raise the question, what's the purpose of these tools to begin with? Or what's the original purpose that these tools are now used for the sake of? So I think I'm curious on your take about this, but so my sense is that
00:19:43
Speaker
many people object to transhumanism because it wouldn't be good for humanity. And there is this idea that we are in some sense biological beings and we're rooted, identified with our biology to the extent that
00:20:03
Speaker
forsaking the body, either by modifying it or transcending it altogether, would not be good for us. My sense is that the Stoics wouldn't agree with this objection. In principle, they would not think that this last objection, about the value of post-humanity for humans, is a good one.
00:20:33
Speaker
The reason for that, I think, is that one can survive significant modification of one's organism for the Stoics. That's fundamentally because they see human nature not as biological beings, but as embodied rational beings, and that can survive into post-humanity.
00:21:02
Speaker
The central aspect of human nature for the Stoics was this part, this reasoning part, this fragment of the divine that allowed one to manage impressions well. And if you think about these transhuman options,
00:21:21
Speaker
yes, they might be radically different from human beings as we are today, but we still think they have this ability to reason perhaps even better than we can. So in that sense, I think that Stoics would not
00:21:41
Speaker
object along this last objection, which is that transhumanism violates human nature or doesn't respect human nature to the extent that it should be respected. What do you think about that?
00:21:55
Speaker
Yeah, so that's really interesting, Caleb. I mean, we're layering here; we've got the two layers of nerdy philosophy and science fiction now on top of each other. So we're going deep here. But I mean, I agree with you, but I want to add a caveat. And I would say,
00:22:14
Speaker
It would depend on if technology, so this idea of you upload your consciousness to the cloud, somebody would say, that's not you, you're flesh and blood. And the Stoics might say, no, you're a reasoning thing, and there you are reasoning in the cloud, right? And I think it would depend on an understanding of
00:22:31
Speaker
artificial intelligence that I actually lack. My friend runs an AI company and I was asking him what artificial intelligence was and he was like, first question is really easy. Artificial is anything that's not biological. Second question is really, really complicated. What is intelligence? If you could have non-biological intelligence,
00:22:57
Speaker
if you could have non-biological reasoning that was the same kind of reasoning our minds can do, I think the Stoics would be absolutely on board with it. But I think you would have to prove that it's not. So the Stoics have this kind of ontology of being where there's like
00:23:14
Speaker
matter that just has a form, like rocks; then there's things that grow, like plants; there's things that have appetites and impulses, like animals; and then humans actually sit at the top of this ontology, where our material, our biological matter, in our brain, and for them in our souls, is able to reason, able to be self-reflective. And so if technology is capable of doing that
00:23:42
Speaker
in a way that, yeah, in a way that's real, and I guess I'm losing the language to say what 'real' is, but in a way that's real reasoning, then I would agree. But you'd have to... you wouldn't want it to be, well, this is
00:23:57
Speaker
This is the first type of matter. This is the same type of matter as rocks. This is electricity and metal that's imitating these reactions in a very complicated, compelling way. But it's not the same kind of heightened material. They could actually look at it ontologically. Does that make sense, what I'm saying?
00:24:16
Speaker
Yeah, absolutely. No, I think that's a great point. One way to put it is, so earlier we had these different options of becoming posthuman: a purely biological one, one that sort of mixes the artificial and the biological, becoming cyborgs. And then one you might think of as purely artificial, uploading yourself to the cloud. And I think that the Stoics,
00:24:41
Speaker
they at least think what matters, what persists, is retained in the purely biological picture. Depending on how becoming a cyborg is done, it could be retained. And then there's more of a question mark around that last one, as you say, which is,
00:25:00
Speaker
is the kind of mind embodied in non-biological matter the kind of thing that can be said to reason? And I think that's the sort of thorny question that one could spend
00:25:22
Speaker
many more episodes on, but I think at least at the initial, I would be confident in saying that the Stoics are happy with significant biological modification and perhaps also happy with some of these pictures of merging with technology.
00:25:40
Speaker
These other artificial options depend on metaphysical assumptions and then some empirical questions about how minds are in fact embodied.
00:25:57
Speaker
Yeah, but then there's this other part. I mean, I've just never thought about these questions before. It's really fun taking the, you know, philosophy from 2,000 years ago and comparing it to sci-fi. There's this other idea that humans are rational animals, right? We are both. So we have certain other things that come with our animal nature that it's then reason's responsibility to enact, right? Like,
00:26:22
Speaker
virtue is a type of reasoning, but the type of reasoning that's virtuous does correspond in some way to our biological nature, right? The fact that we, I don't know, we have families, we give birth, we prefer certain things; like, you know, health is preferable. It's not virtuous then to go around hurting people or harming people, right? Because you're, for no real good reason, taking away something that's preferable from them. So
00:26:49
Speaker
If we went full transhumanist on this, full science fiction, you might be able to say, well, there's still this picture of virtue looking like reason for this posthuman, but then the content of virtue being quite different. I guess I wouldn't say the content, but what it requires of a posthuman might end up looking different.
00:27:20
Speaker
That's vague, but it's also, you know, it's quite a complicated example. Yeah, I think that's right. So you have this idea, well, what are humans essentially for the Stoics? They're rational, but that reason is expressed in a social way. And that's why we insist that, you know, the Stoic picture of humans is that we're rational and social creatures.
00:27:46
Speaker
And I do think there has to be this other picture, the social needs to come into the picture for the transhuman.
00:27:58
Speaker
Then as you say, what that looks like might be different. There's that phrase, what's good for the hive is good for the bee, which Marcus Aurelius cites. It's a good reminder that we have obligations to the whole. Also relevant to this discussion, I think, would be a Stoic insistence that there is some kind of
00:28:27
Speaker
order, some kind of sociality, in whatever transhuman beings emerge, that they play a role in a greater whole. There's that point. And then there's also the point that, of course, for bees, what it is to be social is different than what it is for humans, and that likely will be the same
00:28:50
Speaker
for transhuman creatures. So what that actually looks like might be different depending on their capabilities and proclivities and so on. Yeah, and stop me if I'm taking this down in the wrong direction. But I just think it's super cool. Because it's almost the same kind of question of how it would look for a Stoic judging an alien, right? Well, it's a rational creature that has a different nature. Or do all rational creatures have the same nature?
00:29:19
Speaker
You know, the Stoics weren't really able to answer that question, because it was just kind of humans for them, in their view. But it's like, what would happen if you... yeah, would virtue for an alien be the same as for a human? And then the same thing, would virtue for a post-human be the same as for a human? I think we're both saying
00:29:34
Speaker
probably very similar in some ways, but a little bit different in others, which, again, isn't exactly treated in these exact terms in the treatises, but it's an interesting question. I guess the question is, is rationality the superseding, highest level, such that for everything that shares rationality, virtue is going to look 90% the same?
00:29:58
Speaker
Or how much of that other part of what we are gets brought into it such that a rational bear from a fairy tale would have a different ethics than a rational human, then a rational alien, and then a computer.
00:30:14
Speaker
And I don't know, I don't know the answer to that. I'm not sure of my answer to it, but I think things that share reason would be pretty similar. I think, as you said, what it means for the bee is slightly different than what it means for the human, but the things that share reason would have a fundamental role in common that is so broad it would encompass a lot of the same kind of character requirements, I think.
00:30:41
Speaker
Yeah, I think well we've talked about role ethics before and there's these different levels to role ethics you have that universal picture of our roles, you know to be rational and social and then you narrow down depending on our
00:30:59
Speaker
uh, talents, capabilities, depending on our social situations, what kind of relationships we have, and then our preferences. So I suppose you could say with these other beings at the top that we'll have the same fundamental, uh, roles, and as such, we should expect some of the central virtues to be expressed at least in the fundamental abstract way.
00:31:26
Speaker
in a similar manner. But once you get to those lower levels, of course, capabilities are going to be radically different. The social world may be exceptionally different, and of course one can only speculate about the preferences of these beings. So at those lower levels, those more specific questions, you might see a level of difference.
00:31:55
Speaker
Yeah, I think that's right. It's just a fun question. As humans, we're reflective, but we're like the only reflective things. We're kind of stuck in this conundrum by ourselves, and it would be so interesting. And that's why I think people are so interested in aliens, right? So interested to see what rationality would look like manifested in a different being, or what reason would look like manifested in something else. Right, right.
00:32:24
Speaker
Yeah, so I think that's one central issue that emerges, this question of, can we, in a real sense, persist as post-humans? I think the Stoics would say yes, with those caveats you mentioned earlier, and then, nonetheless, what that would, of course, amount to would be
00:32:47
Speaker
radically, radically different. So that's one, I think, important issue. Another important issue, or at least perhaps a warning, that emerges from Stoic philosophy, or something that's distinct in Stoic philosophy from many other modern points of view, is just that more isn't necessarily
00:33:11
Speaker
best overall.

Moderation and Caution in Augmentation

00:33:14
Speaker
What do I mean? So if you're thinking about one obvious argument for transhumanism, it's that what transhumanism does is extend our health, you know, it improves our health, makes us smarter, and gives us a wider range of good experiences. And since all of those things are good, the thought would be, why don't we just do that to the maximal extent possible?
00:33:41
Speaker
But of course, the Stoics think things like lifespan, intelligence, physical capabilities, pleasure, and so on are not ultimately good. Instead, they are indifferents, you know, things that are each of them preferable. But what ultimately matters is being virtuous and making the right choices amongst these preferable-type things,
00:34:12
Speaker
which means that more is not always necessarily the best, which I think is a challenge to one of the central intuitions behind transhumanism and puts a constraint on our pursuit of augmenting our capabilities. Not just in this sort of lofty
00:34:38
Speaker
pursuit of becoming transhuman, but also in the ordinary activities of increasing our health, increasing our pleasure, and so on. Yeah. I mean, how I feel about this is, like, I think it's a good point, but
00:34:53
Speaker
It raises this interesting question about, well, what do you feel about something like testosterone replacement therapy, or cosmetic surgery for somebody who's born with some sort of facial problem? I generally feel like the Stoics would say, look,
00:35:16
Speaker
change it if you can and if you want to, but you don't need to to have a good life, and people that don't can still have good lives. So I kind of see it as, again, it's preferable, right? It's: do it if you can, don't compromise your virtue for it. Don't, you know,
00:35:31
Speaker
steal money and harm somebody to get the money to do this. But if it's available to you, take it. So to me, extending that to transhumanism, maybe not the brain in the cloud, that's a bit abstract, but maybe these cyborg modifications or life extension, where you pay money and you can extend your life by 50 years, or you can freeze yourself and get reanimated, or something like this.
00:35:57
Speaker
I think I would generally think the Stoic argument would be like, yeah, you know, go ahead if it's for the right reasons and you haven't compromised your virtue in pursuit of it. But would you take it a step further? Do you think the Stoics would be against these kinds of things, or is it just kind of like a, you know, let's not divert our resources to this when there are more important things we could be doing, something like that?
00:36:21
Speaker
Yeah, I would say it's more of a warning. So we might be in agreement. But if I think about the cosmetic surgery case, in some ways the Stoic view is almost
00:36:37
Speaker
unsatisfying. It's almost, it always depends, or something like this. Cause I don't see a strong prohibition against cosmic... or sorry, cosmetic surgery. Cosmic surgery, that's a different kind of surgery. So, to answer: for me, I don't see a strong prohibition against cosmetic surgery coming from the Stoics, but they, you know, they would question,
00:37:06
Speaker
Who are you? Why are you doing this operation? And I think people could have good answers to that, and they could have bad answers to it. And I think there are also these issues of, well, what are the best social norms? What are the roles, where you happen to live and when you happen to live, that best serve society? Are you playing into the right ones for your time? So
00:37:35
Speaker
I would say that, you know, I think this provides some constraints on becoming transhuman. It's sort of like, well, initially we should think it's a good thing to promote our lifespan, our intelligence, and so on, but we always need to have that reflective stance toward it, both as individuals and as a society at large, and
00:38:00
Speaker
be willing to entertain arguments against doing these sorts of things. So perhaps it's more of a higher-level heuristic to pause, not always jump straight ahead into augmenting ourselves. But I don't think that the Stoics would have a prohibition, just because these things are indifferents.
00:38:26
Speaker
Well, as you put it, right, more isn't necessarily better. That doesn't mean more is bad. It just means exactly what you said, more isn't necessarily better. And so that's a warning. That's a reason to interrogate your reasons for being attracted to transhumanism or attracted to these kinds of pursuits of changing yourself, right? And ask yourself, as you said, who you are and why you're doing it.
00:38:53
Speaker
Yeah, yeah. The Stoics are of course fans of moderation, and I think to some extent they have a radical view here, which is that
00:39:04
Speaker
you know, if we become transhuman, what's important about human nature can persist. To that extent, Stoicism is radical, but it's still going to have this focus on nothing in excess, taking all things into account when making decisions as a society and as
00:39:25
Speaker
an individual, to the extent that one can. And that, I think, means probably more of a focus on some of these incremental changes and an acknowledgement of uncertainty. We opened up with that quote about how we live in a transhuman era. I think there are still some question marks
00:39:44
Speaker
around that, but on the margin perhaps more people should move in that direction given the possibility of radical change in artificial intelligence and biotechnology in particular.
00:40:01
Speaker
And I guess there's also this idea of kind of a lack of satisfaction too, right? Like a lack of an ability to be happy with what is, happy with the way things are, and then a kind of vicious pursuit to get out of that as well, that transhumanism can kind of appeal to. One thing that I was thinking, Caleb, interested in what you think about this: sometimes I think about this idea, I don't know if this is Jungian or whatever, but there's this idea that there are
00:40:31
Speaker
patterns in thought, and that these patterns in thought manifest in different ways. And so one pattern in thought is this idea that I want to transcend. And so maybe that was manifested through this idea of heaven, but then it was also manifested in transhumanism. But then in Stoicism, this idea of transcendence finds itself in the idea of the sage, right? Like Stoicism sells a picture of something that's kind of post-human.
00:40:57
Speaker
And it idealizes that picture of the post human, the human that like, you know, doesn't seem possible. If they are possible, they come around once every 500 years. So it's a kind of a superficial possibility. So what do you think that says about this idea of like more isn't always better or moderation?
00:41:15
Speaker
When I feel like the Stoics, I guess, didn't practice

Transformation and Sage Aspiration

00:41:21
Speaker
that. Well, they were moderate, but in terms of just the good thing, you know, the sage pursues that intensely. That is, that is being transformed to post-human in that way. So the idea is to just, like, be moderate about these indifferents, but there's still that appeal in Stoicism of kind of changing your nature fundamentally. Yeah. It's interesting because I think Stoicism has
00:41:45
Speaker
an ambition, it does have an ambition towards truly transforming yourself. And there's almost a tension between that possibility being imminent, being available to you right now, with also this recognition that
00:42:13
Speaker
the sage is a distant possibility. I would almost contrast it with pictures of heaven or
00:42:27
Speaker
standard monotheistic pictures, where the picture in many monotheistic religions is to become the kind of person who is worthy of God. Whereas the picture of many Greek philosophies, Roman philosophies, was to become a god in a real sense, that you have this fragment of
00:42:56
Speaker
the divine within you, perfect it, and then you will become like a god yourself. Which, I think, both includes that picture of transcendence, radical transformation, but also has a more tangible, imminent possibility, or at least promise, that is offered than in some of the traditional monotheistic religions.
00:43:28
Speaker
Oh, no, and then that just strikes me as being transhumanist in goal. Just not lifespan, not physical strength or health, but transhumanist morally. Like you become a kind of post-human, morally, the kind of person who can't make mistakes.
00:43:50
Speaker
Yeah, yeah. I would almost want to say, instead of post-human, another way the Stoics might put it is, you're becoming fully human, right? Or living out your full human nature. But I think, yeah, I understand. And I think in a way that
00:44:09
Speaker
also leaves us with not exactly an objection to transhumanism, but this thought that what makes the Stoic picture appealing is that you are not trying to align yourself to some external possibility, whether that's an external God or some possibility of technological transformation,
00:44:38
Speaker
but are aiming to perfect what you have right now, what's right in front of you, what's concrete, your ability to manage impressions. So I think they have that
00:44:57
Speaker
picture of radical self-improvement, transformation. With the transhumanists, I think they'd be happy to push the arc of human development forward, always with the eye that, of course, living an excellent life doesn't depend on these transhuman capabilities, except, as you say, in the moral or ethical domain.
00:45:22
Speaker
Yeah, I really like what you said there, where you said, you know, you're not trying to align yourself with something external. So in Stoicism, you're not trying to align, you know, like, oh, this is what God wants, and I'm going to try to be the kind of thing God wants. Or, you know, technology is really cool, I'm going to become more like a cyborg.
00:45:47
Speaker
there's just this thing of just being yourself as much as possible. And so even in Stoicism, there's the idea of living in accordance with nature, but the higher level truth is obviously just that you're already part of nature.

Stoic Nature versus Technological Transcendence

00:45:58
Speaker
And so your goal to live in accordance with nature is not to do anything but, like, walk alongside the cart of fate, to use that metaphor. I'm not saying anything new here. I'm just kind of ruminating on that idea.
00:46:13
Speaker
And in that way, it's like, that's cool. There's like a lot less, I guess, kind of friction there, right? It's just like, just be yourself more. And obviously, that's bringing in a certain picture of what you are. And if you don't agree with the Stoics there, that's the friction. I'm not this rational being. Or I'm not only a rational being, or not primarily. My rationality doesn't look the way the Stoics think it looks. There can be some friction there, but there's this permission to just
00:46:40
Speaker
Yeah, as you said, not become post-human in the stoic view, but become as human as possible, become fully human. I'm feeling pumped up. I'm feeling inspired. That's a huge part of the appeal you're dead on. Yeah, yeah, absolutely.
00:47:00
Speaker
We'll be talking about the Stoic God next, that's on our schedule. And what you just said made me think of, yeah, of course, you're not aligning yourself to anything external, but in a sense, you already have the external in you. Which, I mean, on the surface might seem like a mystical thought, but it's just this picture that, you know, you do have that fragment of the divine, reason. You are a rational creature, but that ability to reason is somehow
00:47:31
Speaker
reflective of, or maps onto, or is embodied in, the larger whole, the external. And perhaps we'll get into that more in our next conversation. Yeah, I'm looking forward to it. Awesome. Well, that's transhumanism.
00:47:49
Speaker
I had a number of other notes, but I think that gets us in a good spot. And the most important thing I wanted to put on the table is just this possibility of transhumanism; I think it's worth thinking about radical technological change.
00:48:05
Speaker
At least, Bohan's book is a good one on this. And then thinking through, you know, what are some of these philosophical questions, objections, that arise with the possibility of transhumanism, and how would the Stoics think about it? So I think we've got some of the most important thoughts, argumentative moves on the table. Awesome. Thanks, Michael. So the picture here is something like
00:48:30
Speaker
The Stoics don't object to transhumanism in principle. There could be a version, some version of post-humanism, that looks like: look, if you become a cyborg, if you become a supercomputer AI, that might not necessarily actually interfere with your nature fully. There's at least a possibility of cyborgs that still
00:48:58
Speaker
can be Stoics, right? Because they still have this piece of the divine in them that's the most important part of them.
00:49:05
Speaker
So they don't have an in-principle objection to transhumanism, but there are warnings, warnings about both why we might want it and why it might appeal to us. And then, I think most importantly, thinking about this idea that, you know, more isn't necessarily better, and that there's some acceptance of the way things are; you don't want transhumanism to become this way of trying to escape reality or escape nature.
00:49:32
Speaker
And then what we ended up with was that really interesting distinction between how, I don't think transhumanism necessarily does this, but if transhumanism is about this attempt to align yourself with something outside of yourself, then that's not Stoic, because Stoicism is about becoming as human as possible. And I don't know, that I think was a really good way of putting Stoicism at its core. Nice, yeah, well summarized.
00:50:04
Speaker
Cool. Awesome. Thanks. Cool. Yeah. Thanks, Michael. Thanks for listening to Stoa Conversations. Please give us a rating on Apple Podcasts or Spotify and share it with a friend. If you want to dive deeper still, search Stoa in the App Store or Play Store for a complete app with routines, meditations, and lessons designed to help people become more
00:50:27
Speaker
Stoic. And I'd also like to thank Michael Levy for graciously letting us use his music. You can find more of his work at ancientlyre.com. And finally, please get in touch with us. Send a message to stoa at stoameditation.com if you ever have any feedback, questions, or recommendations. Until next time.