#20 Bernardo Bolaños and Jorge Luis Morton: On Stoicism and Technology

AITEC Podcast

In this episode, we speak with Bernardo Bolaños and Jorge Luis Morton, authors of On Singularity and the Stoics, about the rise of generative AI, the looming prospect of superintelligence, and how Stoic philosophy offers a framework for navigating it all. We explore Stoic principles like the dichotomy of control, cosmopolitanism, and living with wisdom as we face deepfakes, algorithmic manipulation, and the risk of superintelligent AI.

For more info, visit ethicscircle.org.

Transcript

Introduction and Guest Background

00:00:17
Speaker
Hi everyone and welcome to the AITEC podcast. Today we're in conversation with Bernardo Bolaños and Jorge Luis Morton Gutierrez, who, by the way, goes by Luis.
00:00:29
Speaker
Bernardo is a professor at Universidad Autónoma Metropolitana, UAM, in Mexico City, where he teaches history of philosophy and philosophy of mind.
00:00:40
Speaker
Luis has a PhD in sociology from UAM. He is currently applying for postdoctoral positions to continue his research on the intersection of artificial intelligence and ethics.

Stoicism and AI: Conceptual Foundations

00:00:53
Speaker
Together, they are the authors of On Singularity and the Stoics: Why Stoicism Offers a Valuable Approach to Navigating the Risks of AI, published in the journal AI and Ethics.
00:01:08
Speaker
Thanks for tuning in. We hope you enjoy the show.
00:01:18
Speaker
Great. So maybe I could just ask, you know,
00:01:28
Speaker
Generally speaking, how can Stoic philosophy help us address the different problems and issues that come up with respect to AI?
00:01:41
Speaker
So how is Stoic philosophy helpful? Well, Stoicism lasted for more than five centuries.
00:01:52
Speaker
It was a very powerful school in the ancient world. It shaped the reflections of Seneca, who was a statesman.
00:02:04
Speaker
Epictetus was a former slave. Marcus Aurelius was the emperor of the Roman Empire. And at its core, Stoicism is about how to live well in a world that is often beyond our control.
00:02:21
Speaker
So the Stoics start with a simple idea, a very simple idea. The only thing truly up to us is how we think and act: our judgments, our choices, our character.
00:02:37
Speaker
For them, the good life was what is called in Greek eudaimonia. It's not exactly the same as happiness.
00:02:48
Speaker
But eudaimonia comes from virtue: cultivating wisdom, courage, justice, moderation. And it's not just about the self. They saw themselves as citizens of a larger world.
00:03:00
Speaker
So it's a very interesting philosophy, and very fashionable today. And we are facing a big change with artificial intelligence.

Stoicism vs Other Philosophies in AI Context

00:03:12
Speaker
So I think that if we apply real Stoicism, not the versions that are sold in many places, we have ideas to face, for example, narrow artificial intelligence, but also the problem of a possible digital mind, the singularity, the changes ahead, maybe even existential threats, because the Stoics also lived in a world that was changing and dangerous, and Epictetus was a slave.
00:04:01
Speaker
Marcus Aurelius had very important responsibilities, and they developed a way of facing all those problems. So we can discuss specifics, but it's a very interesting philosophy compared with others from the ancient world.
00:04:22
Speaker
So let me just ask if I captured your answer correctly. You're basically saying that the centuries in which Stoicism was flourishing were unstable, uncertain times. People had to go through fairly difficult ordeals during this period.
00:04:41
Speaker
And they saw Stoicism, with its therapeutic aspects, as a way to help them deal with those things and get through those situations. And similarly, we're facing some pretty unprecedented technologies, and that's why Stoicism will be helpful here as well. Did I catch that?
00:05:00
Speaker
Yes, but for example, Epictetus lived during the Pax Romana. It was already the empire, and he didn't suffer as much violence later in life as at the beginning. So it depends, because during five centuries many things happened.
00:05:25
Speaker
But yes, some of them faced challenges that we're also facing: uncertainties.
00:05:36
Speaker
They thought about the cosmos; today we would say the universe. And we are also discussing what consciousness is, whether it can be artificial, whether we are something special in the world. Well, at least as a way of living, Stoicism is very useful.
00:06:04
Speaker
Other schools were important, but take Epicurus, for example. I love Epicurus. He's the father of materialism. But when facing artificial intelligence and the possibility of a digital mind,
00:06:16
Speaker
we need more than the Epicurean thought that we are just atoms colliding. Or, for example, to say that ChatGPT is merely a stochastic parrot, that would be very, very Epicurean.
00:06:31
Speaker
Epicureanism gives us calm, tranquility, but only by stepping back from public life. And I think that artificial intelligence risks are not something you can retreat from.
00:06:44
Speaker
And it's the same with cynicism. That's basically Silicon Valley's motto: get rich, even if the world burns. And, well, I don't accept that. Philosophical Cynicism is sharp at exposing illusions, but it is too radical and solitary to guide collective governance.
00:07:04
Speaker
And finally, for example, skepticism. It teaches caution, which we need. But if we keep suspending judgment about AGI and the idea of the singularity or superintelligence, we end up paralyzed right when we need to act.

Stoicism in AI Governance and Ethics

00:07:25
Speaker
Stoicism is better by contrast. Can you tell us about the control principle in Stoicism? That seems to be a really important idea in Stoicism: that we only focus on what is within our control, our judgments, choices, desires, and actions.
00:07:41
Speaker
And then we're supposed to accept with equanimity and tranquility what is outside our control. Could you elaborate on that a little and connect it to the current AI situation? Of course.
00:07:55
Speaker
That's an idea very present in the former slave Epictetus, and also in the other great Stoics: the dichotomy of control, that what's up to us is how we think and act.
00:08:10
Speaker
What's not up to us, we learn to accept. David Chalmers, probably the leading philosopher of mind alive, thinks that we could have conscious machines within a decade.
00:08:25
Speaker
And that's way beyond my control as a professor of philosophy, and beyond your control too. What is in our control is how we prepare: the values we hold, the governance we build.
00:08:39
Speaker
That's exactly the Stoic point. Don't waste energy on what you can't dictate, and act wisely on what you can. I hope that you understand me, because it's difficult for me to pronounce can't and can.
00:08:57
Speaker
But if you understood, that's perfect. That's exactly the Stoic point. Don't waste energy on what you can't dictate.
00:09:09
Speaker
Act wisely on what you can. Could I follow up with that? Just real quick, to be devil's advocate: if you say don't concentrate on what you can't dictate, well, my health seems to be something I can't dictate.
00:09:30
Speaker
I can influence my health. I can choose not to smoke cigarettes and thereby make it more probable that I'll be healthy, but I can't dictate my health. So should I not concentrate on my health, according to Stoicism?
00:09:46
Speaker
Yeah, that's a great objection. It's the same one raised by Carneades, a Greek philosopher. Carneades mocked the Stoics with the so-called lazy argument, argos logos in Greek.
00:10:03
Speaker
If fate decides everything, why do anything? The same mistake appears today in artificial intelligence debates, where people say the singularity is either inevitable or impossible.
00:10:17
Speaker
And no, that's false logic. Our actions matter. Just as calling the doctor, as you just said, can make the difference in recovering, regulation, research, and ethical choices shape whether artificial intelligence leads us to a good or a disastrous future. So the dichotomy of control is not resignation.
00:10:43
Speaker
And even though the lazy argument was an objection to them, well, they considered premeditatio malorum, the idea of the premeditation of evils.
00:11:01
Speaker
Seneca, especially in the Letters to Lucilius, a very famous book, explicitly recommends imagining future misfortunes in advance as a way to blunt their sting.
00:11:15
Speaker
So he would say: imagine poverty, exile, sickness, even death. By rehearsing them mentally, you rob them of surprise and keep your equanimity when they arrive.
00:11:31
Speaker
So the dichotomy of control is not passivity, is not resignation. Well, the objection that you just made is very important to calibrate Stoicism and also our attitude toward artificial intelligence and the risks of AI.
00:11:56
Speaker
So I guess I have another follow-up along with that. It seems to me, at least my understanding of Stoicism is that they want to focus on
00:12:12
Speaker
making sure that they don't have any emotional disturbances, right? They want to make sure they don't lose their composure. And so when I think of Stoicism, or the dichotomy of control in particular, I think of someone like this: for example, Sam talked about not smoking to avoid getting cancer. It might be the case that this fear of cancer will always linger in the back of your mind and influence your decisions in a weird way.
00:12:44
Speaker
But the good Stoic sage at least knows that the only part that's under their control is the not-smoking part. Everything else, you know, there might be environmental toxins that eventually do cause them to get cancer, but they can rest assured that they didn't contribute to that. And in that way, they can maintain their tranquility.
00:13:09
Speaker
And I'm just wondering, is that what you're saying with regard to all these threats from technology? That all I can do is make sure that, with the AI we have right now, I don't hurt myself?
00:13:24
Speaker
And with the AI that's coming, all I can really do is push for governance and other things to make sure that it doesn't get out of control. Is that a fair characterization of what you two are arguing?

Addressing Bias and Emotions in AI with Stoicism

00:13:40
Speaker
Well, there are many points. First of all, the Stoics were not against emotions. As Martha Nussbaum said, nobody in the ancient world talks so much about emotions. It's true that they wanted to control emotions, but in order to have the good ones, to face reality.
00:14:12
Speaker
It's true that they were a rationalistic school of thought, but that's not the same as saying they wanted to suppress emotions. That's false.
00:14:26
Speaker
Because even tranquility is an emotional state. And they were trying to develop that, to be courageous, to face dangers.
00:14:41
Speaker
If you have to defend your child, you have to find the good emotions to fight. So they were not like the Epicureans, outside of public life, saying we won't participate.
00:15:00
Speaker
So in the case of AI, I think they would recommend acting and having emotions toward the dangers, but not entering into panic, not fearing in a way that loses control and becomes hysterical.
00:15:25
Speaker
So I don't know if I am answering, but it's useful to read them, because we are really facing uncertainty.
00:15:38
Speaker
And the good attitude is to be calm, but to act rationally. Because artificial intelligence is a rational creation, it is something that comes from human technology, and we can
00:15:59
Speaker
calibrate it, we can align it to human values, and we have to try to do it. Yeah, I was thinking, what do you think about the idea that certain emotions could be helpful in the context of responding to AI? So fear might sharpen our awareness of certain dangers like deepfakes or autonomous weapons.
00:16:22
Speaker
And that fear could spur policymakers and the public to act before harms get out of hand. Or maybe being angry at certain injustices, whether it's biased algorithms discriminating against certain groups, that anger might mobilize collective pressure for reform, regulation, and such things. So you can think of certain cases where strong emotions that seem not to be tranquil
00:16:57
Speaker
can seem to help us in responding to AI. Of course. I think that Jorge can participate here, because he has developed a lot of ideas about narrow AI and very concrete problems.
00:17:13
Speaker
Yeah, right now we can imagine artificial general intelligence like the forest that Marcus Aurelius faced. It's full of uncertainties, full of dangers, full of things he cannot control or predict, so he has to prepare for the worst-case scenario.
00:17:33
Speaker
It's not that we are disregarding emotion, disregarding fear, disregarding anger and all those things that drive us to act, but we act within the framework of wisdom and within the framework of what is justice and injustice. For example, when we talk about narrow AI and bias, something these systems will probably inherit from us,
00:17:59
Speaker
considering that the system at the end of the day is learning from us, it is going to, in a certain way, imitate or inherit our biases and things like that. So when we talk about bias, we have to think about biases that produce injustice,
00:18:21
Speaker
that produce discrimination, that produce things that harm the users, and at the end of the day it's a danger even for the developers and the designers, because they are producing a social harm.
00:18:35
Speaker
So it's not about acting out of fear, but acting with wisdom, and using that wisdom to create our social policies, public policies, social strategies, and even design choices
00:18:50
Speaker
to prevent or to mitigate those kinds of problems like bias, and also other human rights problems, part of the big three problems: privacy, access to information, and explainability.
00:19:04
Speaker
Yeah, I think we should start moving into some of those specific issues now. But I really like the way you framed that. You basically said that it's not quite fear, right?
00:19:15
Speaker
It's sort of a rational caution, right? It's not quite the discombobulated panic. It's knowing that there's something that has to be dealt with and dealing with it properly,
00:19:31
Speaker
not in an emotionless way, but in a way rooted in wisdom. So given this, maybe let's start talking about specific risks with current AI, the stuff that we currently have around, and how a Stoic would respond to those particular risks.
00:19:53
Speaker
So we have a nice list here, but you did mention bias already. Listeners know that there are all sorts of ways that bias can enter into artificial intelligence models.
00:20:08
Speaker
They are, of course, trained on human data in many cases, and we are biased, so of course that bias is passed on to our AI models. Maybe I'll let you guys choose which example of bias you have a good explanation for, but tell us a little about how a Stoic would deal with bias inside a particular AI model.
00:20:37
Speaker
One of the big problems that many researchers working on AI will tell you about with these models and systems is bias. Pretty much every system has bias because of the data we use, and we are humans.
00:20:53
Speaker
We're pretty dumb, and no matter how pure we are, we always have bias, even if we try to control it, even when we know it produces injustice and things like that. So bias mostly enters through the data, through the training, through the choices of the scientists and developers.
00:21:10
Speaker
So a Stoic philosopher, or a Stoic ethicist developing AI or developing the system, will act by promoting wisdom and by connecting the problems of injustice with wisdom. You have these things that are producing harm, harm to society and harm to the individual.
00:21:39
Speaker
And you have to prevent these problems by acting with wisdom. Okay, I know this problem. How can I design a system that prevents this injustice?
00:21:50
Speaker
How can I develop a tool to prevent all these problems that the model is going to produce, or that the AI system, once the model is introduced, is going to replicate?
00:22:03
Speaker
So you have to act with wisdom, and it's also very much a problem of public policy. You have these fears, you have these problems, and you have to act with wisdom to address them.
00:22:18
Speaker
So let me give you a very concrete example, and you can tell me the Stoic response. Let's just say that I have a company. I don't, but let's pretend I have a company.
00:22:28
Speaker
And you know, it's expensive going through all these resumes. They're very hefty, they're always coming in, and I need to hire people.
00:22:40
Speaker
So why don't I just train an AI model to help me go through all these resumes and CVs? And, lo and behold, maybe it'll be biased toward Anglo-Saxon men and biased against people from Latin America, anyone of African descent, and women.
00:23:03
Speaker
And I have a Stoic advisor. What would my Stoic advisor tell me in this particular case? Well, go ahead. No, no, you can finish your argument.
00:23:16
Speaker
Okay, so from my experience, and using Stoic philosophy, the Stoic advisor would suggest: okay, you have to act to prevent injustice.
00:23:30
Speaker
Okay, you can gain a lot of money, you can save a lot of time, but you're producing karma. And the karma you produce will not only affect other people, like the users and the people you're trying to hire, but at the end of the day it will also harm you in return, because maybe some organizers will act or protest against you or against your company.
00:23:56
Speaker
Maybe this issue will be addressed in a certain way, but first of all you have to think about whether you are creating an injustice. And once you think about injustice, you have to deploy that model in a way that doesn't produce harm, or that reduces the harm as much as possible.
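For readers who want to see what this kind of check can look like in practice, here is a minimal sketch of one common bias audit, comparing selection rates across demographic groups for a hypothetical resume-screening model. The data, group labels, and threshold below are invented for illustration only; real audits use real outcomes and richer fairness metrics than this single one.

```python
# Minimal illustration (hypothetical data): compare selection rates of a
# resume-screening model across demographic groups and flag large gaps.
from collections import defaultdict

def selection_rates(predictions, groups):
    """predictions: 0/1 model decisions (1 = invite to interview);
    groups: demographic group label for each candidate."""
    invited = defaultdict(int)
    total = defaultdict(int)
    for pred, group in zip(predictions, groups):
        total[group] += 1
        invited[group] += pred
    return {g: invited[g] / total[g] for g in total}

def demographic_parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

if __name__ == "__main__":
    # Toy data: first five candidates belong to group A, last five to group B.
    preds  = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    rates = selection_rates(preds, groups)
    gap = demographic_parity_gap(rates)
    print(rates)                      # {'A': 0.8, 'B': 0.2}
    print(f"parity gap = {gap:.2f}")  # 0.60
    if gap > 0.2:  # threshold chosen only for illustration
        print("Warning: selection rates differ sharply across groups; audit before deployment.")
```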

Stoic Virtues vs Contemporary Values in AI

00:24:20
Speaker
Sam, what's your favorite issue with current AI that you want to get some Stoic advice on? Well, I was going to play devil's advocate again. So Stoicism is clearly emphasizing a lot of traditional virtues, like wisdom, justice, courage, moderation.
00:24:42
Speaker
But a lot of people today feel that virtue talk is very outdated. They feel like it's very moralizing.
00:24:54
Speaker
And I think they prefer to talk about autonomy, freedom, happiness, maybe success. So for example, with respect to wisdom, I feel like a lot of people think, oh, that's kind of vague, what do you mean by wisdom? Or they'll say wisdom is just associated with tradition, and we hate tradition. Or that wisdom talk is really moralizing, that you sound elitist or preachy if you talk about wisdom.
00:25:28
Speaker
Anyway, I don't know. So when people give you those kinds of responses, how do you respond? Well, I think we can say that wisdom is, first of all, common sense.
00:25:40
Speaker
And many people saying what you just said are not showing common sense. For example, we were talking about bias: AI is silicon and programming, and we are carbon animals, made with carbon, nitrogen, oxygen. We are very different.
00:26:07
Speaker
And common sense would tell us: you have to take care with what you are doing. And many geniuses are saying that.
00:26:20
Speaker
And I think that Hinton is someone who has wisdom, and he's saying something really simple.
00:26:31
Speaker
He has the Nobel Prize in Physics and the Turing Award, and he warns us that something as simple and profound as promoting moral care in our conduct and in the conduct of our machines is something that we have to do.
00:26:50
Speaker
But the discourse of autonomy is, on the other side, producing an aggressive, competitive, ambitious AI that is a reflection of ourselves, because the ethics of autonomy is also the ethics of individualism, of competition.
00:27:19
Speaker
And if you have a machine as powerful as today's artificial intelligence with those values, because they are also values... You were saying virtue is maybe old-fashioned, but to be spontaneous, to be individualistic, to pretend to be free is also a kind of virtue.
00:27:50
Speaker
The difference is that the ethics of virtue is trying to balance, to weigh different approaches instead of saying individual freedom is the only virtue.
00:28:03
Speaker
And the problem is that we are programming this technology with only that kind of mentality, the Silicon Valley ethics of competition, ambition, individualism.
00:28:21
Speaker
And it's very dangerous. And the common sense, the wisdom of someone like Geoffrey Hinton, is to say: wait, it's better to put a little bit of our morals into our creation, artificial intelligence, instead of creating something that is like a tiger.
00:28:45
Speaker
Now it's a small tiger, but it could become an adult tiger, and we know what tigers do even to their owners.
00:28:57
Speaker
So I don't know if I'm explaining myself clearly, but for me, wisdom is something that only people without wisdom don't understand.
00:29:10
Speaker
Yeah, and also, talking about wisdom of design, it's pretty much about how you can make the best choices. It's not so much about predicting everything as being prepared for the consequences. For example, coming back to Marcus Aurelius: imagine he acted without wisdom.
00:29:26
Speaker
He would charge into the German forest without any preparation, and the consequences would be devastating at the end of the day. He cannot act on impulse, on individuality, saying, okay, maybe I will have a couple of ales before the campaign in the forest; the consequences of that would be tremendous. And also, acting with wisdom is pretty much an individual policy too, like Professor Bernardo said; it's acting with temperance,
00:30:03
Speaker
by measuring the risk and the reward and our impulses. It's also an individualistic choice, something from which we can have a monetary gain and not only a moral gain. For example, you design a system that prevents bias and promotes privacy: you will have an audience and a certain customer base that will choose your system. That's pretty much the marketing campaign of Apple:
00:30:35
Speaker
okay, we are the privacy company, the virtuous company; if you want to process your data, come with us. And also, we have AI systems and models that are locally trained and locally distributed,
00:30:47
Speaker
deployed on the smartphone, not in the cloud, so your data and your privacy are secure. So you can see it as a marketing choice if you want, or, playing the devil's advocate case, as an ethical choice. You can have your cake and eat it too if you act with wisdom and temperance.
00:31:12
Speaker
That just reminded me of something that I think Bernardo said earlier: the Stoics stressed cosmopolitanism, right? They saw themselves as citizens of the universe. And it seems like what both of you are saying in your response to Sam is that part of thinking with wisdom is thinking as a collective, about what will move us forward as a collective. And I don't remember which Stoic said this, or maybe it wasn't even a Stoic but a fan of the Stoics, maybe Cicero, who said something like:
00:31:50
Speaker
if you don't think this way, you're a micropolitan; you're the citizen of a small city. So does that resonate with what you're responding?
00:32:02
Speaker
Yes, of course. When they developed the idea of cosmopolitanism, it was at the beginning of Stoicism in Greece.
00:32:13
Speaker
They were talking about the cosmos, and when someone asked Diogenes, where are you from, he said, well, I'm from the cosmos. I'm from the universe, we would say today.
00:32:28
Speaker
And they also had this idea: if you are as rational as they were, you understand that you belong to a community, you belong to nature, you belong to the cosmos.
00:32:47
Speaker
So even if you are egoistic, individualistic, you are not only an animal; you are, as Aristotle said, a social animal.
00:32:59
Speaker
And the Stoics understood that tradition and developed an altruism, because they understood that they were not just one isolated mind.
00:33:18
Speaker
So I think that is also one of the reasons why we have to not only read but practice Stoicism, the real one, because it is different from
00:33:35
Speaker
individualism. Liberalism says you have to be autonomous. That's a fallacy when you belong to a culture, to a country, to a species.
00:33:49
Speaker
I'm not against autonomy, but it's not the only virtue, the only value, the only purpose of life.
00:34:00
Speaker
And it is clear that thinking it is the only value is producing existential risks to humanity.
00:34:11
Speaker
Because we are saying: you have to try to develop a very powerful artificial intelligence if that's your goal. Yeah, but what about the others? What about the world? What about the consequences?
00:34:27
Speaker
It's not just about fulfilling your dreams. It is also about taking care of others.
00:34:38
Speaker
So this talk of the Silicon Valley ethical outlook is really interesting to me. What would you say is the difference between Silicon Valley's outlook on human flourishing and the Stoic outlook? What does it look like to flourish as a human according to the Stoics?
00:35:05
Speaker
And how does that compare to Silicon Valley's outlook, or maybe the modern liberal outlook, on human flourishing?
00:35:16
Speaker
Maybe there's no single homogeneous position in Silicon Valley, because you have transhumanists and you have people trying to regulate AI.
00:35:29
Speaker
But I think that Stoics were rationalists. So maybe they would accept the possibility of an intelligence, not human,
00:35:43
Speaker
and having dignity. So they wouldn't be like Elon Musk, who likes transhumanism in part because he says there's a danger from AI, so we have to take humans very far and give them a lot of power to compensate for what can happen with AI.
00:36:15
Speaker
But the Stoics could say, and many people in Silicon Valley are also saying, well, maybe we will coexist with other intelligences.
00:36:29
Speaker
I recommend watching The Electric State. It's a very recent film, on Netflix. It cost a lot of money, $300 million, and critics were not kind to it.
00:36:41
Speaker
But beyond its flaws, it offers a striking philosophical reflection. The film takes a critical stance on transhumanism and those who champion the idea that humanity should merge with technology to overcome its biological limits.
00:37:06
Speaker
At the same time, the film asks: if robots were ever to achieve consciousness, wouldn't they too deserve dignity? And well, I think that the Stoics were rationalists.
00:37:21
Speaker
They could say that what humans have as dignity is not the cells, is not religion, is not something biological.
00:37:33
Speaker
It's reason. So what about having other kinds of reason, of consciousness, in the world? And well, the film was not a success, but it's interesting to think about. Yeah, 14% on Rotten Tomatoes. Yeah, yeah.
00:38:01
Speaker
But can I ask: so transhumanism, do you think they kind of equate human flourishing with longer life, more power, enhanced abilities, whereas Stoicism would want to say,
00:38:16
Speaker
No, the true good is virtue.

Transhumanism and Stoic Moderation

00:38:19
Speaker
That's the only way to achieve genuine happiness. Of course.
00:38:28
Speaker
I think that you just described their position. You have to be just. You have to have wisdom.
00:38:38
Speaker
You have to have wisdom, to share the world with the rest of the minds, of living beings, but especially the ones who can think and have self-reflection and have moral values.
00:39:00
Speaker
And that's completely different from the program of transhumanism. Yeah, and also regarding your question, the stereotypical view of Silicon Valley, the stereotypical ethos of Silicon Valley, is move fast and break things.
00:39:18
Speaker
We have to develop this artificial general intelligence. Why? Because it will give us money, or because, I don't know, it's something I dreamed of as a child. So we have to do it. Okay, well, it's going to have biases. No problem, we'll solve it along the way. Okay, but maybe it's going to have this existential risk. And even when they address these kinds of problems, like Sam Altman did in the US Congress: okay, this AI system, this artificial intelligence, or even the narrow intelligence, is producing problems regarding
00:39:52
Speaker
bias, it could have problems regarding deepfakes and political or social manipulation and things like that. But we have to develop this technology fast, because maybe the Chinese will develop it first; we have to do it fast in order to earn money, to develop these things, to achieve our dreams.
00:40:11
Speaker
So maybe from Stoic philosophy: okay, you can develop these AI technologies, models, and systems, and maybe you can run your way toward artificial general intelligence.
00:40:24
Speaker
But you have to do it with certain values or certain wisdom characteristics, like Professor Bernardo says. Okay, run, but take a break.
00:40:36
Speaker
Look at the horizon, try to think about the risks and the prospects, and develop the systems thinking about all the possible risks and all the possible consequences. For example, whether the system is going to think about humans,
00:40:51
Speaker
or about us, its creators, so that maybe the system could have a little bit of pity for us if it becomes a superintelligence. Maybe it can develop compassion.
00:41:01
Speaker
Maybe it can develop an understanding of human beings. Yeah. Oh, I'm sorry, I didn't mean to cut you off. I was just going to say it seems like the move fast and break things ethos really goes against the traditional virtue of moderation, right?
00:41:18
Speaker
Yeah, temperance. And as well as cosmopolitanism, which is what it seems like you were just highlighting there. Yeah, indeed. Like Professor Bernardo says, it's not that all of Silicon Valley thinks this way, but it's pretty much the general idea. If you look at the actions of these companies, if you look at the AI race, because it's not only the USA versus China, it's among these Silicon Valley companies themselves.
00:41:49
Speaker
And if you look at the history of Silicon Valley, at the dangers of social media: okay, we have to develop social media without thinking about the consequences, and we have the consequences right now.
00:42:01
Speaker
Problems regarding depression and anxiety, or the silos that we live in on our social media. We could have prevented these dangers and these problems, but we developed the systems as fast as possible to be the ones that could gain the markets. So the Stoics would maybe suggest: okay, you can try to achieve AGI, but you have to do it with wisdom characteristics, in training and also in implementation and inference.
00:42:37
Speaker
It's pretty much the same thing in inference and implementation. So you have to train these models, you have to try to develop these models, and even if they break a little from your control,
00:42:48
Speaker
be sure that at least you try to make these models inherit human values of wisdom, temperance, and regard for all other living beings, like Professor Bernardo says, among other things.
00:43:03
Speaker
So it looks like we're moving a little bit into AGI, and we'll get there in a second. I did hear you say a couple of things about the negative effects of social media.
00:43:17
Speaker
Anyone who is listening to this knows that we've talked about it before on the show, right, Sam? Anxiety and depression go up when young people use social media.
00:43:28
Speaker
But there are also other transformative effects if you are a heavy social media user. And Sam and I usually frame this in terms of their effect on our autonomy.
00:43:43
Speaker
So in past episodes, we talked about how Amazon, that evil genius Jeff Bezos, is getting me to buy more stuff by telling me: you know what, if you just add one more thing to your cart, we'll get it to you tomorrow.
00:43:57
Speaker
And of course I say, yes, I want more stuff fast. So there's a lot of choice architecture involved in social media platforms and online platforms that is changing how we behave. We're more impulsive, we have less patience, we don't want to let things take the time that they take.
00:44:20
Speaker
So what would a Stoic say to this? I mean, if Marcus Aurelius came back to life, would he be on Instagram and TikTok? How should we treat these apps? Should we just get away from them altogether?
00:44:36
Speaker
I think that if you want to use... I'm never going to tell anyone not to use these kinds of applications, or even to stop using ChatGPT to

Applying Stoicism to Modern Tech Use

00:44:46
Speaker
cheat on their homework. But okay, maybe you have to chill a little bit, and you have to
00:44:54
Speaker
think a little bit more. So the advice, coming back to the value of wisdom, is to act with temperance. For example, if you are on Temu and you see all these flash deals, okay, it's 50 cents, $1, $2, buy it now, or otherwise you're going to lose this offer, you have to act with temperance. You have to think: okay, do I really need this product? Do I need another pair of AirTags
00:45:23
Speaker
for my cat, in case he ever gets lost in the apartment, or do I have enough? So it's pretty much about teaching ourselves, and even teaching our students or the public, to act with temperance.
00:45:37
Speaker
So do I really need this? Do I really need to post every day on X, formerly known as Twitter? Do I really need to share the picture of my kids when they were two years old on Facebook?
00:45:51
Speaker
So it's an exercise of temperance. And by that, it's also understanding the risks of these platforms. For example, coming back to the forest: okay, if I know that there are German tribes in the forest trying to get revenge on us because we are the bad Romans who tried to take their territory, I wouldn't just
00:46:10
Speaker
charge into the forest. It would be stupid, even if I want to win the war as fast as possible. I would act with wisdom, with temperance; I would scout the terrain, I would try to understand all the dangers and act accordingly. So it's pretty much the same with social media.
00:46:26
Speaker
And also understand that no one is immune to the dangers of social media and the AI models that social media has. For example, someone told me they were watching how some people consult these systems for everything. And I asked this friend: okay, when was the last time you went to Spotify, or Google, or
00:46:58
Speaker
Apple Music to choose a song by yourself, without the algorithm telling you which to choose? So act with autonomy, with wisdom, with independence, and things like that. Yeah, this reminds me of a line, I think, from Epictetus. I teach the Stoics, but I don't teach individual Stoics, I just teach Stoicism, right? But I think it's from Epictetus, where he tells his students: impression, wait for me a little. Let me see what you are.
00:47:28
Speaker
Something to that effect, where it's like: the thoughts that come into your mind, don't accept them immediately. Maybe say, well, this is a desire, and maybe it's an irrational desire. And so you're saying here what Sam tells me all the time, by the way. Sam's like, you've got to stop buying so impulsively, right? So when I have that desire to get my goodies tomorrow, is that a rational desire? Just pause and examine each thought as it comes in, right? So that's great advice, I think.
00:47:58
Speaker
Good. Yeah, I agree. And it's maybe not just the Stoics, because temperance, justice, courage, and wisdom are the virtues Plato uses to build his republic.
00:48:15
Speaker
So many of the ancient philosophers would accept the four classic virtues. And today we have to read them.
00:48:28
Speaker
And Michel Foucault would say we have to practice the souci de soi, to take care of ourselves, because we are really alienated by all those algorithms. Michel Foucault recognized during the 70s and 80s that this was a danger, he called it biopolitics, and he went to the ancients.
00:48:55
Speaker
He has a book called The Hermeneutics of the Subject, and it's just readings and commentaries on the Stoics and other ancient philosophers, because Foucault recognized the problem of a systemic control of the population through technologies. So I accept the advice of Michel Foucault.
00:49:25
Speaker
And I think that we have to be temperate, we have to cultivate wisdom, and we have to be very courageous. That's also a virtue that we didn't talk about, but we have to face what is coming, and that demands courage.
00:49:54
Speaker
What would you think about this kind of thought? This is a thought I've had before: moderation, okay, that's the ideal.

Future AI Challenges and Stoic Courage

00:50:06
Speaker
Ideally, we could be moderate with various things, with ChatGPT, with social media, and so forth.
00:50:13
Speaker
But with the technology today, the temptations are designed to overwhelm our self-control. So detachment really becomes the wiser path.
00:50:28
Speaker
It's more prudent to detach, because it's nearly impossible for most people to remain temperate in a context that is designed to overwhelm your self-control and erode your capacity for moderation. So what do you think about that line of thought: yeah, moderation would be great, but
00:50:51
Speaker
we have to detach because it's just, yeah. Yeah, the problem is that if you detach too much, you won't be able to face the risks.
00:51:03
Speaker
We have to understand, to know the risks, and maybe to find other passions. If social media and all the algorithms are trying to control you, well, outside there are many other things. I found, for example, capoeira. Jorge knows that I am a fanatic of capoeira now.
00:51:26
Speaker
It's going to the park to practice a kind of martial art, but it's also for old people like me, and with women; you don't have to be very strong to practice it.
00:51:39
Speaker
So my way, instead of detaching from technology, was to find another passion that has nothing to do with it, that is very different, because it's physical, it's in community, it's outside, you feel your body.
00:52:01
Speaker
I was very sedentary, and now I like to finish the podcast and go practice, because my friends are there and it's possible. And it's also for nerds like me. So maybe, I don't know if that answers your question, but: to look for other passions, because there exist other ways
00:52:26
Speaker
of being addicted, maybe, but not so controlled by some oligarchs who are making money out of your freedom and out of your psychology.
00:52:38
Speaker
And now they are making money out of yourself.
00:52:47
Speaker
Well, I think that's great advice. I love that idea. Obviously, some ancient schools did detach, right? The desert fathers, the Christian desert fathers, just went out into the desert, like, forget all this. But that would actually just let things go in an even darker direction, because no one would be guarding the technology or objecting to it.
00:53:09
Speaker
So that takes us to maybe one of the last questions here, to begin to wrap up: superintelligence. Let me
00:53:21
Speaker
try to summarize your view, and then I'll let you have the last word on it. Superintelligence: is it possible? I think in the article you're not even going to
00:53:36
Speaker
take a position on that; there's no way we can even come up with a probability for it. Maybe it's just a speculative fear that's actually distracting us from more pressing concerns today.
00:53:49
Speaker
So just tell me, what are your general thoughts about superintelligence? How much attention should we pay to it? And how can we live with Stoic wisdom in the face of this potential threat?
00:54:05
Speaker
Well, yes, today's narrow AI brings real problems we must solve. But worries about superintelligence aren't fantasy. They are a call to action, because, and I don't agree with you when you say we don't have a probability.
00:54:20
Speaker
There are subjective probabilities, and when David Chalmers or Geoffrey Hinton or even Elon Musk have their subjective probabilities on the emergence of AGI, well, that's not irrational. A Bayesian probability is subjective, it is personal, but it's not false. We should invest as much in aligning AI with human values as in building new innovations, new AI, calibrating these systems to reflect care and virtue and not just power and the accumulation of money for their creators.
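To make concrete what a subjective Bayesian probability update looks like, here is a minimal sketch with invented numbers (not figures from the guests): a personal prior about a risk is revised after evidence that the person judges more likely if the risk is real.

```python
# Toy one-step Bayes update with invented numbers:
# P(H|E) = P(E|H) * P(H) / P(E), where P(E) = P(E|H)P(H) + P(E|not H)P(not H).
def bayes_update(prior_h, p_e_given_h, p_e_given_not_h):
    """Return the posterior probability of hypothesis H after evidence E."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

if __name__ == "__main__":
    prior = 0.10  # invented personal prior that a given AI risk scenario is real
    posterior = bayes_update(prior, p_e_given_h=0.8, p_e_given_not_h=0.2)
    print(f"prior = {prior:.2f}, posterior = {posterior:.2f}")  # 0.10 -> 0.31
```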
00:55:12
Speaker
So there are real concerns about superintelligence. I already mentioned the pet tiger cub metaphor from Geoffrey Hinton, and he is one of the fathers of Bayesian networks; he knows what he's talking about.
00:55:33
Speaker
And well, how many Nobel Prizes do you need to accept the real possibility of this existential risk?
00:55:44
Speaker
Because David Chalmers is one of the best philosophers of mind alive. So it's not difficult to understand that a machine that can advise you on how to fill out your tax declarations is not stupid.
00:56:02
Speaker
It's not a stochastic parrot, or not only a stochastic parrot; or rather, we are also stochastic parrots. And we didn't know that; we are not purely logical, deductive animals.
00:56:18
Speaker
No, we also think with patterns, pattern recognition, probabilities, and we are creating something like us, but made with silicon and programming. And maybe we cannot align it to our human values.
00:56:38
Speaker
Maybe, and Jorge and I are working on this, all the literature on alignment has that problem: that we are very different.
00:56:50
Speaker
We would have to be very lucky to succeed in aligning AGI. Yeah, in my case, as you can see in the paper, I may have a different perspective regarding the possibility of artificial superintelligence. In my case, it's more a possibility than something that will inevitably happen.
00:57:21
Speaker
It's something we don't know is going to happen. Maybe it will happen and maybe not. I'm also more aligned with the perspective of Emily Bender regarding the stochastic parrots that we're dealing with,
00:57:31
Speaker
pretty much things that predict the next word. But at the end of the day, as I discussed with Professor Bernardo earlier on, you have to follow the money.
00:57:42
Speaker
You have to see where the money goes and where the problems are going. For example, right now, all the billions of dollars invested in artificial intelligence have a single goal in mind:
00:57:58
Speaker
artificial general intelligence. And a recent paper I was reading this morning on artificial general intelligence, by Yang and many other authors, pretty much says that one of the precursors of superintelligence is achieving artificial general intelligence.
00:58:21
Speaker
Once we have that, we will have superintelligence. So whether it will happen, or whether it's never going to happen, we have to be prepared. So, to summarize: we have to act on the current problems of narrow AI, bias, privacy, explainability, and all that, yes.
00:58:47
Speaker
But we also have to prepare for the things we don't know. And singularity is an elegant word in science for saying: we don't know. We don't understand. We don't know what is going to happen.
00:58:59
Speaker
So we have to be prepared, we have to prepare for the things that are coming in the future. Because even if there is only a 3% chance of this coming to fruition, we have to be prepared to act
00:59:13
Speaker
on the things that are currently happening, apply the same legislation, and focus on better design and inference choices.
00:59:25
Speaker
But also see the big picture. There is a race in which investments of billions of dollars, more money than we can imagine to buy stuff on Amazon, are going to have real-world consequences
00:59:42
Speaker
if it happens. If AGI happens, whether from Silicon Valley or the Chinese, or maybe from a very smart woman engineer in India, then, as many authors say, AGI is pretty much a stepping stone.
01:00:04
Speaker
And it is a stepping stone that will produce, almost immediately, artificial superintelligence.
01:00:15
Speaker
Well, we hope we can move forward with wisdom and courage. And thank you, Bernardo and Jorge, for sharing your wisdom and courage with us. When you have another paper, we'd love to have you back on the show. Thank you very much. And thank you for inviting us.
01:00:31
Speaker
Thank