
18| Feeling Right: Emotions & Ethics — James Hutton

S1 E18 · MULTIVERSES

Can we trust our emotions as a guide to right and wrong?

This week's guest James Hutton is a philosopher at the University of Delft who argues that emotions provide a way of testing our moral beliefs — similar to the way observations are used in natural sciences as evidence for or against theories.

This is not to say that emotions are infallible, nor that they are not themselves influenced by our moral beliefs, but that they do have a place in our moral inventory. In particular, the destabilizing power they can have — their capability to clash with our beliefs — is an important counterpoint to the entrenchment of poorly justified beliefs.

I found myself revising my own views throughout this discussion. It feels right that emotions play a role in our decision-making. Perhaps that feeling is justified.

Outline

(00:00) Intro

(2:28) Start of conversation: Metaethical frameworks

(4:45) Reason alone cannot provide moral premises

(6:30) Are moral principles self-evident? Or do we have a moral sense?

(11:00) Is emotion antithetical to reason?

(12:00) Emotional senses: Amia Srinivasan’s example of Nour, a case where emotions are trustworthy

(23:00) Antonio Damasio & Descartes’ Error: the importance of emotion as a motivating force

(29:30) … But should it be a motivating force?

(30:30) Tolstoy’s emotional reaction to an execution and how it disrupted his moral theory of progress

(34:50) Emotions can cause us to revise our moral beliefs

(37:25) This does not mean emotion is infallible as a guide to morality

(40:25) The tension between reasoning from principles and emotional reaction creates a useful instability

(42:00) The analogy between science and moral reasoning: sometimes observations (and emotions) should be ignored, but sometimes we should pay attention to them

(46:00) Is it possible to have a no-holds-barred ethics incorporating principles and emotions? (Not really!)

(49:40) Observations and theories are perennially in conflict, sometimes we reject the observation

(50:40) Utilitarianism: elegant but easy to find cases where it clashes with our intuitions

(51:50) Harvesting organs — where the greatest good for the greatest number does not feel right

(53:20) Ethics and Intuitions — Peter Singer: we shouldn’t trust our emotions

(54:20) But why trust the utilitarian principle over our intuitions?

(57:45) Situations in which we need to be wary of our emotions: burn a teddy vs releasing tonnes of CO2

(1:03:00) Emotional blind spots: abstract, global, probabilistic, outgroup vs ingroup

(1:08:00) Partiality: should we treat everyone equally, or do we have special obligations to friends and family?

(1:10:28) Heckled by a doorbell

(1:11:50) Partiality is a litmus case for utilitarian principles vs intuition

(1:15:30) Given emotions are fallible how do we make good use of them?

(1:17:30) Unreliable emotions and ethical knowledge: blood sugar, mood &c. cause emotional noise

(1:19:30) How do we deal with noisy information in other areas — the analogy with testimony

(1:23:50) Defeaters — cues that give us pause to double check our emotional responses

(1:25:40) Negative meta-emotions: e.g. shame at being angry

(1:26:25) Should we expand our emotional repertoire?

(1:30:20) Flight shame as an example of a new emotional response

(1:34:25) Should we expect evolution to have created morally fitting emotional responses?

(1:38:15) The problems with evolutionary debunking arguments

(1:46:43) This is work in progress — google James Hutton Delft to get in touch

Transcript

Introduction to Meta-Ethics: The Role of Emotions

00:00:00
Speaker
How should we live? What things should we do? What things should we not do? These are enormous questions. And it's the domain of meta-ethics to try to figure out the frameworks by which we answer those questions. Now, if you think about what we actually do and the way we go about our lives, it's clear that emotion plays a huge role. So it's somewhat surprising that it's a bit of a minority view within meta-ethics that emotions should play a role in determining our actions, and that in fact they do.

James Hutton's Perspective: Emotions as Ethical Guides

00:00:29
Speaker
My guest this week is James Hutton. He's a philosopher at the University of Delft. And he is a philosopher for whom emotions should play that role in ethical decisions. He freely admits that he doesn't have all the answers here. He's still kind of working out his ultimate position, I guess. But he does have some excellent intuitions. And one is that emotions
00:00:51
Speaker
should provide some evidence in developing our moral theories, rather like we use our observations within physics to build up theories. Our emotional reactions to things are evidence, like a kind of moral compass that points us towards what is right and wrong.
00:01:09
Speaker
And just as in physics, not only can we use our emotions in building up our theories, we can use them to overthrow our theories. And this, for me, is one of the most beautiful points that comes across.

Emotions vs. Reason: Tolstoy's Case and Ethical Frameworks

00:01:21
Speaker
He gives a wonderful example of Tolstoy, who witnesses a terrible event. I won't provide too many spoilers here. And just seeing that event and his very strong emotional reaction to it,
00:01:36
Speaker
upset Tolstoy's worldview, it turned it over. James argues that this capability of emotions to destabilise our beliefs can be really powerful, that it can cause us to reassess what we think we know. And I think this is very promising, very hopeful, because we live in a world where beliefs seem to have become very entrenched, and it can be very hard to get people agreeing. If emotion can get us to a common place, a common understanding, then I think we should welcome that.
00:02:07
Speaker
So without further ado, I'm James Robinson. My guest is James Hutton. This is Multiverses. James Hutton, thanks for joining me. Thanks for having me on the podcast. I'm excited to be here.
00:02:33
Speaker
I want to start by kind of grounding ourselves. You have a particular view about how we acquire moral knowledge, and even the idea of moral knowledge is already a view about the existence of that thing. But before we sort of zone in on that, maybe we can start with the landscape in general. What are the different
00:02:57
Speaker
ethical, or sort of meta-ethical, positions that one can take? What are the different frameworks for thinking about how we should act?

Exploring Moral Knowledge and Ethical Reasoning

00:03:05
Speaker
Big question. Okay, right. So maybe a good place to start is by thinking about something that we think that we know in the sphere of ethics.
00:03:18
Speaker
So we could start with something really boring and uncontroversial like murder is wrong. Why don't we go with that? So take the proposition or the claim that murder is wrong.
00:03:31
Speaker
I take it that if anyone, faced with a choice between murdering and not murdering, just chose murdering because it was a little bit more convenient, rather than feeling murder to be something really to be avoided, we would think that, well, they're missing out on something that we all know, which is that murder is wrong.
00:03:53
Speaker
Well, when you start reflecting on this, an obvious question arises, which is: how do we know that murder is wrong, or that other kinds of conduct are wrong? For example, I don't know, breaking promises unless you have a really good reason, or massively polluting a village in order to make a little bit more profit, or whatever. How do we know that all of these things that we in fact know to be wrong are wrong?
00:04:20
Speaker
Now, lots of philosophers would question the idea that it really is right to apply the concept of knowledge here and say that we do know these ethical claims to be true. And we can maybe return to that in a bit. But if we start with this tempting idea that we really do know certain moral claims to be true, then, yeah, well, how do we know them? Some of them we know by reasoning them out as conclusions from other things we know.
00:04:51
Speaker
We might know some general principle. Take the example of polluting a village and poisoning people. Well, that's an instance of harming people. So maybe you know a general principle that unless you've got a really good reason, you shouldn't harm people.
00:05:04
Speaker
And you know that a particular course of action is going to harm people through pollution. Well, then you can reason to a moral conclusion from that moral premise about harm in general being off-limits and the facts of the case. But then the question arises again: where does that kind of premise come from? Where does the principle you're reasoning from come from?
00:05:25
Speaker
So lots of theorists find themselves pushed towards saying, well, not all of our moral knowledge can come from reasoning because reasoning can't go on forever. You need some moral premises to start from or something like that. And as a result of that, philosophers who want to hang on to the idea that we really know stuff in the sphere of morality
00:05:47
Speaker
find themselves appealing to some notion of things kind of intuitively seeming right or wrong. And there's a range of different ways of going with that. One of the most dominant kind of views would be to say that certain principles are just self-evident to reason itself.
00:06:08
Speaker
And philosophers who take that line are going to say they're going to try and defend a kind of analogy between the axioms of mathematics, perhaps, and the axioms of ethics. So, for example, the utilitarian philosopher Peter Singer claims that it's simply self-evident that we ought to do the thing that produces the greatest kind of net benefit to people in terms of pleasure or well-being.
00:06:36
Speaker
Now, I think that that appeal to, you know, rationally self-evident principles is a little bit unsatisfying. I don't think it can ultimately be upheld, but I nevertheless
00:06:49
Speaker
am confident that we really do know things in terms of ethics.

The Debate on Objective Knowledge in Ethics

00:06:53
Speaker
And that's led me to consider different accounts of where those starting points of ethics might come from. And in particular, as you mentioned, I've found myself defending the idea that we can know that something's wrong on the basis of an emotional experience that we have. Yeah, that's a really nice overview.
00:07:14
Speaker
As you say, even the idea that moral knowledge is possible is somewhat controversial, or at least there are differences of opinion about it. And I suppose we can carve up the landscape along different dimensions. One is:
00:07:36
Speaker
Does one believe that ethical judgments can have the status of knowledge, that there can be ethical or moral truths? And another is, well, regardless of whether these objects
00:07:52
Speaker
have that status, or maybe they just have some other normative status, how do we arrive at the right ones? What's the justification for those things, whether you're justifying knowledge or truth or something else?
00:08:13
Speaker
Those dimensions are perhaps somewhat related; they're not entirely independent, in that certain positions on one presume positions on the other. If one is a relativist about moral knowledge, and so thinks that there is no such thing as moral knowledge, no truths out there, you might be more inclined to a position that says, OK, well, the way that we justify our beliefs is some sort of societal, coherentist picture
00:08:44
Speaker
that shifts over time and shifts depending on the culture that you live in. Whereas if you take a more objectivist stance, you're unlikely to be able to maintain that relativist picture. Although I think there are probably shades in all of these things and it's not quite as clear-cut as I've just stated.
00:09:05
Speaker
The other thing that I think is really interesting, as you say, is that reason, I think whatever standpoint you take, reason has a role in justification. But what's at stake is more the foundations for your reasoning. Do you just have a very small set of axioms, similar to mathematics, as you say? Mathematics has what?
00:09:29
Speaker
You can axiomatize it, or arithmetic at least, in, I don't know, nine axioms or maybe fewer. I'm not sure of the number, but it's very few. And Peter Singer and others might argue, well, you can actually axiomatize moral philosophy with even fewer. You just need to say that you want to produce, I don't know, the greatest good for the greatest number or
00:09:52
Speaker
maximize pleasure or something like that and have perhaps even a single axiom. There's appealing simplicity to that. Of course, the complexity that's hidden is then you have to crank through lots and lots of calculations to try to figure out the consequences of every action.
00:10:12
Speaker
And then there are other systems, I guess more moral-empiricist systems, where you are kind of digging out moral knowledge. And if I understand rightly, you're not beholden to discover every moral judgment empirically, or to justify everything through a kind of empirical discovery. You can sort of discover bits and pieces and then use reason
00:10:43
Speaker
to bring everything together. So you have some landmarks, right? You can think of them as primitives, in the same way as we think of physical facts: not so much primitives as things that are independent of us, but they're not the whole story. You
00:11:01
Speaker
have a whole collection of physical facts and you can join them together to build a picture of the world using reason. We've mentioned emotion. So how does that fit into the picture? It seems very antithetical to reason. So how can emotion help us get towards moral knowledge?

Emotion and Reason in Ethical Decision-Making

00:11:20
Speaker
Right, that's a good question. Okay, so yeah, this antithesis between reason and emotion is something that theorists of emotion and people in various different disciplines have begun to question in the last few decades.
00:11:37
Speaker
But so I think we can certainly draw a distinction between emotion and the process of reasoning, that is, inferring a conclusion from some premises. Maybe the best way to explain that is with an example. So I suggested something earlier where we might have
00:11:55
Speaker
the principle that we should do no harm, and then some facts about the case, and then we just kind of put those together and reason through: if those premises are true, then the conclusion must be true, that in this case it's wrong to pollute the village.
00:12:11
Speaker
Now emotion doesn't work like that, so think about a different kind of case. Maybe we can take an example from the philosopher Amia Srinivasan. She has a paper from a few years ago called Radical Externalism, published in the Philosophical Review, which is one of the most prestigious journals in the discipline.
00:12:35
Speaker
And Amia Srinivasan, who's now a professor in Oxford, gives the following example. Maybe I can just read it out, if that sounds all right.
00:12:46
Speaker
So she says, consider the following case. Noor, a young British woman of Arab descent, is invited to dinner at the home of a white friend from university. The host, Noor's friend's father, is polite and welcoming to Noor. He is generous with the food and wine and asks Noor a series of questions about herself.
00:13:10
Speaker
Everyone laughs and talks amiably. As Noor comes away, however, she is unable to shake the conviction that her friend's father is racist against Arabs. But replaying the evening in her head, she finds it impossible to recover just what actions on the host's part could be thought to be racist, or what could justify her belief in the host's racism.
00:13:35
Speaker
If pressed, Noor would say she just knows that her host is racist. And then Srinivasan goes on to specify that
00:13:47
Speaker
In fact, the host is racist. He thinks of Arabs as inherently fanatical, dangerous and backward, and as a result sent off subtle cues that Noor subconsciously registered and processed. It is this subconscious sensitivity that led to Noor's belief that her host is racist.
00:14:06
Speaker
So think about this example. And I think that there are lots of everyday cases that are like this, where we make a judgment that someone's being an asshole, or someone's being rude, or someone's being duplicitous, or whatever. And we can't really put our finger on what it is about their conduct that makes us think that there's something morally suspect going on here.
00:14:32
Speaker
I mean, you might think that in those cases where you can't point to the facts of the case, you can't really be said to know that they're doing something wrong. But it is tempting at least to say that, well, even if it's a kind of weaker
00:14:48
Speaker
judgment, if you can't specify the things that make it wrong. Like, some people really are pretty reliable at picking up on when someone's being an asshole. I think we all are, actually. We don't necessarily have some kind of principle or set of axioms about what kind of behavior means you're being an asshole.
00:15:07
Speaker
We actually find it really difficult to try and articulate that. It would be a bit more like trying to come up with principles for what makes a film funny or something like that. We know it when we see it, but we don't have a set of principles that we can reason from.
00:15:22
Speaker
So yeah, there's this question, do we really know in cases like that? And what I want to do is say, well, yeah, maybe we do. And then I want to ask what kind of psychological mechanisms are actually in play when we're making those judgments that don't involve putting a pre-existing, previously known moral principle together with certain facts that we know.
00:15:44
Speaker
And I think the evidence is that in a case like Noor forming this judgment, she's very likely to be experiencing an emotion in response to these subtle cues. And our emotions, our emotional responses are shaped through long experience that we learn how to respond to things emotionally from our parents or other guardians when we're little.
00:16:04
Speaker
and our friends in the playground, or whatever. There's a huge, endless array of experiences that we've had throughout our lives, things we've experienced for real, films we've watched, books we've read, you name it, where we're not just gaining information about how things are in the world descriptively. We're also finding that
00:16:26
Speaker
our emotional sense is being refined over time. And someone like Noor, who's probably chatted with friends about various ways in which people have been racist or just belligerent towards them, she probably, through her life, has had loads and loads of experiences which have refined her sense of unease, such that
00:16:53
Speaker
she feels uneasy and starts to think, well, there's something up here, in the presence of cues that, well, she can't specify what they are, but there is a pattern there, and she has learned over time to pick up on the pattern. So the idea would be that our emotional dispositions, maybe a feeling of unease,
00:17:15
Speaker
In a more positive case, our feelings of compassion, we can't always give principles for why we feel compassion in a certain case. It's not as if we first of all go through various steps of reasoning and then conclude, oh, I will now feel a feeling of compassion. No, it's something that's elicited from us a little bit, we might say more automatically, although always remembering that
00:17:39
Speaker
emotions are responses to informational cues; that's what the cognitive science of emotion tells us. And as I say, that response is shaped through long experience. So the picture would be: as well as reasoning, another tool that we have in our toolbox is these refined emotional senses for the different ethically relevant scenarios that occur in our lives. And the picture there would be that, well,
00:18:06
Speaker
When you have that emotion, when you have that feeling of unease, it suggests to you, it kind of seems to you as if something is up here. When you feel outraged, it seems to you that someone's doing something wrong. When you feel compassion, it seems to you that someone is undergoing significant suffering. Something like that. And the idea would be, in my view, and this is something I've tried to defend in some of my writing,
00:18:35
Speaker
that we really are entitled kind of in an epistemic sense, we really are
00:18:41
Speaker
entitled. It really is legitimate to base the belief that someone's being an asshole, or that something is up in this situation, on an emotion. These kinds of everyday ethical judgments that we're constantly making, it is legitimate to make them on the basis of emotion, all things being equal. Of course, there can be substantive reasons not to trust your emotions in a given case. And I'm not at all saying that our emotions are infallible.
00:19:10
Speaker
But yeah, I'm trying to say that as well as this tool that we have of reasoning from principles that we already believe, we've got this other tool of picking up on subtle cues, responding emotionally, and using that to inform our beliefs.

Cultural Preferences: Reason Over Emotion in Western Ethics

00:19:19
Speaker
Yeah. In some ways it's a very surprising position given that
00:19:30
Speaker
you know, within the law, for instance, there's very little appeal to emotion. And within many of the ethical frameworks we've considered, like consequentialism, emotion just doesn't enter the picture at all.
00:19:49
Speaker
And again, I would say generally within culture, there is this kind of primacy, at least Western culture currently, a primacy of reason over emotion. And we seem very distrustful of our emotions. So all these things, I think, kind of set one up to be a little bit
00:20:08
Speaker
distrustful of this idea. And yet, as you say, I mean, this is something that we all experience. And we probably do trust our emotions more than we will own up to, right? I'm sure we trust them almost completely, you know, perhaps slightly more than we ought to. As you said, sometimes our emotions are fallible. But certainly on a daily basis, this is, we make many, perhaps most of our decisions emotionally,
00:20:36
Speaker
And it's somewhat refreshing to think that this might be the right way of acting, I have to say. But I think people will also feel a little bit suspicious of this example. It's been set up so that Noor was correct. And we'll probably all have had instances in our lives where we've got it wrong.
00:20:58
Speaker
We've made a judgment about someone, and we later realize we were incorrect. And, you know, from personal experience, often those are about people from other cultures, because just the kind of standard operating mode of behavior, let's say, is quite different. And someone, for example, might seem quite
00:21:15
Speaker
standoffish, not necessarily an asshole, but just not a fun person to hang around with, or maybe someone who seems a little bit, I don't know, shifty or unhelpful or whatever. But then you realize that that's just a particular way of presenting oneself that oftentimes is very culturally dependent. So I sort of want to buy in. Yeah. But I struggle with
00:21:45
Speaker
things like that. And one final comment. You mentioned this kind of process of emotional refinement. And that seems important. And perhaps, you know, getting things wrong is part of that process. That suggests, though, that this kind of emotional moral compass, perhaps, is not something that's completely innate.
00:22:09
Speaker
Or maybe it is, but needs some kind of configuration or calibration, let's say, as we go through life. And I'm interested in your opinions as to whether this, are we discovering something about society within which we find ourselves and learning from the cues from others as to what is right or wrong?
00:22:33
Speaker
And is that calibrating our moral compass? Or are we somehow tuning in on something that's deeper and more constant and does reflect an objective state of the world or truth?
00:22:46
Speaker
Right. Yes. So one thing to begin with, perhaps. So yeah, Srinivasan's example is set up in such a way that, well, we're told we get to peek behind the curtain. Srinivasan is kind of describing this from a third person standpoint and saying like, well, yeah, the facts of the case are that this person is racist and was sending off cues. Real life cases are usually not like that.
00:23:12
Speaker
But let me give you a real life case that was described by the neuroscientist Antonio Damasio, which will hopefully help to bring you on board even more. So Damasio in 1994 published this great book called Descartes' Error. And the aim there was really part of what I described earlier:
00:23:36
Speaker
this move, across a range of different disciplines, of trying to attack the sharp distinction between reason and emotion, where reason is construed as the thing that should always be in control and emotion as just this disruptive thing that we would be better off without. So Damasio is trying to attack that picture.
00:23:54
Speaker
Somewhat unfairly, he chooses René Descartes as his target in the title of the book. Some listeners might be aware that René Descartes actually wrote a late book called The Passions of the Soul, which is very amenable to the idea that emotions are central to our moral lives. But let's leave that to one side. But anyway, so Damasio, one of the things that he does in that book is he describes some of his experience with people with different kinds of neurological damage.
00:24:21
Speaker
And there's this one guy who he just gives the initials, I forget what they are, but there's this guy who's kind of been extremely successful in his life and he has like a healthy marriage, but then he suffers some brain damage. This is happening kind of in the early 90s, so they're able to perform an MRI scan and figure out what kind of damage has happened. His ventromedial prefrontal cortex has been damaged and
00:24:46
Speaker
Damasio builds this really impressive case that the ventromedial prefrontal cortex is central to our emotional lives. So what happens if you have damage to this part of your brain and your emotional functioning is impaired? So interestingly, this patient does perfectly well on IQ tests, is perfectly good at memory and
00:25:10
Speaker
can see fine and all of that kind of thing. So lots of these kind of non-emotional cognitive capacities are unimpaired, which suggests that maybe we were right. Maybe we were right to distinguish between reason and emotion. You can do fine even if your emotions are knocked out.
00:25:24
Speaker
But he has all sorts of problems in his practical life. He really struggles to make decisions and not just kind of like big life decisions. He finds himself just paralyzed in the morning. He's supposed to be getting ready for work, but he just is unable to keep himself focused on the task of making those rudimentary decisions we need to make about what to have for breakfast, whether to brush your teeth before having breakfast or after.
00:25:52
Speaker
Due to this emotional impairment, this patient finds himself completely unable to navigate those like value trade-offs that we're constantly making in life. He's unable to orient his decisions around which things matter. And so yeah, Damasio makes this great case for this broader theory in which emotions serve to kind of, they're our signals for which things matter that orient us in our decision making.
00:26:21
Speaker
And without emotion, we're really lost. And he gives some interesting clinical evidence for that. Now, sorry, I had a slight slowness in my connection, possibly my brain as well, so I wasn't trying to jump in. But yeah, it seems like the perfect illustration of Hume's very famous, perhaps over-quoted line: reason is, and ought only to be, the slave of the passions, right?
00:26:51
Speaker
And the other thing that this example, I think, illustrates, and it's quite an important point, is that ethics is not just about murder or about climate change and big things. It's just about how we live our lives, right? It's about the right way of living, and that includes
00:27:12
Speaker
decisions you take on the most prosaic of things, like what to have for breakfast. And this morning I was thinking, well, should I go to the beach today? And I was like, well, I could do a bit more preparation for this podcast, but it's also really sunny and it's a beautiful day. And it feels kind of wrong to waste that. I'm here in Scotland and it doesn't happen so often. And so, you know, it is a mini ethical decision. So, yeah, really interesting to hear that
00:27:43
Speaker
knocking out that center in the brain, it has that kind of almost paralyzing effect on one's decision making ability. Yeah, yeah, absolutely. So yeah, I think that that's right. We sometimes
00:28:00
Speaker
fall into this trap of thinking the ethical decisions are just the really big ones, you know, things like, you know, should I, should I go vegan or not? But of course the ethical decisions are also like, should I cancel on my friend? Cause I'm feeling a bit knackered or should I make the effort? Like it's like, do I really owe it to them? That kind of thing. We're constantly managing these things. We're constantly making decisions on the basis of not just what we want to do, but our concern for other people's interests, wellbeing, and, and the things that we think really matter in life. So yeah.

Everyday Ethics and Larger Moral Questions

00:28:30
Speaker
I think that we should be bearing in mind that those are as legitimate and important ethical decisions as the big ones like should animals be given legal standing and should we mine lithium from the ocean floor if that means reducing carbon emissions but destroying ecosystems. I'm fascinated with those big ethical questions and in my research that's something that I'm beginning to probe a little bit more
00:29:00
Speaker
I do think that we need to stay in touch with the everyday experience of morality, which is just very pervasive. I mean, I think this makes a really strong case that emotions are a part of our everyday ethical practice. Whether they should be, or whether there's a better way, is another question. Or, perhaps better put, to what extent should they be a part of
00:29:29
Speaker
our ethical decision making? I don't know. So maybe one way of approaching the question which I tried to hint at at the beginning was
00:29:42
Speaker
Well, if not emotion, then what? So I think, as I've said, often we're in a position where we could use reasoning, but not always, right? We don't always have a principle in our pocket that we can apply to the case in hand. So without some kind of intuitive sense of what's the right thing to do in a given situation, we're often just going to be paralyzed. The other issue is where the principles come from. So yeah, you might think that
00:30:14
Speaker
that reasoning is more reliable than emotion when we can use reasoning. But as I sketched before, we need to get those principles from somewhere. The principles that you find plausible, that's of course going to be influenced by the culture you grow up in, just as much as your emotions are going to be shaped by your culture. And sometimes they're going to come into conflict.
00:30:35
Speaker
So in one of my papers that's currently under review, I use this example of Leo Tolstoy witnessing an execution. So would you like me to go into that a little bit to illustrate this? Yeah, yeah. And yeah, I really love this example. Thanks for sharing the pre-print. Yeah, please. This is a beautiful quote.
00:30:58
Speaker
So Tolstoy, towards the end of his life, writes this autobiographical story called A Confession, and I'd really recommend it. It's quite short, you know, it doesn't take too long to read, but it's really fascinating. And one thing that happens early in the story is...
00:31:16
Speaker
He's kind of telling us about his intellectual development. So when he was a young man, he sat around among high society in St. Petersburg, apparently with all of these literary intellectual friends of his, and he was very taken with this lively intellectual life. And among the people there, they'd have the kind of student discussions that people have.
00:31:39
Speaker
of the hot topics of the day. So people today, students right now, are probably staying up till 3am discussing whether ChatGPT is sentient or not; people in the 1800s stayed up all night discussing whether Napoleon was a great man or was evil, and discussing whether execution
00:32:04
Speaker
was something that was permitted if it was a force for social progress. And Tolstoy and his friends were really tempted by this idea where progress is what matters, where, you know, that's what life is about.
00:32:19
Speaker
So in the 1700s we've had all this social turmoil, we've had the French Revolution, we've had the American War of Independence, we've had people throwing off monarchy and erecting societies on the basis of what they think are principles of human rights, things like that. So Tolstoy and his friends when he's a young man get really kind of... They're really compelled, they're really convinced by this theory where
00:32:46
Speaker
where social progress is what matters. We need to find a way of driving the evolution of society forward, kind of by whatever means necessary.
00:32:57
Speaker
And one thing that Tolstoy comes to believe amidst all of this is that execution is indeed justified. If you're discussing the pros and cons of the French Revolution with your intellectual friends, then this is going to be at the forefront of your mind.
00:33:17
Speaker
You're going to be asking, was it okay to execute the French monarchs as a kind of tool for social progress? The assumption being that if you didn't execute them at some point, they'd use their vast resources and their networks of cronies around Europe to drag society back.
00:33:39
Speaker
So yeah, Tolstoy thinks that execution is not something to do willy-nilly, but something that can be justified when it's in the cause of social progress, until he travels to Paris a little bit later on and goes to witness an execution. And what he writes is this.
00:34:04
Speaker
The sight of an execution revealed to me the precariousness of my superstition in progress.
00:34:13
Speaker
When I saw the heads being separated from the bodies and heard them thump one after the next into the box, I understood, and not just with my intellect, but with my whole being, that no theories of progress could justify this crime. I realised that even if every single person since the day of creation had, according to whatever theory found this necessary,
00:34:35
Speaker
I knew that it was unnecessary and wrong. And therefore, that judgments on what is good and necessary must not be based on progress, but on the instincts of my own soul. So why do I bring this up? I mean, this is a case where I've described earlier that emotions can come into play and guide us when we don't already possess principles that tell us what to think ethically about a situation.
00:35:02
Speaker
But this Tolstoy example illustrates that we can also use our emotions to test the principles that we believe beforehand. So Tolstoy believes capital punishment is permissible if it kind of has a net benefit on the progress of the world or of human society.
00:35:22
Speaker
But then he witnesses an execution firsthand. And I would argue that what's going to be happening there is he's going to be having a really intense emotional experience. He's going to be experiencing horror, revulsion, disgust, a whole mix of different emotions. And as I said before, experiencing an emotion makes it kind of seem to you that things have a certain ethical status.
00:35:51
Speaker
He's experiencing horror, so he's therefore going to be experiencing this execution as something aversive. He's experiencing repulsion, so, in the midst of that experience, he's going to be experiencing the execution as something that's repulsive and should be stopped. Maybe something that's forbidden.
00:36:12
Speaker
And what he describes himself as doing is kind of taking on board the impression that those emotions are giving him, forming the belief that this execution is

Testing Ethical Principles Through Emotional Experience

00:36:25
Speaker
not as if it's kind of up for grabs and we just need to tot up whether this really did help social progress in order to figure out whether it's right or wrong. Rather, on the basis of this intense emotional experience, he's coming to believe that execution is inherently wrong.
00:36:43
Speaker
And that leads him to revise his principles. So he stops believing in this theory on which individual human beings can just be chewed up into a pulp in the jaws of relentless social progress. And he moves to, well, as he describes it, he abandons his superstition in progress, and he comes to know that you should use your instincts instead of this theory of progress.
00:37:10
Speaker
Now, anyone who's read the story will know that's not the end of Tolstoy's intellectual journey, but yeah, I think it really interestingly shows that we can use our emotional experiences to test and sometimes overturn the principles that we've believed before. Yeah, it's a really powerful example. I mean, it's such a
00:37:33
Speaker
It's such a visceral image that Tolstoy's passed down the generations there, the head thumping into the box, you know; you feel yourself having something of an emotional experience just reading it or hearing it. But again, I have this kind of conflict within me. I think so much is to do with the presentation, right? And one wonders if this had been
00:38:01
Speaker
a cyanide pill or something, right? Would it have provoked the same change within Tolstoy? Maybe it doesn't matter so much for the theory; maybe sometimes the absence of emotion doesn't prove anything. But I also wonder if there could be circumstances where you actually feel like, oh, wow, good, that person's been killed, right? You know,
00:38:29
Speaker
Maybe not, maybe our emotions just aren't rigged up that way, and I've never been in such a situation, fortunately. I think that's absolutely right. So I think there are going to be times when people experience kind of murderous rage as a result of, I don't know, someone's family member being run over, and there are a lot of societies in which that's going to lead you to basically demand the blood of the person that ran them over.
00:38:54
Speaker
So yeah, I'm not kind of pie-eyed about emotions; I don't think that they're always going to be leading us in the right direction. And it's no easy question, this question that arises in the Tolstoy case: he's changed his mind, but has he changed his mind for the better or for the worse?
00:39:16
Speaker
Someone could argue that, you know, he's changed his mind for the worse, that all we really need to think about is social progress, and anything is justified. That's something that a lot of Marxists would think, and, perhaps in a very different way, a lot of utilitarians would think today,
00:39:35
Speaker
that, you know, if things really were such that an execution was going to make people's lives better, then it's justified. Of course, lots of utilitarians would argue that that's just never really going to be the case, that the fear of living in a culture where execution is rife outweighs any benefits. But sorry, I'm getting a bit sidetracked there.
00:39:55
Speaker
But so yeah, I do think that there's a genuine question there as to whether Tolstoy's emotion leads him in a better direction or a worse direction. I don't think it's any easier for us, as people reflecting on it, to answer that question than to answer the, so to speak, first-order question of whether execution is inherently wrong. But what I want to say is
00:40:19
Speaker
If we accept the basic idea that the moral principles we've been brought up to believe, or the moral principles we've stumbled upon through our conversations with our peers or whatever, if we accept that they're going to have their limitations,
00:40:36
Speaker
Then it's a good thing that emotion introduces a kind of instability there. It's a good thing that you've got these two somewhat separate, although mutually informing processes, reasoning from principles and having a more kind of automatic response that's not a product just of what you think, but also of how your emotions have been habituated through experience, to use a term from Aristotle.
00:41:06
Speaker
So yeah, I think that in any given case of conflict between a principle and an emotion, it's a difficult task to work out whether someone who trusts the emotion is changing their mind for the better or changing their mind for the worse. And I think lots of the time, the principles that we believe should lead us to question the emotion that we have when we experience it.
00:41:34
Speaker
So I think that emotion is kind of this input to a process in which we should then try and think seriously about what principles we should believe. You drew the analogy earlier with observation in, say, physics, and the principles that we would derive from those observations. So I do think it makes sense to try and use things like inference to the best explanation or
00:41:59
Speaker
inference by induction to say, well, I've judged this feature, you know, the feature of being an execution, to be something that makes an action wrong in all these cases, and I've gone through all these thought experiments and I've not found any exceptions. Well, then you should, through these domain-general reasoning processes that we use in science and that we use in, I don't know,
00:42:26
Speaker
cooking, or anywhere else. We always use inference to the best explanation. We always use analogy. We always use inductive reasoning, which is to say generalizing from particular cases. I think we should be using those in ethics too, and I think emotions should be our basic input to those. But that means that once you've come up with principles based on a load of intuitive judgments based on emotion, if your next emotional experience clashes with the principle,
00:42:53
Speaker
a lot of the time you should side with the principle rather than with the emotion. It's a lot like the question in science of when do you trust the theory and when do you trust the observation that clashes with the theory. Sometimes it's going to be an anomaly. Sometimes it's going to be a genuine observation that causes you to overturn the theory.
00:43:15
Speaker
There are certain things you can say, repeat the experiment. If it goes away, then it was an anomaly. But that's actually not good enough because there are lots of cases in which the observation persists. But you still don't want to overturn the theory. Maybe you revise your understanding of the limitations of your equipment. So that's happened a lot in climate science. People have kind of come up with different models of the limitations of, say, satellite data on atmospheric temperature.
00:43:44
Speaker
It's a long and complicated process to figure out whether to trust the quote-unquote observation or the theory. And I think that it's exactly the same in ethics. I think that our ability to use emotion is a key way in which we can test our principles, but it's never going to be an easy question whether to revise the theory or ignore the emotion. Yeah. I really like this analogy. I mean, there's several features.
00:44:13
Speaker
which are nice. One is that the theory is obviously built from the observations in some way, but also the observations are not independent of the theory. Obviously, as you say, an observation may turn out to be anomalous, or it may be something really important that does end up causing you to revise the theory.
00:44:43
Speaker
But even the process of observing is, in the term of art of philosophy of science, a theory-laden activity. You bring your theory to the process of observing, whether that's your theory of telescopy, if I can say that correctly, or just of the lens of your eye. And in this case, I guess one might say, well, yeah, your emotions can also have been formed
00:45:13
Speaker
to an extent, your emotional responses can have been formed to an extent by culture, discussions, the framework or theory that you've created already. So even those emotions, which are sort of the counterpart of observations in this case, are themselves somewhat dependent on the theory. And this is a problem in the philosophy of science, but I don't think it causes us to say, oh, well, let's just get rid of the whole
00:45:43
Speaker
idea of observation, right? They're all theory-laden, so we need a theory. And here, it's the same case, right? These are still a valid input. Perhaps one kind of slight disanalogy, I don't know, is that principles like do no harm or do the greatest good for the greatest number seem, as we said earlier, almost like axioms rather than theories. The theory is like, oh, well,
00:46:13
Speaker
How do I reason up from that? How do I infer from that and figure out what, in this case, is the greatest good for the greatest number? And therefore, we have two very distinct elements. If we wanted to have a framework which incorporated both emotions and principles like that, it would seem somewhat different from the physics case, where you have observations on the one hand and a theory on the other hand.
00:46:42
Speaker
in this kind of ethical case, you seem to have two kinds of observation, or two kinds of primitive: the emotions and some other principles. I'm not entirely sure if that is a complete disanalogy, but my question, I suppose, is: can we have a kind of no-holds-barred sort of ethics where you incorporate both emotions and a principle like greatest good for the greatest number, or do no harm?
00:47:12
Speaker
That is really, really interesting. Yes. Wow. Lots to think about. So yeah. So what you're saying is, well, this axiomatic method in maths is very different from the theory building method in physics.

Integrating Emotions into Ethical Frameworks

00:47:28
Speaker
In the maths case, you check which axioms seem self-evident and then you try and derive things from those. In the physics case, you start with observations and you try to use inference to the best explanation and induction to come up with general principles to explain those observations.
00:47:48
Speaker
to explain what you know is happening in the world. But in the case of ethics, you're saying we have both. There are certain principles that are self-evident, and there are certain cases where we, so to speak, make an observation by means of emotion. And you're asking, could we combine those two different kinds of inputs in order to come up with
00:48:16
Speaker
Yeah, okay. Yeah. And I think your replaying of what I said is really spot on, because it highlights the fact that within physics you can't have a conflict between the observations and the theory. You can have some anomalous data, you might get a conflict at some point, but that's when you've got to change the theory. The theory is only a kind of general organization.
00:48:39
Speaker
It's like a compressed version of all those observations, right? You try and write them down in the most succinct way possible, like an algorithm that would generate those observations. That's your theory. Right, right. So you're thinking, yeah, okay, that's really interesting. I mean, one thing to note to begin with is that that's not always how people have thought about the principles that you appeal to in physics. So that's probably a bit of a toy model of physics.
00:49:09
Speaker
It's certainly different, I mean, I do think there is some difference there. In physics, yeah, you set it up to do your best so that there's no conflict between observation and theory, whereas in principle there's no problem with having moral axioms, let's say, and emotions that are in conflict.
00:49:35
Speaker
And then, yeah, where do we go from there if they are? Or is it just that we'd better not strive to walk down that path in the first place? Right. Well, so, I mean, I do want to return to the fact
00:49:47
Speaker
that we do always have these conflicts between observations in science and our theories. And we do always kind of face that problem of do you alter the theory or do you discount the observation. So, you know, way back when Kepler was figuring out the orbits of the planets, he didn't
00:50:11
Speaker
plot perfectly elliptical orbits; there were all kinds of deviations from that because, you know, his measuring equipment wasn't perfect, and also because the planets were obviously exerting gravitational pulls on each other, so the orbits themselves weren't quite elliptical. So look, there's always this question: those fluctuations in the data, when do you think they're a further complication to be explained, and when do you think they're essentially Kepler knocking into the table?
00:50:38
Speaker
So I would say that you have exactly the same structure in ethics. Sometimes you're going to have a lovely, simple theory. Utilitarianism is beautifully simple: you just have one principle. It's hard to apply in practice, but, just for the listener's benefit, the principle is: act in such a way that you maximize the net benefit on people in terms of pleasure or well-being.
00:51:04
Speaker
Yeah, as you mentioned, you then have a load of difficult questions about measuring pleasure and figuring and predicting which actions are going to have positive benefits on pleasure and aren't accidentally going to lead to suffering. But it's a very simple theory. The question is what happens when someone suggests a case that really clashes with that theory. So one classic one.
00:51:30
Speaker
I'm sure your listeners are all really familiar with trolley problem cases, so let's go for something a bit different. There's this case. If it's literally just the numbers and the quantities of pleasure and pain that count, well, then it seems like it ought to be right:
00:51:48
Speaker
If I'm a doctor and I have five patients who are dying on the operating table and they each happen to need a different organ in order to survive and then someone wanders into my waiting room
00:52:05
Speaker
And they're on their own. No one else is around. It's the dead of night. I ask my nurse to go and, you know, probe a little bit, find out what their life is like. It turns out that they live a really solitary life; no one's going to miss them. Oh, so I can save five people and extend their long and happy lives by kidnapping and murdering and harvesting the organs of this one person.
00:52:30
Speaker
This case is pretty cooked up. It's not going to be something that happens a lot of the time. And a utilitarian who wants to hang on to their theory can really say like, well, that's not a realistic case.
00:52:41
Speaker
But if their principle is supposed to be a self-evident ethical principle, it's supposed to apply to all possible cases, not just the real world cases. So the utilitarian is really pressed to say, well, if the case is as it's described, then yes, it would be right to kidnap and murder and harvest the organs.
00:53:04
Speaker
I think a lot of us would think that that's wrong. You don't have the right to do that to someone. And the fact that we feel that pull
00:53:19
Speaker
It's kind of an observation that clashes with the theory. Now, utilitarians have thought about this problem. So Peter Singer, whom I mentioned earlier, has an article called Ethics and Intuitions in which he argues that we shouldn't trust our emotions. I think he's wrong.
00:53:39
Speaker
But yeah, he has a view. So he thinks that, yeah, we just shouldn't trust our emotions. We should just always disregard what I'm arguing serve the role of observations. We should always stick with what he thinks are self-evident principles.
00:53:53
Speaker
I want to say: where are these self-evident principles coming from? Why should we believe them? Why should we be more confident of the theory than of what we think about the particular case? Do I really know this very abstract principle that Singer and other utilitarians are suggesting better than I know that it would be wrong to kidnap and murder this bloke who's stumbled into the waiting room?
00:54:20
Speaker
Yeah, I just want to say why believe the principles. You're going to have to have a really compelling story in epistemology of how we come to know these principles if you're going to convince me that I should trust them more than I trust my judgments about the particular case there. So I think we're getting a lot of mileage out of this analogy with science. So here's one more connection.
00:54:50
Speaker
One of the billions of quotations attributed to Einstein, but this one, I think, has some basis in one of his lectures. The pithy version is that your theory should be made as simple as possible, but no simpler. So yes, it's beautiful if you can reduce your theory down to one principle.
00:55:17
Speaker
But if that means falsifying the complexity that's really there, then your quest for simplicity has gone too far. So I want to say, yeah, really attractive to have just one principle, but only if it can capture what we have reason to think is the texture of the needs and obligations that structure our lives.
00:55:44
Speaker
Yeah, I think there's actually another interesting point about the simplicity example, which is that the idea of simplicity enters into physics as a kind of heuristic, right? And I think that expresses really well what Einstein was saying. And so maybe some of these things, like the greatest good for the greatest number,
00:56:08
Speaker
could be useful within a broader framework, but not as the overall guiding principle, not as the complete theory that arranges all the observations. Because if we buy into your really nice example there, which I think is really hard to disagree with,
00:56:26
Speaker
if we buy into that, then it just can't be correct. It can't be the universal way of organizing all that empirical data that we get from emotions if we have such clear clashes with it. And it also can't be some kind of separate axiom within the system. But perhaps it could be some kind of heuristic where, all else being equal or something like that, you do the greatest good for the greatest number.

Heuristics and Complexity in Ethical Decision-Making

00:56:54
Speaker
If there's no clash,
00:56:57
Speaker
if there's no clash with other moral knowledge, then this can apply. Yeah, it's a really nice trolley problem. Or not a trolley problem, it's a kind of hospital trolley problem. Yeah, just thinking of a gurney here. I mean, it's like a hobby of every person
00:57:25
Speaker
interested in these things to come up with very obscure trolley problems, and I do have one sort of variant, but I kind of feel like it's going to be quite easy to resolve. So you have a kind of burning trolley that is either going down one path where it's going to ignite a child's teddy bear.
00:57:52
Speaker
So, you know, emotionally, like, oh gosh, no one wants to see that, right? Or it goes down another path where, let's say, there's a big tank of natural gas, right? That's going to explode. It won't hurt anyone, it's just going to burn, maybe slowly for a long time. Okay. But it's going to give off tons and tons of carbon dioxide, right?
00:58:19
Speaker
I feel like our emotional response really takes us down the track of, let's burn all that gas, all that methane and stuff. And our emotional response is probably, in that moment, incorrect. But as you say, we can perhaps quite easily explain this with our other beliefs and our other knowledge about the world, trying to fit it in with all our emotions around climate change.
00:58:50
Speaker
It gives us pause to think, oh, actually, that's going to harm more people. But the other thing that's interesting here is that's a decision where I think the right thing actually depends on very broad and distant and complicated things. For example, it depends on the cost of energy and running direct air capture. Because if you have really cheap energy and you've got lots of direct air capture installations, which hopefully we'll have in
00:59:20
Speaker
I mean, realistically, a couple of decades. You know, we might be in a net-carbon-negative world, right? Maybe I'm being very optimistic here, but I think there's a chance that that happens. And if we were in that world,
00:59:38
Speaker
the correct thing to do in that situation changes. And it's just, I think, fascinating to think that the decision that you take, the sense data and all these things, can be exactly the same, and even much of your ethical framework can be the same, but there can just be contingent facts
00:59:56
Speaker
that are different, and maybe very distant contingent facts. It could be an installation, or the economics of something going on in China, that means that all that carbon will get sucked out of the air.
01:00:15
Speaker
Yeah, I don't have much of a point here other than, yeah, it's complicated, right? But yet it doesn't seem in conflict with what you've been saying, I want to say. I came up with the example thinking, I've got one here, this is going to get James, right? But now I think it can fit into your framework.
01:00:34
Speaker
Yeah, no, it's great. And I hope some listeners can make us some nice illustrations of this incredibly meme-worthy trolley problem you've come up with there. So burning teddy on the one track. And is a child weeping next to the teddy, I'm guessing? Yeah. All the maximum heartstring tugging. Yeah, no, that's a good one. And it segues nicely, I think, into some of the
01:01:00
Speaker
more substantive things I've been thinking about: the limitations of our emotions when considered as a sense for ethics, and the impact that has in an immensely complicated world of global environmental destruction and pollution. So yeah, I think that our emotions are really well
01:01:24
Speaker
kind of evolved and experientially attuned to dealing with the interactions and choices that we make in kind of small interpersonal situations. Almost everyone
01:01:42
Speaker
if they see that a certain course of action is going to destroy a child's most loved teddy bear, is going to appreciate emotionally that that is a reason not to take that action, and to take a different action if one is available.
01:02:01
Speaker
But our emotions are not going to respond in the same way to the prospect of releasing a load of CO2, unless we've really been embedding ourselves in the sorts of conversation and social interaction that allow us to kind of retune our emotions to this stuff. So for most of us,
01:02:29
Speaker
Action makes child cry, that pushes our buttons. But action releases invisible gas, which sets into motion a chaotic and stochastic process that will increase the likelihood of storms in this part of the world, cholera due to depleted.
01:02:52
Speaker
water sources in this part of the world, koala bears being fried in this part of the world. Back to the bears. They're not actually bears. Yeah, that's true. That is an important ethical consideration. So what I'm trying to say
01:03:12
Speaker
is that certain processes, when they're, for one thing, really complex, but also when they involve probabilities instead of certainties, when they involve really long time scales instead of the time scales over which we're used to experiencing the impact of our actions,
01:03:28
Speaker
when they involve unknown, unspecified people... So, you know, we know that these natural disasters that are going to be made more likely are going to involve people. But it makes a difference to us emotionally that we don't know who those people are. Yeah.
01:03:45
Speaker
There are many other things we could point to. So we have much more intense emotional reactions to people that we feel to be members of the same group as us, in all sorts of ways. And that's incredibly problematic. So, in that hypothetical, well, not very hypothetical case: if there's a wildfire in Sri Lanka
01:04:07
Speaker
and hundreds of people die, you can bet that the Dutch newspapers are going to devote page space to two Dutch tourists dying rather than the hundreds of others. So, that's group membership. So I think that the way that I like to conceptualize this is that our emotions have lots of blind spots, right?
01:04:25
Speaker
Our emotions can see really well the ethical significance of things that happen to our near and dear in a concrete way with sort of nice, tidy cause and effect processes. But they're kind of blind to a large extent to the ethical significance of things that are abstract, global, probabilistic rather than certain, and out-group rather than in-group.
01:04:55
Speaker
So yeah, isn't that a problem for me? Yeah, you've given us lots of kind of problems to think about. I mean, I think the example of like distance versus closeness, it's a really interesting one. I'm reminded, I don't know if you've seen it, but there was a study recently where they looked at Republican versus Democrat voting.
01:05:16
Speaker
And they correlated this at quite a granular level, I think at the zip-code level, with the kind of donations that were made to charity. And they found that in zip codes with a Republican majority, people were more likely to donate to causes that were geographically closer. I think it's an intuition that many people have that there's a difference between
01:05:47
Speaker
conservative and more progressive political stances in terms of spending money at home versus spending money further afield. But it was really fascinating to see that check out so closely. And then of course there's the Peter Singer, effective altruist kind of school, who are like, we've got to have a kind of flat
01:06:06
Speaker
function in terms of how we weight across geographical and temporal distance, so that a life a million years from now is weighted the same as a life now. And I've got to say that does kind of clash with my emotions, but I also feel like there's a good reason to doubt that as well. I think there's so much
01:06:35
Speaker
uncertainty in the outcome of our actions, particularly with regard to distance and time. Less so now in space, perhaps, than in the past: I have more ability to influence what's going on far away. I could donate to the relief operation for a fire in Sri Lanka, right?
01:07:05
Speaker
Whereas in the past, they just didn't have that option. But nonetheless, I can't necessarily see the outcome of my donation in the same way that I could see it if I were to support a cause
01:07:18
Speaker
closer to home. And I think that is a rational reason to... I don't mean go hyper-local, I don't mean ignore more distant problems, particularly given that often you can make more of an impact per dollar by spending it further afield. But I think it's unfair to kind of characterise people as being self-centred because they have a kind of preference towards their friends, family and people who are somewhat closer to them.
01:07:46
Speaker
So yeah, I think there's a kind of tug of war, rather, between, again, emotion and reason, and probably where we end up should be somewhere in the middle. But knowing exactly where to fall, that's the tricky thing.
01:08:11
Speaker
Yeah. So, I mean, part of the issue here is that there's a big debate at a kind of high level within ethics, in the academy, that happens under the rubric of the idea of partiality.

Relationships and Ethical Obligations: A Debate

01:08:26
Speaker
So the idea would be, is it sometimes ethically permissible to treat people differently on the basis of what kind of relationship you have to them?
01:08:39
Speaker
And theorists, as you mentioned, like Peter Singer and other utilitarians, and also certain Kantian theorists, hold that we should just be totally impartial. That's the right thing to do. I mean, there are going to be cases where, yeah, it's just a better use of our time and resources to help people that are close to us. But
01:09:01
Speaker
people's relationships to us don't, like, make a moral difference. The other school of thought says that, no, relationships really can. Certain kinds of partiality in your decision making can be ethically permissible, even ethically obligatory. So let's take an example: a parent who wasn't giving special consideration to their children, right? If they really bought the argument
01:09:32
Speaker
that everyone just counts as one. And they said, all right, well, there are children in the world living on less than a dollar a day. In fact, there are, I think, about 400 million children in the world living on, well, actually it's less than $2 a day now with inflation. But if someone kind of really took the argument seriously and
01:10:02
Speaker
spent their resources on these cost-effective charities rather than meeting their children's needs in terms of the food and clothing that are required for proper functioning in Western society, I think we'd think the parent was guilty of child abuse, right? If they just had their children living at an absolute subsistence level in order to donate all of their
01:10:30
Speaker
surplus income to... Sorry, slightly lost. My doorbell has thrown you, I think. Slightly, slightly, yes. But yeah, if somebody donated all of their surplus income, down to the level where the child was really living below subsistence level, because every extra penny would do more good elsewhere in the world than it would for their own children, we'd think that was child abuse, right? We would think that would be ethically wrong.
01:10:58
Speaker
We would think that a parent has an ethical obligation to have special concern for their children. They have more obligations to their children, merely in virtue of the fact that they are their children than they have to people elsewhere in the world.
01:11:16
Speaker
There are limits. We might think that lots of parents spoil their children in a way that doesn't actually make their lives any better, and lots of the plastic crap we buy for our nieces and nephews doesn't make their lives any better, and maybe we could use that money in different ways, we could show our love in different ways and give that money to the most needy people, if that's the trade-off.
01:11:44
Speaker
There's a conflict between this impartial view in ethics and what I think is a much more intuitive view that you owe it to your friends to give them special attention and consideration and help in a way that's over and above what you owe to certain other people as a result of them being your friend.
01:12:05
Speaker
You described that as a conflict between kind of our emotions about the case and reason, but I don't think that's right. So as I mentioned, I like to think in terms of reasoning, because I think the term reason is maybe a little bit too vague for my theoretical purposes. No, I think you're right. Reasoning is a process.
01:12:29
Speaker
There is no place called reason, right? There is no object called reason, whereas emotions somehow are things, right? Reason is not really a thing. So what I wanted to say is, there's only going to be a conflict between reasoning and emotion if you have kind of preloaded your reasoning with impartialist premises. And there I want to say: where are these impartialist premises coming from?
01:12:55
Speaker
What are our grounds for saying that the correct ethical theory should be one in which parents ought not to give special consideration to their children, except when it's a useful means to some impartially describable end? So I just think we should be testing the impartialist ethical theories
01:13:13
Speaker
against our sense of right and wrong in cases. And that leads me to think that certain kinds of ethical partiality are permissible. That's not to say I think nepotism is a great thing. I think we're often far too partial, far too selfish, far too myopic in terms of recognizing the needs of others.
01:13:34
Speaker
I am very convinced by the idea that we should be giving much more money to help people meet their basic needs all around the world than we currently do. Those of us who live in affluent societies. But I wouldn't take that all the way to a fully impartial level. And maybe the source of that thought that we should do our best to improve the lives of people in many places is not based on
01:14:04
Speaker
this utilitarian principle, but just based, again, on, well, I've had the emotional experience of improving someone's life. And even if I can't always see that outcome, I can know that it's there if I do it, and I know that's a good thing, right? You have that moral experience in the kind of knowledge bank, as it were.
01:14:31
Speaker
Yeah, I think you're absolutely right to call me out on the fact that I was sneakily... I wasn't trying to call you out, just to, you know, bring in some distinctions. Yeah, you're absolutely right. I mean, in my mind, I keep on coming back to, well, this greatest good for the greatest number, or some kind of impartial framework like that, seems almost, you know,
01:14:59
Speaker
a priori correct.

Balancing Emotional Responses with Ethical Principles

01:15:01
Speaker
And yet I think, you know, there are so many particular cases that we can find that maybe we just have to abandon that and actually say, yeah, it's a good organizing principle some of the time, but it just doesn't work all of the time. And we have to trust our emotions, at least... I don't want to say we trust them entirely, right? What's the right way of putting this?
01:15:29
Speaker
We don't discount them. And I think here it's probably worth talking about your concept of defeaters, right? Where, let's just buy in, right, and say emotions are an important part of how we get moral knowledge, ethical knowledge, but we know they're not infallible. So how do we make the best use of them, given those two beliefs? How do we make sure that
01:16:00
Speaker
our emotions are correct or fitting, as it were? Yep. Yep. That's right. So one thing I quickly want to say, to get a little pot shot in: you mentioned the idea that these principles, like the principle of utility, can seem a priori correct. But returning again to our science analogy, the principles of Newtonian physics seemed a priori correct to a lot of people. That's good. Yeah. Yeah.
01:16:25
Speaker
I did my PhD on Kant. He thought that Newtonian physics wasn't something that was just a simplifying way of deriving things from experience. He thought that it was a priori self-evident, essentially. Well, he thought there were arguments to get there, but he thought that this was an a priori unshakeable principle. So I want to say: something that seems like an a priori unshakeable principle,
01:16:52
Speaker
actually, we should always be testing it against our observations. And in ethics, that means we should always be going, why believe this principle? Is the principle something that's derivable from our judgments about the particular cases? Should I be revising this principle in order to deal with the nuances? But yes, then we've got this unavoidable question that you raise. Is there anything we can do to get further with this question of when to trust emotion?
01:17:22
Speaker
And as you mentioned, so I have another preprint that I shared with you called Unreliable Emotions and Ethical Knowledge. And in that paper, I tried to really square up to the fact that human emotions are extremely imperfect when considered as guides to ethics.
01:17:40
Speaker
Not just in this blind spot way that I mentioned earlier, it's not just that they kind of fail to reveal things to us that are there. They also often present things in a misleading light. So in the article, I focused on the way in which absolutely random stuff happening in your day, like, you know, did you get a puncture on your bike on your way into work?
01:18:08
Speaker
Have you had enough to eat or is your blood sugar dipping low? All this really random stuff, that changes your mood, obviously. And your emotional response to a given situation that you encounter is going to be a function of, you know, the ethically relevant features of that situation. Who's helped? Who's harmed? Are promises broken? Are they kept? Are they your friend or are they your enemy?
01:18:32
Speaker
But they're also going to be a function of what mood you happen to be in. And that means that even if this process of habituation that I described earlier has gone well, even if your emotions, you've learned to respond most of the time with outrage to stuff that's wrong and compassion to people needing help and that kind of thing.
01:18:56
Speaker
even if that's the case, this kind of random noise that gets injected into the system by fluctuations in mood, driven by irrelevant shit, that's going to make your emotions unreliable. And what I suggest in the paper is that, yes, this is a real problem. Philosophers should stop shying away from it by just talking about ideally virtuous agents, as some people in the literature have done.
01:19:27
Speaker
What we should do instead is really face up to this and think about, well, one way into this is to think about how we deal with noisy streams of information elsewhere in our lives.
01:19:38
Speaker
So philosophers talk in epistemology about testimony. That basically just means believing things on the basis of what other people tell you. And that can be verbal or it can be written or whatever. And there are debates about whether we can acquire knowledge through testimony, what the conditions are in which you're justified in believing what someone tells you.
01:20:00
Speaker
The consensus view would be that yes, you can come to know something simply on the basis of someone telling you that that's the case. But of course, everyone knows that people talk nonsense some of the time. So a lot of the things that other people tell you are false, either because they didn't know themselves or because they're trying to trick you into buying something or they're just having a joke and you didn't realize that this was a joke. So we,
01:20:31
Speaker
How do we fit these two things together? On the one hand, it seems like you can know things by people telling you. Otherwise, think about how little you would know if you never trusted what people tell you. You wouldn't even know that there was such a place as Luxembourg, if you've never been there. You only know that on the basis of someone's say-so.
01:20:51
Speaker
You wouldn't know... I don't know, like, think about all of the animal species that you believe in that you've never seen, or that you've seen but you weren't in a position to judge whether they were members of that species. There is just so much. Even your date of birth, right?
01:21:09
Speaker
You didn't have the wherewithal to make a note of what the date was. Every person's knowledge is massively, massively dependent on what they know from others.
01:21:28
Speaker
So if we philosophers come up with a skeptical conclusion that you can't know things on the basis of what people tell you, well, that just seems like a really implausible place to go. You're going to need a really strong argument. But on the other hand, we all want to acknowledge that testimony is very unreliable. So how do we square this? How do we reconcile the unreliability of testimony with the idea that we can know things on the basis of testimony?
01:21:53
Speaker
Well, it's because we don't trust everything we're told. People live in these environments where there are lots of falsehoods floating around. But the vast majority of the time, we're fallible but okay at sorting out who is competent, who is sincere as opposed to pulling our leg,
01:22:21
Speaker
who is saying something literal as opposed to metaphorical. We just have through the course of our lives figured out roughly what some signals are that tip us off that we shouldn't trust the piece of information in question. Now there are exceptions to this. We live in an age of massive polarization. We live in an age of people believing wild things about the shape of the earth and you name it.
01:22:50
Speaker
But that shouldn't blind us to the fact that nevertheless, people are pretty good at not being taken in when a falsehood is being proclaimed. It's highly fallible, but the picture would be that
01:23:11
Speaker
There is lots of false information floating around, but we are able to filter out a lot of that false information by not lending credence to stuff that's printed in the newspapers that we've learned are unreliable or stuff that people say when they stand to massively financially benefit from having us believe one thing as opposed to another thing.
01:23:35
Speaker
There are so many cases where without having to think about it, we know to double check before just taking this person at their word or do a bit more due diligence before we make an important decision on the basis of this information. We pick up on cues that kind of defeat our default reason for believing what people tell us. That's how we might theorize this.
01:24:03
Speaker
Philosophers talk about defeaters. In epistemology, we talk about, well, you're entitled to, you'd be justified in believing something on such and such a piece of evidence, you know, someone's telling you that it's the case. Unless that's defeated, unless there's some warning sign around that should give you pause.
01:24:21
Speaker
I think the same thing can happen with emotion. Just as, through experience, we learn to tell when someone's being a bit shifty and therefore we shouldn't trust them, or we learn to notice that this is a context where the stakes are really high and the other person stands to benefit financially,
01:24:43
Speaker
I've argued that there are things that we can notice. One thing that I think we can notice that tips us off that we should double check rather than trusting our emotion is just what we know about the limitations of our emotions. So if you've learned through hard experience that sometimes people get really irritable when they haven't eaten for many hours, well then when you start to feel
01:25:11
Speaker
outrage at the fact that your partner has left a plate in the sink and hasn't washed it up again. You feel outraged, so it kind of seems to you as if they've done something wrong, they've violated the expectations of proper conduct or whatever. But then you realize, hang on a second, you haven't had anything to eat. Well, that's one of these mundane signals that should tip us off that maybe we shouldn't fully believe what our emotions are telling us.
01:25:41
Speaker
Yeah, I really like some of the other examples you have as well. One that sticks in my head is negative meta-emotions, partly because that's quite a cool label. So, emotions directed at your emotions. You might feel ashamed of feeling angry, right? Yeah. And that could tip you off that your anger is not really justified.
01:26:09
Speaker
On a related point, I wonder, again, we're buying into this, let's go with, you know,

Cultivating Emotional Awareness for Ethical Behavior

01:26:18
Speaker
let's go with emotions being an input to knowledge. Should we become more attuned to other forms of emotion? One that comes to my head is, like, mono no aware, if I say that correctly, you know, this Japanese idea of transience, the passing of things.
01:26:38
Speaker
And, I mean, there are great lists you can find on the internet of supposedly untranslatable emotions. Should we try to... I don't think they are entirely untranslatable, just untranslatable in the sense that we often don't have a single word for these things. But they're still things that we experience. I think having a word or a category for such things,
01:27:09
Speaker
You know, mono no aware, I suppose, is kind of similar to awe, perhaps, but slightly different. Having an appreciation of those things, and having a greater kind of emotional repertoire, perhaps, could that help us be better people? I mean, in this case, it seems like that's a useful emotion to have in a time of climate crisis.
01:27:35
Speaker
And maybe it would help with, you know, maybe our emotional response to the burning of the coal or the gas, in that trolley problem of mine from earlier, that might have actually helped us develop a different kind of emotional appreciation of that situation. Yeah, I'm not sure, but
01:27:55
Speaker
Yeah, so I definitely think that, because our emotional dispositions can change through experience, that's a really fertile way in which society can change for the better. So
01:28:18
Speaker
again, this is a case where there's a kind of feedback loop, and mutual, hopefully beneficial, effects between our moral theories and the shape of our emotional dispositions, right? So, a century ago,
01:28:36
Speaker
A lot of people felt an emotional aversion to interracial relationships. They thought it was disgusting for a white person and a black person to hold hands or kiss or be romantically involved. People don't really feel that anymore. So our emotions have changed over time as a result of dialogue, as a result of
01:29:00
Speaker
coming to see the commonalities between interracial couples and same-race couples. Similar things can be said about the slow and faltering but real progress regarding same-sex couples.
01:29:19
Speaker
So I do think that yeah, our emotional dispositions can change over time. Our positive experience of things can stop us from experiencing negative emotions. Often it's on us as individuals or on us as groups to really reflect and reframe and kind of regulate those negative emotions that we feel when we kind of start to suspect that they're not serving us well.
01:29:49
Speaker
And yeah, you also mentioned the possibility of cultivating new kinds of emotion. So that might be something really radical where it's almost like seeing a color that you've never seen before. David Hume talks about the missing shade of blue. Maybe this is a totally novel kind of experience, a totally novel, almost like emotion added to your palette of all the different values you can kind of sense.
01:30:16
Speaker
or it might be a kind of fine-tuning of an existing emotion. An example of that would be in Sweden: around 2019, there was a lot of public discussion of the idea of flight shame. And this wasn't just idle chatter. There was a marked decline in
01:30:45
Speaker
how much people flew. Even prior to the onset of the COVID pandemic, the number of Swedes taking flights decreased as a result of people cultivating this feeling of shame towards flying. I would argue the right way of doing it is that you've got to be very sensitive to context here. For people from an immigrant background, it means a lot more to fly to see your family, as opposed to
01:31:10
Speaker
choosing to go on holiday to Mallorca rather than doing a staycation or whatever. There's room for nuance here. But yeah, cultivating an emotion of shame towards a luxurious choice to go flying, rather than one made for a weighty reason, that can have a real
01:31:31
Speaker
I would argue positive effect on the intuitive sense that people use to navigate the ethical landscape. Yeah, I think it's a wonderful example because
01:31:44
Speaker
Again, it's one of these cases where you could have arrived at that conclusion from reasoning on your beliefs, and presumably people did, right? That's how they cultivated that sense of shame. They looked at other beliefs they had about what was right and wrong, and they came to the conclusion that it was wrong to fly, or at least better not to fly.
01:32:10
Speaker
Many people would agree with that, but unless you have the emotion that goes with it, it can be very difficult to act on it. So in a sense, I think, regardless of what priority one puts on emotions as a guide to moral knowledge, they are the gateway to most of our behavior.
01:32:40
Speaker
If not all of it. We might like to think that there's some of our behavior which is completely unemotional, but maybe we're just kidding ourselves. So yeah, I think, in a certain sense, even if people don't agree with everything in this framework, one can't deny that for most people, emotions are the way that things get done. The other thing I like about this is that
01:33:10
Speaker
even if one wanted to say, actually, no, I just want to reason more from some other first principles, which I think are really important. Again, on a daily basis, you're not going to get that much done. For all the small decisions that you take, but even some of the larger ones, you just don't have
01:33:35
Speaker
the time, patience or even cognitive abilities to kind of crank through some of the kind of
01:33:43
Speaker
moral decisions that we'd like to take. And I think we do need to get stuff done. So there might actually be an argument for: well, sometimes we might not get it right with our emotions, right? Sometimes we might not be as watchful as we should be, or we might act on things when we're a little bit hungry and have this needling feeling of doubt. But that can be okay, because we do need to get stuff done. And
01:34:10
Speaker
oftentimes they might lead us to the right decision. Yeah, we're running out of time, so it's maybe unfair of me to throw another spanner into the works, but I'm going to do it anyway, because it might be something other people are wondering. There's one final thing that gives me a little bit of pause about really buying into this emotional framework, and it's that
01:34:38
Speaker
our emotions have probably primarily been generated by lots of evolutionary processes.

Evolution's Role in Shaping Emotions and Ethics

01:34:46
Speaker
And there's some cultural stuff going on there. And there are some really clever, high-level things, like the flight shame case, which is just a wonderful example where we're kind of overriding earlier emotions that we might have had towards flying. We might have found it great, fun, brilliant going on holiday. And we're able, through a
01:35:08
Speaker
very complicated process of reasoning based on other principles and so forth, some of which may have been derived from other emotional experiences, to affect our emotions. Nonetheless, I think a lot of what we feel emotionally will have been evolutionarily mediated. There are really simple toy examples of that: we feel a lot more empathy for dolls or teddy bears, which have big wide eyes and are fluffy,
01:35:37
Speaker
than maybe even for real animals, which are scaly and slimy or whatever. And that shouldn't be the case. And I just wonder: could this all be so infused with evolutionary concerns, the concern just to propagate?
01:36:00
Speaker
Again, I'm having a very toy and basic description of what evolution does. But is that a problem for this framework? Or is it just something that it can work with nonetheless?
01:36:21
Speaker
That's an interesting and massive kettle of fish. But yes, so this is a big topic in moral epistemology, so the field of people thinking about, is there such a thing as ethical knowledge? So people talk about evolutionary debunking arguments, and those are arguments starting from what you said that
01:36:41
Speaker
our capacities result from evolution and trying to use that as material to argue that yeah maybe moral knowledge isn't possible or something like that. So one thing to point out is that every aspect of our minds is some mix of biological evolution and cultural shaping
01:37:07
Speaker
That's it. What else is there? Unless you think there's some kind of divine influence, you're going to think that every aspect of our minds is a mixture of evolutionarily derived starting points and the fine-tuning or further development of those that occurs in the course of our lives.
01:37:36
Speaker
Should we be worried about the things that seem like mathematical truths to all of us? Should we not believe those anymore? Because, well,
01:37:48
Speaker
Evolution doesn't care whether we believe, whether when we're multiplying two four digit numbers together, we get the right answer or the wrong answer. Evolution is what's shaped our sense of which processes to trust there. So should this lead us to be skeptical of our mathematical abilities? I think that sounds crazy.
01:38:14
Speaker
And I think that like sometimes, yeah, I can get into the spirit of feeling a certain skeptical vertigo when faced with these arguments. But yeah, I don't think the fact that a cognitive capacity, a mental capacity stems from evolution is a reason not to trust that capacity. Evolution has equipped us with lots of different
01:38:38
Speaker
mental capacities that work great. A lot of them work great for domains that don't have any direct applicability to increasing survival, like the case of abstract maths or our ability to
01:38:54
Speaker
I don't know, to remember specific episodes from the far distant past. We don't only remember things that are really useful for our survival. So I think there are lots of capacities, and the mere fact that they're evolved... So I'm being a little bit rhetorical here, but I think there's a lot more work to be done if you're going to turn this into an argument against trusting emotions. Let me just summarize the two reasons. One reason is:
01:39:24
Speaker
Why is it a problem for emotions rather than for everything else? And the other thing is, well, why is it a problem in the first place? Why should the fact that something is shaped through evolution be a reason to mistrust it? Yeah. Yeah, no, I think those are very good responses. I mean, I think with the second point, one would undermine everything by trying to dispute that. And I think that would be
01:39:52
Speaker
a mistake. There may be differences with the faculties of reasoning, where evolution has quite clear, probably, interests, if that's the right way of characterising evolution, which it isn't. But one can see why evolution would produce faculties of reasoning that more or less get you to correct knowledge of the physical world.
01:40:20
Speaker
The question is, is the same true? Does evolution produce emotional faculties that get you to correct knowledge of the moral world? And yeah, I'm not sure. But I think, yeah, you're right to say, well, why shouldn't it, right? If we're happy with the one, why can't we be happy with the other? But I think you're right to split out those two questions. Yeah.
01:40:47
Speaker
One thing to bear in mind as well is that when we make that judgment that evolution has equipped us with reasoning skills or a visual system that reliably gets us to the right answer, we're having to rely on the understanding of what the world is like.
01:41:06
Speaker
that we formed using those very capacities in order to judge whether evolution has given us a reliable or unreliable set of reasoning skills or visual abilities.
01:41:22
Speaker
So unless you're using a double standard, that suggests that the right approach here in the ethical case should be to, unfortunately, start out from our best understanding of which things really are right and which things really are wrong.
01:41:41
Speaker
That is shaped by the capacities we have, because we can't come up with the best understanding except by using those capacities. And then sort of check in that way. So I'm all up for that. I'm all up for reflectively scrutinizing, developing our best understanding of what we think is genuinely right and what we think is wrong.
01:42:05
Speaker
And then kind of holding up our best understanding of the human mind against that and going, all right, well, there's a limitation here. And that's what I've been doing, you know, with the very claim that our emotions are limited in that we don't have strong emotional responses to the needs of our out-group members. That's a conclusion that you can only get to
01:42:29
Speaker
by doing this slightly circular, but hopefully virtuously circular, process of using the tools you've got to come up with the best understanding you've got of the thing
01:42:45
Speaker
in question, namely what's right and wrong, and then comparing that and identifying limitations. So I think that's what we're doing when vision scientists look at the evolution of the visual system and characterize it in terms of, well, there was this kind of photoreceptive nerve forming clusters, and then that started to sit within a little dip,
01:43:12
Speaker
And that was useful because it allowed us to figure out what direction the light was coming from. We're really relying on a lot of non-trivial assumptions about what the external world is like and what geometry is like and how light moves in order to say, well, in that case, evolution led us in the right direction.
01:43:33
Speaker
And that's fine, that's totally legitimate, but because what else can you do? You have to use our best scientific understanding of what the world is really like in order to judge whether evolution has equipped us with a good visual system or not. Similarly, I would argue we have to use our best understanding of which things are really right and which things are really wrong in order to judge whether evolution has equipped us with a good emotional sense of what's right and wrong.
01:44:00
Speaker
It's circular, but it's not viciously circular. That's what I would hope. Yeah, I think so. There's probably a role for inference to the best explanation here in both cases as well. One justification for believing that evolution has equipped us well for finding physical facts about the world is that it's just the best explanation of why our theories keep telling us, ahead of time, what we will observe. And that would be
01:44:30
Speaker
a spooky fit if it wasn't. And one can probably run that same argument for the way that our emotions seem to keep leading us towards the right decisions, apart from in certain contexts where we might have good reason to distrust our emotions. Yeah, I apologize for dropping this in; it's a bit like mentioning Hitler in an essay, something you're never supposed to do. Evolution is just one of those things where
01:44:55
Speaker
No, far from it. It's a really interesting topic. And there have been

Conclusion: Pluralist Ethics and Emotional Understanding

01:45:00
Speaker
some really good papers kind of back and forth on this in the last few years. So I really like Andreas Mogensen's work. He's a researcher at Oxford, and he's involved with the Future of Humanity Institute, who've been producing a lot of these long-termist ideas. But his doctoral work was on this very question of whether evolution undermines ethics. And he's
01:45:24
Speaker
I like his work because he's one of these people who does philosophy with a scalpel. Would you say he's a consequentialist, or? He's not, actually; he describes himself as a deontologist. You might file him in that bucket because he's at the long-termist institute.
01:45:40
Speaker
No, so Andreas Mogensen, he thinks that it's really important to act in a way that helps others, but he thinks that there are other ethical considerations. So it's a more kind of pluralist view of what the principles are. He doesn't think, with the utilitarians, that
01:45:57
Speaker
there's only one ethical principle, namely to act in a way that maximizes net benefit in terms of pleasure. He thinks there is a range of different principles, but a lot of the time the main thing that we need to be doing is helping others, and therefore he agrees with a lot of
01:46:17
Speaker
what the utilitarians say. He just thinks that there's all kinds of stuff that could override that, but a lot of the time it isn't overridden by anything else. Yeah, it's hard to argue with that heuristic. Well, this has been really fascinating. I feel like I've taken us over time a bit by trying to close off every objection that one might have about this, but I think it's been fun. It's been fun, yeah. You've done a great job of
01:46:44
Speaker
yeah, really arguing for the cogency of this. And as I mentioned earlier, I think it's really refreshing to be able to pay a bit more attention to one's emotions and not feel guilty about that, as if emotion were something negative.
01:47:01
Speaker
Thank you so much, James. I don't know if you have any final things you'd like to add. Advice for life? Advice for life. I don't have so much of that. But yeah, if anyone wants to contact me about my work, all of the ideas I've been mentioning are
01:47:16
Speaker
works in progress, partly in the sense that these are preprints that we've been discussing for the most part, and partly in the sense that, yeah, I don't think I've come to my final philosophical view. So if anyone does want to contact me: I'm now at Delft University of Technology in the Netherlands, so if you Google James Hutton, Delft University of Technology, I'm sure you'll be able to find my profile, and I'd be very happy to speak to anyone that wants to drop me an email about any of this stuff.
01:48:09
Speaker
Thank you so much for having me on the podcast, James.