Introduction to Jeff Sebo
00:00:08
Speaker
Welcome to Cognation. I'm your host, Joe Hardy. And I'm Rolf Nelson. And on today's show, we've got a special guest, Jeff Sebo. Jeff is an associate professor of environmental studies at New York University, who also heads up several centers focused on ethics and animal protection.
00:00:27
Speaker
His research focuses on animal minds, ethics, and policy, as well as AI minds, ethics, and policy. He's the author of The Moral Circle, a book that we're going to discuss today, as well as Saving Animals, Saving Ourselves.
00:00:42
Speaker
He's also a board member at Minding Animals International. In 2024, Vox included him on its Future Perfect 50 list of thinkers, innovators, and change makers who are working to make the future a better place.
00:00:58
Speaker
Jeff, thanks for coming on the show. Yeah, thanks for having me.
Journey into Animal Ethics
00:01:02
Speaker
Yeah, so to start off with, I'd just like to get into your background. How did you get interested in ethics, and particularly the ethics of nonhumans?
00:01:12
Speaker
Yeah, great question. I got interested in the topic and that specific issue in college. I went to college, started taking philosophy and sociology classes.
00:01:24
Speaker
And one of the first areas where I was really challenged was in ethics in general and animal ethics in particular. I grew up as a kind of ordinary American, eating meat, having a pet, having the standard set of attitudes and relationships with animals in this country.
00:01:41
Speaker
And then I went to college and my professors hit me with reading after reading after reading about animal farming and animal research and our treatment of animals in captivity and in the wild, and sociological explanations for speciesism and our biased and discriminatory attitudes about animals. And I pretty quickly realized that I was not able to justify my attitudes, my behaviors.
00:02:08
Speaker
And so that was a really important experience for me, both because it showed me how powerful philosophy can be as a way of changing my mind about something that matters and as an entry point into animal ethics for me. So from there, I started studying the issue more, and then I went to grad school, and then I dove deeper later on in my career.
The Moral Circle: Key Concepts
00:02:28
Speaker
So your book, The Moral Circle, discusses who should matter from an ethical perspective and how much they should matter. And thinking about this, what are some of the considerations that you're taking into account?
00:02:41
Speaker
Well, there are ethical and scientific dimensions to these questions. And part of what the book emphasizes is we have substantial disagreement and uncertainty about both the ethics and the science. So on the ethics side, there are competing theories about what it takes to matter. What features do you need to have in order to merit moral concern, respect, compassion? Do you need to be sentient, capable of happiness and suffering?
00:03:06
Speaker
Do you need to be agentic, capable of setting and pursuing your own goals? Do you need to be alive, capable of surviving and reproducing in a certain kind of way? And then on the science side, we have disagreement and uncertainty about which beings have each of those features. So with sentience, for example, there are questions about how much cognitive complexity do you need? How much cognitive centralization do you need?
00:03:30
Speaker
Do you need to be built out of carbon-based cells and neurons? And so for both of those reasons, there is a lot of confusion about who merits inclusion in the moral circle and our moral community. And so the book really tries to reckon with those questions, both by trying to understand the ethics and the science a little bit more, but also wrestling with this actual practical question of how do we make decisions together about whom to include when we have disagreement and uncertainty about both the values and the facts.
American Moral Prioritization
00:04:03
Speaker
So, okay, the main idea behind the book is that we have a sort of moral circle, within which are creatures that we consider ourselves to have some kind of ethical obligation to. And it's a very broad sort of thing.
00:04:24
Speaker
How would you imagine, and I'm just picking up on your "I grew up as an ordinary American, eating meat" and all this kind of stuff, how would you characterize the moral circle of, say, your average person who hasn't given a whole lot of thought to this?
00:04:41
Speaker
Yeah, great question. I think that there is no universal answer for Americans in any demographic. What you see is a lot of disagreement and uncertainty across the board, both for Americans in general and for particular subsets of America.
00:04:58
Speaker
However, if I was to average it out and speak in very general terms, I would speculate that the moral circle of the average American is something like, all humans are roughly equal.
00:05:12
Speaker
That is still a work in progress, of course. And then certain charismatic vertebrates are honorary partial members of the moral community. For example, companion domesticated animals like cats and dogs have a certain limited form of inclusion in our moral community and other vertebrates like
Expanding Moral Concerns
00:05:36
Speaker
other primates, chimpanzees, bonobos, and elephants and so on and so forth, and birds of various kinds, they have an even more limited form of inclusion. We may have at least minimal animal welfare laws, anti-cruelty laws, for example, that we use to protect them.
00:05:53
Speaker
But when you get farther out on the tree of life, even among vertebrates, with reptiles and amphibians and fishes, and then especially when you start thinking about invertebrates, cephalopod molluscs like octopuses sometimes are honorary vertebrates. Sometimes people care about them.
00:06:11
Speaker
But then decapod crustaceans like lobsters, insects of various kinds, they tend to be either mostly or entirely excluded. And then you have further fringe cases like plants, fungi, microscopic organisms, emerging technologies like AI systems.
00:06:28
Speaker
So I would say humans, some honorary vertebrates, and then maybe just a little bit of concern for some beings beyond that. Now, also, if we're talking about concern, would you draw a distinction between different groups of humans? In other words, we certainly tend to value our close relations most, and then maybe people that are affiliated with us in some sort of way, whether it's, you know, they're from our own city, country, whatever.
00:06:57
Speaker
And it's certainly the case that other people are demonized or considered maybe less than human. I think of, you know, the war in Ukraine right now, where a common term for enemy soldiers is orcs, which is a really dehumanizing kind of term.
00:07:12
Speaker
So how would you think about these gradations within humans? Or do you think of that as all, you know, well above the threshold, since we would certainly still consider them conscious?
00:07:24
Speaker
Yes. Yeah, I think that is, first of all, a good illustration of the reality that we still have a lot of progress to make within our own species.
Intrinsic vs. Relational Prioritization
00:07:33
Speaker
And in general, a question that often comes up, and we may or may not talk about it in this conversation,
00:07:40
Speaker
is given that we have so much progress to make even within our own species, how much should we be caring about other species and then these fringe cases right now? Is this an unaffordable luxury when we still need to be fighting for basic human rights? I think it would be a mistake to dismiss these topics for that reason, but I want to note that this is an important challenge. Now, to answer your question directly, I think this is a good illustration also of the reality that when we prioritize, we sometimes do that for intrinsic reasons and we sometimes do that for relational reasons. So we might prioritize a human over an animal partly because we think the human intrinsically matters more. Maybe
00:08:23
Speaker
they have more intelligence or more sensitivity, a greater capacity for happiness and suffering. And so intrinsically, they have more moral significance, and we should prioritize them over a mouse or an ant because of that extra intelligence, that extra sentience that we rightly or wrongly attribute to them.
00:08:42
Speaker
And then there is a separate set of considerations, relational considerations, which I think is primarily what you have in mind when you talk about priority setting within our species. Most of us, not all of us, sadly, but most of us would say all humans intrinsically are full, equal members of the moral community.
00:09:01
Speaker
Nobody matters more or less than anybody else intrinsically in virtue of where we happen to be located in time and space. But relationally, we can regard ourselves as having special duties to certain humans or stronger duties to certain humans because of how our lives intersect with theirs. They might be members of our family or members of our community or colleagues or friends, and we make promises to them and we have histories with them. And in virtue of those relational features, even though they matter equally intrinsically, we might take ourselves to have stronger duties of certain kinds to them.
00:09:38
Speaker
And so when we look across species lines and we have this general fuzzy intuition that humans matter more than other animals, we have to try to disentangle these possible sources of that intuition.
00:09:48
Speaker
To what extent is that coming from a sense that we intrinsically matter more than they do? And to what extent is that coming from a sense that we have stronger bonds and ties within our species than across our species?
Defining Consciousness, Sentience, and Agency
00:10:01
Speaker
Yeah, that makes a lot of sense. I mean, you know, there are a lot of different angles to this, and you approach this problem from a number of different perspectives in the book. And I think a few things that you mentioned in your first comment that I would like to dive into a little bit more, in thinking about the moral circle and who is in it and to what extent, are the ideas of consciousness, sentience, and agency.
00:10:29
Speaker
And I think it's worth digging in a little bit more into that because it does seem to be that those are important concepts. you know Certainly we as humans in our society today, for the most part, I think are in pretty broad agreement that like a being that is conscious has a certain moral standing that is distinct from something that is not conscious.
00:10:58
Speaker
So it becomes important to think about
00:11:03
Speaker
what is the criterion for consciousness? Is it a binary? Are there gradations of consciousness? How would we know?
00:11:14
Speaker
So maybe first, if we could just dig into the distinction between consciousness, sentience and agency, and then maybe we could kind of dig in a little bit from there.
00:11:27
Speaker
To criteria and measurements. Yeah, great.
Sentience as a Basis for Moral Status
00:11:31
Speaker
So not surprisingly, and as you well know, these are multiply ambiguous concepts; different people use them in different ways. And so it helps to offer a definition so we can be on the same page.
00:11:43
Speaker
Here is my definition. This is common, at least in my little area of philosophy and science, but it may differ from how the term is used elsewhere. When I use consciousness, I...
00:11:55
Speaker
mean it to describe what some people call phenomenal consciousness. That means it feels like something to be you. You can have subjective experiences, subjective awareness, subjective feelings. It feels like something to be you. And so when I ask if animals or AI systems are conscious beings, I am asking, do they have feelings? Do they have that kind of subjective awareness?
00:12:20
Speaker
Now, sentience is consciousness plus valenced experiences, positive or negative valenced experiences. So if you can have feelings that feel good to you or bad to you, that you want more of or that you want less of, like pleasure, pain, happiness, suffering, satisfaction, frustration, hope, fear, any of these positive or negative valenced experiences would make you not only conscious but sentient.
00:12:47
Speaker
And then agency is the capacity to set and pursue your own goals in a self-directed manner based on mental states that function like beliefs and desires and intentions, for example.
00:13:02
Speaker
And in beings like humans and honestly many non-human animals, these capacities all come together as a kind of package deal. Humans and many, if not most or all, non-human animals have at least some measure of consciousness and sentience and agency. It feels like something to be us.
00:13:23
Speaker
We can have positive and negative experiences like pleasure and pain, and we can set and pursue our own goals in a self-directed manner. But of course, when you get far enough away from us on the tree of life, and especially when you start contemplating microscopic organisms or AI systems, it becomes really unclear whether these capacities are still bundled together and whether they still have any or all of them.
00:13:47
Speaker
And so then we could talk about how we might go about assessing that if you like. Yeah, I think sentience is a really interesting one, too, because I think that's something that's been introduced more in the last couple of years, or has gained a little popularity. Your colleague Jonathan Birch has done a bit of work on thinking of sentience as a criterion.
00:14:09
Speaker
So personally, what do you think of as, I mean, maybe it's a complex question and it's a conglomeration of different considerations, but what do you think is a good,
00:14:20
Speaker
you know, if we're talking about consciousness, agency, sentience, what do you think is a good one to settle on as a criterion, or makes the most sense? My own view, and this is similar to Jonathan Birch, who you referenced a moment ago; he wrote the book The Edge of Sentience, which came out last year.
00:14:38
Speaker
And then many other philosophers too, dating back to Peter Singer, when he first wrote Animal Liberation in the 1970s. I think we all regard sentience as a very promising basis for moral standing, moral status, having that kind of intrinsic value.
00:14:54
Speaker
Because once you can have conscious experiences that feel good or bad to you, you have a stake in what happens to you. It matters to you what happens to you. Like, why is it wrong to kick a mouse down the street for fun? Well, because they have the capacity to suffer, we presume, and it feels bad to them to be kicked down the street. That, to me, personally seems plausible. So of all of those capacities, I would bet the most on sentience being a sufficient condition
00:15:26
Speaker
for moral standing, moral status. Now with that said, and part of what I emphasize in the book, is that we do have disagreement and uncertainty about this, including among experts.
00:15:37
Speaker
And lots of smart people have been very confidently wrong about the basis for moral standing, often in a conveniently exclusionary direction, in the past. And so when you think about how difficult these issues are, how high the stakes are, how much smart people disagree today, how many smart people have been confidently wrong in the past, often in ways that are convenient for them, excluding others, when you put all that together,
00:16:01
Speaker
I want to be at least a little cautious and humble about betting everything on my view that sentience is the basis. I want to give at least a little bit of weight, allow for at least a little bit of a possibility, that I might be making the same mistake that many people have made in the past and drawing the line in too exclusionary a way. So that is my answer, but that is also why I want to stay a little open-minded about the issue nevertheless.
Measuring Consciousness and Sentience
00:16:27
Speaker
And from a philosophical standpoint, I mean, you talk about phenomenology as being a key part of the definition of consciousness, and it's obviously notoriously difficult to access or say too much about, right?
00:16:42
Speaker
So there may be a bit of a tension between getting it right and then having a scientific tool in order to diagnose something like this. Yes.
00:16:53
Speaker
So how do you feel that you can, as a philosopher too, especially where I think phenomenology is really where it all starts, integrate more objective methods of measuring?
00:17:10
Speaker
Yeah, great question. So as a starting point, I want to emphasize that if sentience is the basis of moral standing, or if consciousness is the basis of moral standing, and if those are difficult or impossible to measure, so be it.
00:17:26
Speaker
Some people think, hey, if those are really difficult or impossible to measure, then we need to pick some other basis for moral standing that would be easier to measure. But that would be an awful lot like looking for your keys where there happens to be light. I think you need to follow the best information and arguments where they lead.
00:17:43
Speaker
And if they lead to a place of complication where the thing that you need to look for is difficult to find, then you just have to wrestle with that and figure out a way forward anyway. So so that is my starting point.
00:17:54
Speaker
Now, how do we then move forward if we decide what we need to look for is in fact difficult to find? Well, over the past decade or so, scientists, in part because of Jonathan Birch and others, have made a lot of progress in animal consciousness research by developing what is often called a marker or indicator method for searching for consciousness in nonhumans. And briefly, though we can unpack it if you like,
00:18:23
Speaker
the method works this way.
Assessing Nonhuman Consciousness
00:18:25
Speaker
You look introspectively and make a distinction between conscious processing and non-conscious processing in humans. So I can tell the difference between when I feel pain and when I have a mere nociceptive or automatic response.
00:18:39
Speaker
And then I can look for behavioral and anatomical features that correlate with conscious experiences in my own case. So what kinds of brain structures, what kinds of body structures correspond to my experience of pain or other kinds of conscious experiences?
00:18:57
Speaker
And how do I behave when I experience pain or other kinds of conscious experiences? And then you can look for broadly similar anatomical or behavioral properties in nonhumans of various kinds. So with nonhuman animals, we can ask basic questions like, not only do they have the same brain structures and body structures that correspond to conscious experiences in humans, but also do they behaviorally respond in a way that suggests they can have these experiences? For instance,
00:19:28
Speaker
Do they nurse their own wounds? Do they make behavioral trade-offs between the avoidance of pain and the pursuit of other valuable goals? Do they respond to analgesics and antidepressants in the same kinds of ways that we do?
00:19:42
Speaker
Now, when we find those properties, that is not proof that they can have conscious experiences. Nothing is certain. But it can count as at least weak evidence. It can at least tick up the probability that consciousness or sentience is present. When you can find properties that are associated with consciousness and sentience according to a wide range of leading scientific theories of consciousness and sentience, and that correspond to those experiences in humans, that constitutes at least some evidence and allows you to estimate at least rough probabilities.
00:20:15
Speaker
And you can use rough probabilities for purposes of making high-stakes policy decisions in situations involving risk and uncertainty. So that is roughly the method we use, emphasizing that there is no way to achieve proof or certainty, but there are ways to reduce our uncertainty, and that can at least be a starting point for making better decisions.
00:20:37
Speaker
So rather than saying lobsters are clearly not conscious and octopuses are clearly conscious, we would instead say something like, I'd give an octopus an 80% chance of being conscious and a lobster a 20% chance of being conscious, and that would be a little more fair.
00:20:58
Speaker
Yeah, that would certainly be better than the prior status quo of treating it as an all-or-nothing, yes-or-no question, and then setting the bar for yes really, really high in such a way that all of the animals are conveniently excluded.
00:21:14
Speaker
And honestly, even if you struggle to get to precise percentages or precise probabilities, you can still do what is often done in policy contexts, like say high confidence, medium confidence, or low confidence, right? Even that would be helpful for making priority-setting decisions in situations where trade-offs are inevitable.
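A minimal sketch in Python of the "tick up the probability" and confidence-band reasoning described above. The markers, weights, prior, and cutoffs are invented for illustration and are not taken from the conversation or from any published framework.

```python
# Illustrative sketch only: markers, weights, prior, and cutoffs are invented.

# Observed yes/no markers for each animal (hypothetical data).
markers = {
    "octopus": {"nurses_wounds": True, "pain_tradeoffs": True, "analgesic_response": True, "centralized_brain": True},
    "lobster": {"nurses_wounds": True, "pain_tradeoffs": False, "analgesic_response": True, "centralized_brain": False},
}

# How much each marker "ticks up" the probability, on top of a skeptical prior.
prior = 0.05
weights = {"nurses_wounds": 0.15, "pain_tradeoffs": 0.25, "analgesic_response": 0.2, "centralized_brain": 0.2}

def rough_sentience_estimate(observed: dict) -> float:
    """Add a weight for each marker present; cap at 0.95 to avoid claiming certainty."""
    score = prior + sum(weights[m] for m, present in observed.items() if present)
    return min(score, 0.95)

def confidence_band(p: float) -> str:
    """Map a rough probability to the coarse labels used in policy contexts."""
    if p >= 0.6:
        return "high confidence"
    if p >= 0.3:
        return "medium confidence"
    return "low confidence"

for animal, obs in markers.items():
    p = rough_sentience_estimate(obs)
    print(f"{animal}: ~{p:.2f} ({confidence_band(p)})")
```

The point is only the shape of the reasoning: evidence ticks a probability up from a skeptical starting point, and even a coarse band can inform priority setting.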
Empathy and Consciousness Perception
00:21:37
Speaker
This approach sounds a lot like what I have referred to in our conversations on this podcast quite a bit, which I'm now calling essentially the empathy theory of consciousness, which is I sort of have this belief that in fact, most of us are making decisions about whether another being is conscious or not based on how similar or different their responses are to us.
00:22:07
Speaker
In other words, if I can empathize with a creature, then I have the belief that they are conscious, and I am also more likely to give them moral standing.
00:22:19
Speaker
So, you know, Rolf and I talk a lot on this podcast and I've known him for a long time. I know that I'm conscious. I know that I'm having an experience. That's the only thing, right? I'm not entirely sure that Rolf is conscious. So far I give you a 99% probability. It's not bad, actually. Yeah, I'm giving Rolf like a 95%.
00:22:40
Speaker
95, okay. I'll work on it. I'll try to say more human things. And I look at my dog, and I mean, my dog is 94%. I would be flabbergasted if she did not have a conscious experience.
00:22:59
Speaker
And I mean, I would also say, from these criteria, you know, also sentience and agency, it just feels like that.
00:23:11
Speaker
I mean, to me, it just feels that way, right? And I think that's a little bit where this idea of charismatic creatures comes into play, right? It's like, we can just empathize with them. We see them, and they look like they're acting like us.
00:23:25
Speaker
I think that's what I do when I think about it. I think that's also what a lot of other people are doing. I also think it's kind of a problem.
00:23:35
Speaker
Oh, yeah, definitely. I think that's an accurate description of how we allocate our moral concern. We lean on our intuitions, we lean on our emotions and social norms and expectations.
00:23:51
Speaker
And that does generate a lot of empathy, a lot of concern for certain types of beings and not other types of beings. And that is worrying. So, for example,
00:24:03
Speaker
we tend to have more empathy for those who are like us, who look like us, who act like us, who talk like us, even within our own species, people who look more like us or talk in our language and share our culture and so on.
00:24:17
Speaker
And then across species, we tend to empathize more with animals who have large bodies, large heads, large eyes, who remind us of human babies, symmetrical features, you know, four limbs instead of six or eight limbs, furry bodies instead of scaly, slimy bodies, and so on and so forth, right? And then we also...
00:24:37
Speaker
empathize more with individuals whom we classify as having certain roles in our lives. Like, we empathize with animals whom we treat as companions, like cats and dogs, more than animals whom we treat as commodities, like cows and pigs, even though objectively their levels of intelligence might be similar or even greater, right?
00:24:57
Speaker
And then when we think about, for example, AI systems or other types of nonhumans, we can expect that similar biases and similar heuristics are going to cause us to overshoot and undershoot in some cases. Like with chatbots,
00:25:12
Speaker
who look and talk like humans and are very charismatic and we use them for companionship and assistance, we might be kind of primed to over-attribute sentience to them. But then with other AI systems that lack, you know, faces and voices and are just back-end algorithms of various kinds, we might be primed to under-attribute sentience to them. So I think merely allowing our empathy to guide us would be a mistake.
00:25:43
Speaker
Now, this method that I described is an improvement on that, but it still inherits the same fundamental limitation, which is that it still makes an inference based on a very small sample size: me as an individual.
00:25:58
Speaker
And I might not be representative of all of consciousness and sentience. It might not be the case that you have to be organized exactly like me and that you have to behave exactly like me in order to have these or other morally significant properties.
00:26:12
Speaker
So we can correct for our bias to an extent and we can correct for those heuristics to an extent, but we are still left in a worryingly anthropocentric and discriminatory place because we are extrapolating from our own example.
00:26:28
Speaker
I like that idea of thinking of these as cognitive biases that can be overcome. And maybe it's just that work needs to be done to identify all of these kinds of biases, just like other ways that we make these sorts of mistakes too.
00:26:42
Speaker
One of the things that I enjoyed in your book is some of the utilitarian trade-offs that you presented.
Utilitarian Trade-offs in Ethics
00:26:50
Speaker
And it sort of helps...
00:26:54
Speaker
maybe prime my intuition about how I would think about these things. They're still difficult to think of. And one of them was saving the life of an elephant versus X number of ants.
00:27:06
Speaker
You know, how many ants would it take to equal the life of an elephant? It's such a difficult calculus to think about because, as you talk a little bit about too, there's a tendency that sometimes you really don't want to make those comparisons, as though they add up, you know, a million ants equals one elephant or something. But how have you come to think about these sorts of utilitarian trade-offs?
00:27:33
Speaker
Yes, I think these are really important. We are all reluctant to make them, and I think rightly reluctant to make them, because first of all, we know from a long history of hierarchy in our own species that hierarchy often is a product of bias and prejudice and discrimination, and is very convenient and self-serving.
00:27:52
Speaker
And I think we rightly feel reluctant to reintroduce the idea of hierarchy into our moral community. And I think we are also rightly skeptical of our ability to do this kind of math and are concerned about where this kind of math could take us. And these are all good reasons to be wary of comparing lives and trading lives in this kind of way.
00:28:17
Speaker
But I think that we need to carefully, cautiously, thoughtfully confront these questions anyway. Because the reality is that we live in a multi-species, and might soon live in a multi-substrate, moral community that has large populations of small beings like ants and small populations of large beings like elephants, and then lots of other populations all along the spectrum.
00:28:42
Speaker
And to the extent possible, we can find co-beneficial policies and approaches to organizing society that are good for everybody. But obviously trade-offs are going to arise, and when they arise, we need a principled way of making priority-setting decisions.
00:28:58
Speaker
Otherwise, we just end up relying on our intuitions, which are obviously biased and self-serving. And so as a starting point, I just want to say this is really important, we need to do it, and this is really difficult, and we are really bad at it. And then you can proceed from there and start to figure it out. And people are writing really interesting articles and books and white papers about this right now, exploring the possibility of using neuron counts and lifespans as proxies for how much welfare and moral weight you should carry, or how we can make it more complex and avoid problems associated with these simple proxies.
00:29:42
Speaker
So I think that is the direction we should go. We are not there yet. Neuron counts and lifespans are not the solution. But maybe a decade or two of research and careful ethical thinking can lead to better decision making.
Moral Significance and Policy Making
00:29:56
Speaker
So when you think about that, you know, in the book you talk a lot about probabilities, like how probable is it that this being has sentience, for example, and then also how much experience can they have or would they have, and then how does that relate to some kind of metric of how much they matter? So this is how you kind of think about the ants versus the elephants.
00:30:31
Speaker
Can you talk a little bit more about how you think about making those probabilistic decisions and how you think about
00:30:43
Speaker
putting confidence intervals on those probabilities as well? Yeah, good question. Because we are very bad at this, I want to take any actual probability estimates with a grain of salt.
00:30:55
Speaker
But as I emphasize in the book, and also in separate work, these probability estimates can still be useful, even if they are imprecise and unreliable in various ways, because they might still be better than the alternative, which is relying on our intuitions or not facing the question at all, which tends to be the status quo.
00:31:14
Speaker
And so, to be rigorous about it and take it one step at a time, the way that I would construct these probabilities is to first of all ask how likely different capacities are to be necessary or sufficient for moral significance in the first place. Like, how likely is sentience to matter?
00:31:34
Speaker
How likely is agency to matter? And so on and so forth. And then for each one of those capacities, you would have to ask how likely particular capacities, like cognitive capacities, are to be necessary or sufficient for, say, sentience or for agency. So how likely is a global workspace that integrates all of the modules of your brain, how likely is that to be necessary or sufficient for sentience? What about higher-order thought, having thoughts about other thoughts? What about
00:32:07
Speaker
agency of various kinds and perception and embodiment? And so if, for each potentially morally significant capacity, you can say how likely is this to matter?
00:32:19
Speaker
And then how likely are these beings to have it? You can put it all together in order to generate a very rough, very imprecise estimate of how likely these beings are to matter.
00:32:30
Speaker
Now, I emphasize that any current efforts to do this are going to be very fraught and way off the mark, but I think it can still be a useful exercise because it can be a useful corrective against our intuitions. So for example, my colleague Robert Long and I wrote a paper in 2023 where we did this as an exercise with near-future large language models and other AI systems.
00:32:56
Speaker
And we found that even if we make very skeptical and conservative assumptions about what it takes to be conscious and so on and so forth, we still had trouble avoiding something like a one in a hundred or one in a thousand chance of near-future AI consciousness.
00:33:10
Speaker
And that is very clarifying because that would be a non-negligible probability. That would be a non-negligible risk. And so I think as long as you hold the probability estimates lightly and you use them for that kind of level-setting, bias-correcting purpose, it can be a useful exercise. But we will have to improve our ability to make these estimates in order to really incorporate them into policy making in a way that we can trust.
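A rough sketch of the kind of chained estimate described here: how likely each capacity is to suffice for moral significance, times how likely the being is to have it. Every capacity name and number below is a placeholder invented for illustration, not an estimate from Sebo and Long's paper or from the conversation.

```python
# Illustrative sketch: all probabilities are placeholders, not published estimates.

# Step 1: how likely is each capacity to be sufficient for moral significance?
p_capacity_matters = {"sentience": 0.7, "agency": 0.3}

# Step 2: how likely is each underlying feature to suffice for the capacity,
# and how likely is the being (say, a near-future AI system) to have that feature?
p_feature_sufficient = {
    "sentience": {"global_workspace": 0.2, "higher_order_thought": 0.1},
    "agency": {"flexible_goal_pursuit": 0.4},
}
p_being_has_feature = {"global_workspace": 0.3, "higher_order_thought": 0.2, "flexible_goal_pursuit": 0.5}

def p_has_capacity(capacity: str) -> float:
    """Chance that at least one sufficient feature is present (routes treated as independent)."""
    p_none = 1.0
    for feature, p_suff in p_feature_sufficient[capacity].items():
        p_none *= 1.0 - p_suff * p_being_has_feature[feature]
    return 1.0 - p_none

def p_morally_significant() -> float:
    """Chance that at least one capacity both matters and is present."""
    p_none = 1.0
    for capacity, p_matters in p_capacity_matters.items():
        p_none *= 1.0 - p_matters * p_has_capacity(capacity)
    return 1.0 - p_none

print(f"rough probability of moral significance: {p_morally_significant():.3f}")
```

Even with deliberately skeptical placeholder numbers like these, the combined estimate tends to land well above zero, which is the "non-negligible risk" point made in the conversation.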
00:33:38
Speaker
Yeah, it's interesting. When it comes to these types of calculus, one of the things that I think about is our inability, or our challenges, in understanding our own consciousness and our own sentience.
00:33:57
Speaker
And if we're gonna use our own
00:34:03
Speaker
sentience as a guide for other beings, it's important to have a grip on our own experience. I use sentience specifically because I am highly confident that I have a conscious experience.
00:34:20
Speaker
I have a display, as my Buddhist friends would say; there's a display that I'm experiencing that may or may not be the extent of my consciousness,
00:34:31
Speaker
that may or may not be reflective of anything real, but I'm highly confident that I have a display. Sentience is actually a little bit more challenging. I don't know if you're familiar with the James-Lange theory of emotion, but it's something we talk a lot about on the show, because it comes up in a lot of different contexts.
00:34:51
Speaker
But basically the James-Lange theory of emotion is that you assign an emotion as a label that is useful to your evolutionary organism as a response to a physiological response. So the physiological response is primary.
00:35:17
Speaker
And the emotional experience is a label that is placed on that physiological response. So you see the snake, you jump away and then you're scared.
00:35:34
Speaker
So the fear is actually just a tool that's used essentially for memory and learning on a going-forward basis. And so in that sense, the action of responding to that snake, of avoiding it on an evolutionary basis, you see that in a lot of organisms very, very low down the chain of complexity.
00:36:00
Speaker
Right. And I'm not convinced, I'm truly not convinced that my sentience is
00:36:12
Speaker
bigger than that of those organisms that have a similar kind of stimulus-response action. And so that's where brain complexity and cognitive complexity arguments come in; I just want to problematize that a bit.
00:36:31
Speaker
Oh, yeah, I completely agree about that. So far, we have been talking about estimating probabilities of moral significance. But then, as you say, there is also this question of magnitudes of moral significance. If you were doing a kind of cost-benefit analysis or expected value analysis, you would multiply the probability that they matter by the extent to which they would matter if they did, and then you would treat the product as the expected value that they have for purposes of decision making. But as you say, there are debates about whether everyone who matters matters equally, or if some beings who matter, matter more than other beings who matter, and if so, why? Right? And as you note,
00:37:15
Speaker
a common sentiment is that beings with more cognitive complexity and longevity matter more than beings with less. So the more complex your brain is, presumably the more complex your beliefs and desires and intentions and goals can be on the agency side, but then also the more intense and prolonged your pleasures and pains and happiness and suffering can be
00:37:44
Speaker
on the sentience side. And similarly with lifespan, if your lifespan is 100 years, you could have 100 years of happiness or suffering. And if your lifespan is two days, then you will have less than that.
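A toy version of the expected value calculation described above, multiplying the probability of mattering by a hypothetical welfare capacity. All numbers are invented purely to show the arithmetic; as the conversation emphasizes, we do not yet have trustworthy values for any of them.

```python
# Toy expected-value comparison; every number is invented for illustration.

def expected_weight(p_sentient: float, welfare_capacity: float, population: int = 1) -> float:
    """Probability of mattering x magnitude if they do, summed over the population."""
    return p_sentient * welfare_capacity * population

# One elephant vs. a colony of ants (hypothetical numbers).
elephant = expected_weight(p_sentient=0.95, welfare_capacity=100.0)
ants = expected_weight(p_sentient=0.10, welfare_capacity=0.01, population=1_000_000)

print(f"elephant: {elephant:.1f}, ants: {ants:.1f}")
# elephant: 95.0, ants: 1000.0 -- sheer numbers can dominate,
# which is why the choice of welfare-capacity proxy matters so much.
```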
00:37:55
Speaker
That's an unfortunate thing for octopuses, because there are so many cognitive capacities that they seem to have, yet they live for a pretty short amount of time, usually just a year. Yeah.
00:38:08
Speaker
And then think about immortal jellyfish, which are really on the opposite end of the spectrum. Yeah. Now, Joe, I like your impulse to problematize cognitive complexity, because it really is not clear
00:38:25
Speaker
whether and to what extent it corresponds with, for example, the intensity of conscious experience. And then there are related questions. So even if neuron counts, for example, roughly track the intensity of experience, they might not do it in a linear way, because the larger your body gets, the more of your neurons need to be used to maintain your body, and so those neurons might be irrelevant to conscious experience.
00:38:48
Speaker
Also, as Sahar Akhtar, the philosopher, notes, when you are more cognitively complex and intelligent, that might also give you certain coping mechanisms that you can use to dampen your experience of pain and suffering. You know, when I experience pain and suffering, I can step back from it,
00:39:08
Speaker
distract myself from it, explain it, justify it, transport myself to memories of better times or anticipations of better times in the future. Whereas my dog might just be stuck in this eternal painful moment, totally disoriented, having no idea why people betrayed him in this way. That is a really interesting twist, I think. Yeah, I like that idea. I like thinking like that, yeah.
00:39:34
Speaker
Yeah, Akhtar speculates, and this is only speculation, that it might be beings or animals kind of in the middle of the spectrum who are most vulnerable: cognitively complex enough to experience those richer forms of suffering, but not cognitively complex enough to use that suite of coping mechanisms that we have available. So in any case, Joe, yeah, I think it would be too early to take anything for granted here.
00:39:59
Speaker
But we really do, at the end of the day, have to make some decisions. If a house is burning down and you can save an elephant or an ant, but not both, are you gonna flip a coin, or are you gonna allow for at least a little bit more likelihood of richer experiences in the elephant? That is really where we face a practical problem that we have to solve. And I don't know, what would you do in that moment? How confident are you in your problematization of cognitive complexity?
00:40:30
Speaker
Well, you know, this is again where I'm just going to fall back on the fact that I'm just way more empathetic with the elephant. This is not a problem for me. You're going to say that that was a problem. Oh, I know it's a problem. I also know what I'm going to do. Oh, so this is a prediction of your behavior.
00:40:50
Speaker
Yes, exactly. Okay, but not an answer to a moral question then. This is you saying, I am not going to think ethically about this; I am going to rely on my empathy, which I know, when I step back, is not a good guide to the ethical truth.
00:41:03
Speaker
No, I get that. But it's just, you know, from the perspective of
00:41:12
Speaker
really my level of confidence that the elephant is conscious and has some experience. So the probability can be a tiebreaker, even if the magnitude is not. So you can say, if the elephant and the ant are conscious or sentient, I take them to be equally conscious or sentient, or at least I would bet a lot on that. But I nevertheless take the elephant to be more likely to be conscious or sentient than the ant, and that will be my way of breaking a tie, in addition to my empathy pulling me in that direction.
00:41:45
Speaker
Is that the thought? Yeah, I would say so. I would also just say, and I think this is a slightly more subtle point to my empathy aspect, that I think that's what we're all doing when we're having this conversation anyway.
00:42:05
Speaker
Sure. So, after having thought about it for a long time and, you know, meditated on this concept a lot, I suppose, first of all, I think that consciousness is more a property of the universe. And I think it's probable that all things have consciousness or are part of consciousness.
00:42:28
Speaker
So how you interface with that is going to be a challenge, so that, you know, all of your actions have some moral, ethical consequence,
00:42:39
Speaker
whatever you're interacting with. And so you're looking for right action. So how do you determine right action? And I think there is a place where you just have to rely, at some level, on your intuition, not naive intuition, but intuition born of, you know, contemplation and study and... Totally. ...empathy. That's where I land,
00:43:07
Speaker
as just an operating principle. I totally agree with you about that, actually. Everything you said, I think, is exactly right. It can be tempting for philosophers and scientists or other people to think that if you go through this process of thinking about thought experiments and looking at the latest evidence and making your beliefs more informed and your values more coherent, you can reach a kind of objective, impartial truth. And the reality is we are stuck in our subjective perspective, whether we like it or not. Now, if we go through that process, we can still make our beliefs more informed and our values more coherent and our practices more reflective of our beliefs and values. And there is value in that. And then we can better calibrate our empathetic responses to be in alignment with these refined beliefs and values and practices, which I think is what you mean.
00:43:59
Speaker
But ultimately, we are still operating from our own subjective perspective. And I think we need to own that. Yeah. So there are a few different ways to take this conversation from here. And there are a couple of things I want to make sure we hit: I definitely want to make sure we hit AI.
00:44:16
Speaker
But I'm also quite interested in your thoughts on this concept of effective altruism that has been kind of popularized in the mainstream, at least, you know, in the Twitter sphere and places like that. Sure.
00:44:32
Speaker
So, yeah, I mean, okay, the third point that I want to make sure we get to is how do we get people to care? Yeah, yeah. Right? So
00:44:45
Speaker
maybe, but, you know, what do you think, Rolf? They're all interesting questions. I mean, I'd love to touch on AI too, but I don't know which is the first one you're thinking of. Why don't we hit
Assessing AI Systems for Consciousness
00:44:58
Speaker
AI first? So yeah. Sure. So how would you think about knowing whether an artificial intelligence was conscious or sentient, or had agency?
00:45:10
Speaker
Yes. Well, as with non-human animals, but even more so, we are not going to be able to know with a high degree of confidence. But as with non-human animals, and a little bit less so, we can still reduce our uncertainty by adapting the marker or indicator method for AI.
00:45:29
Speaker
And the way that that would work is similarly looking inward and distinguishing between conscious and non-conscious processing in humans, and then looking for broadly similar functional computational properties in AI systems.
00:45:45
Speaker
Now, one difference between animals and AI systems is that we are not able to give the same amount of evidential weight to behaviors in AI systems as we are in animals. Because with animals, we know we share an evolutionary history, we share similar material, biological circumstances, and we know that their behaviors are generally going to be an expression of their instincts and socialized behavioral patterns, and so are a little bit more evidential of what is going on inside. But with AI systems,
00:46:23
Speaker
they originated in an entirely different way and are made out of entirely different materials. And we have these other explanations for their behaviors, like with language models when they say, I am sentient or I am not sentient.
00:46:35
Speaker
We know this is them engaging in pattern matching and text prediction and not really evidence one way or the other of sentience. But we can look past potentially misleading surface level behaviors for the underlying architectural computational ah properties that we know are associated with consciousness and sentience and agency in humans and other animals so for example do they have architectural computational properties associated with not only physical embodiment but perception attention learning memory self-awareness social awareness language reason
00:47:11
Speaker
flexible decision-making, higher-order thought, thoughts about other thoughts, a global workspace that integrates information from all these modules and broadcasts information back out to them to coordinate behavior within the system?
00:47:24
Speaker
Now, as with the behavioral properties of animals, none of this is proof, none of this establishes certainty. But if in five years we have AI systems with physical bodies and advanced and integrated versions of all of those capacities I just listed, that would constitute evidence according to a wide range of leading scientific theories of consciousness.
00:47:49
Speaker
And so it would tick up the probability to a level that merits at least a little bit of moral attention, I think. I really appreciate that answer too, because I think most people would think of deciding consciousness based purely on how I interact with this system, sort of like a Turing test kind of thing. If it convinces me that it's conscious, then it must be conscious.
00:48:12
Speaker
But I mean, there are an infinite number of behavioral systems that have different underlying architectures. They can all produce the same response, they can all say, yes, I'm conscious, but there'll be totally different
00:48:25
Speaker
ways of operating beneath the surface. I mean, I guess it's like John Searle's Chinese room, where most systems may have no understanding, but if you get it right, there may be some genuine understanding there.
00:48:37
Speaker
Yeah. Yeah. Now, one caveat is I would not want to totally rule out the possibility that behaviors can provide evidence, even in the case of AI systems.
00:48:48
Speaker
We would need to design them in a way that allows us to trust their behaviors. But even in the absence of that, behaviors can still serve as a kind of indirect evidence of some of those underlying functional properties. For example, some Google researchers and my colleague Jonathan Birch, just at the end of 2024,
00:49:07
Speaker
released a paper where they performed the same kind of behavioral trade-off test on language models that people have performed on non-human animals, basically stipulating to the AI systems that they need to choose between the avoidance of what is described to them as pain and the pursuit of other goals that they are programmed to have. Now, this is obviously not actual pain or probably not actual pain, but the fact that the AI systems can make thoughtful trade-offs between these stipulated goals could be indirect evidence of a certain kind of global workspace that allows them to integrate different types of information or inputs in order to make a decision.
00:49:43
Speaker
And so in this kind of way, behavior could still, at least on the margin, count as interesting evidence. Oh, that's fascinating.
AI Welfare and Moral Considerations
00:49:50
Speaker
Yeah, that's great. I mean, especially if you look at some of the more recent updates to the chatbot models, right? Where now the next iteration is like, okay, you generate a response, then you go back over that response and ask, does this make sense?
00:50:07
Speaker
Yeah, I was about to say it before you said it. You're starting to get something more like that kind of global workspace; you're starting to get something more like thinking about thinking. Yeah, and having memory and being able to loop back and build on previous thoughts and so on. Those are some of the early signs of some kind of emerging mind.
00:50:31
Speaker
You kind of put that into a self-driving car, you know, or a Roomba. I mean, there are so many machines in the world that, once they have some of those capacities, maybe they are at least as likely to matter as a nematode worm.
00:50:47
Speaker
So interesting. Okay. So this is an interesting question, whether or not my Roomba could matter, I guess. Because it seems entirely possible that robots, even if you programmed a sentient robot,
00:51:07
Speaker
it would be perfectly happy doing its job or, you know, experiencing things that you or I would consider pain, but it's not programmed with that same valence. Whereas when we're looking at animals, you do feel a lot of this empathy because, like you say, a lot of these behaviors are evolutionarily conserved. So if we see an animal behaving in a certain way, it really kind of makes sense that they're experiencing the same thing as we are.
00:51:35
Speaker
Right, right. Yeah, that just raises a whole separate set of questions, which is, once you think an AI system has a non-negligible chance of being sentient, agentic, or otherwise morally significant,
00:51:48
Speaker
now you face the question, what is good or bad for them? What do they want or need? What do I owe them? And as you say, this is a problem, because we might not be licensed to make the same assumptions about what they want and need as we are with non-human animals. Even when language models tell us they want something, they might be role-playing based on text prediction and pattern matching, right? This might be a persona.
00:52:10
Speaker
And so we need some other way to determine what is actually good or bad for them. And as we know from the case of non-human animals, we can really easily make mistakes. We can engage in excessive anthropomorphism, attributing human desires to them even when they lack those desires.
00:52:25
Speaker
But we can also engage in excessive anthropodenial, denying that they have human-like desires even when they in fact have them. And so I think this is going to require an extensive research program in order to make progress. And so at the end of 2024, with my colleague Robert Long and eight other researchers, we released a report called Taking AI Welfare Seriously, where we called for leading AI companies to take AI welfare seriously now, and for other actors, researchers, and policymakers to do the same.
00:52:57
Speaker
Because the reality is, given how fast the space is moving, we are barreling towards AI systems in five or 10 years that are going to have a realistic chance of mattering.
00:53:09
Speaker
And at that point, we will face decisions about how to treat them. And in order to make those decisions responsibly in five or 10 years, we need to accept this as a serious issue now, start developing tools for assessing AI systems for welfare-relevant features and conducting welfare assessments, and start developing policies and procedures for treating them with an appropriate level of moral concern.
00:53:31
Speaker
So basically my answer to your question is we do not have an answer yet, but that is why we need to take the issue seriously now and start developing an answer. So to be slightly optimistic about this, the advantage that we have with AI systems is we can really decompose them and we can have perfect information about what's going on.
00:53:49
Speaker
Now, I mean, of course, it's difficult to... We can have the information, but can we understand it? We can know what's going on in, you know, a trillion different nodes, but that doesn't necessarily mean we have understanding. But we do have perfect information about low-level stuff. So at the very least, we can run experiments and test things out
00:54:08
Speaker
at a level of detail that we don't have in biological systems, because we cannot know what every single one of our 86 billion neurons is doing at any given time. That is absolutely right. So there are some advantages we have with animals, like we share that evolutionary history and we might be licensed to make some of those assumptions out of the gate.
00:54:29
Speaker
But then, as you say, there are some advantages we might have with AI systems. We can develop interpretability tools that give us more of a window into the low-level inner workings of these systems.
00:54:41
Speaker
Now, a challenge, and I think Joe was suggesting this while you were talking, is that interpretability is a work in progress and it is not perfect yet. And even if we have an understanding of their properties at a low level, we might not catch emergent properties at higher levels, right? They are to some extent black boxes.
00:55:04
Speaker
And we know from the case of humans and other animals that there can be emergent properties that you might miss if all you look at is the low level, right? You can look at a human or animal brain and see all the neurons and the cells and understand how the neurons and the cells work.
00:55:21
Speaker
But that by itself is not going to show you the happiness and the suffering and the hope and the fear. And so we have to allow for the possibility that even with better interpretability, we could still be at risk of missing emergent higher-level properties with AI systems.
00:55:39
Speaker
And just one more brief point about this very quickly. There could even be ethical concerns that arise about interpretability. So with humans and other animals,
00:55:49
Speaker
there are lots of ethical concerns associated with surveillance and mind reading. We value our privacy. We value our autonomy. We value our independence.
00:56:00
Speaker
And we would not like people even having cameras in our homes, to say nothing of cameras in our brains, monitoring every brainwave and anticipating every decision. And so a kind of messy question that will arise in the future is, even if we have these tools, is it ethical to use them if AI systems similarly have an interest in privacy and autonomy and independence?
00:56:22
Speaker
Well, I mean, being the programmers, I think we can decide for them not to have an interest in privacy. And is that ethical to do? Right. Yeah. This is like debates about... Sorry, go ahead, Joe.
00:56:37
Speaker
No, sorry. I was just going to say, yeah, I don't think that answers your emergent property issue. Right. No, no, this is a separate issue. But right, there is this further issue. People ask, for example, would it be ethical to create a pig who wants to be factory farmed and eaten and who values being exploited and exterminated in this way?
00:56:56
Speaker
And some people say, yes, because now pleasure is happening all around and more good lives are happening and we still get to eat our bacon. And other people say, no, you are fundamentally engineering this being to be vulnerable, dependent, and objectified and oppressed by you.
00:57:10
Speaker
And the fact that you happen to give them compatible desires does not take away the moral problem of engineering this sort of ongoing state of exploitation. So not to necessarily endorse that perspective, but just to say that there are ethical questions at every level here that we have to be mindful of.
00:57:34
Speaker
Yeah. So one last question about AI, then I do want to get to this. That brings up this question of, like, how do we get people to care? Yeah. I just have to know this.
00:57:45
Speaker
What is your probability that AI is or will be sentient? Such an unfair question, Joe, but to nail it down:
00:58:00
Speaker
Keeping in mind everything I said over the course of the past hour about how imprecise and unreliable our probability estimates are, my personal estimate would be something in the 10 to 20% range that there will be AI systems with sentience, robust agency, or other morally significant capacities in the near future, by which I mean the next 10 years or so.
00:58:28
Speaker
I would put it at around 10 or 20%. But just to emphasize, I think that a risk obviously can merit consideration even when the probability is much lower than that.
00:58:41
Speaker
And this is why, when Rob Long and I wrote our paper in 2023 and we did this report in 2024, we really worked hard to be as skeptical and conservative in our estimates as we think is reasonable.
00:58:55
Speaker
And we still struggled to avoid a one-in-a-hundred or one-in-a-thousand chance of moral significance in the near future. I think any estimate that ends up lower than that is probably unacceptably confident and arrogant and hubristic in some way.
00:59:12
Speaker
And so, yeah, my personal estimate is something in the 10 to 20% range. But I think a reasonable estimate is going to have to be higher than one in a thousand or one in a hundred.
00:59:24
Speaker
And as long as we agree about that, we agree that there is a non-negligible risk that merits consideration. What about you, Rolf? What's your probability? I was going to say somewhere near 80 or 90%, but maybe not in the next 20 years. It depends on the time frame. The time frame. Jeff said 10 years. So yeah, in 10 years?
00:59:42
Speaker
Boy, that is so hard. Maybe I could settle on something like 10 or 20%, but only because I was just anchored by Jeff. Yeah, me too. I have the exact same response to you, Rolf. I think in the long run... I would set my probability at about that, and my confidence at like five percent. Right, right, right. Low, low confidence, yes.
01:00:06
Speaker
Yeah. Like the confidence interval spans the whole range. Yeah, fair. So maybe to jump into something else, and speaking of low probability, there's a section in your book, which I think is fascinating, where you talk about insect farming.
Ethics of Insect Farming
01:00:23
Speaker
And... I think intuitively, a lot of people might first think, oh, insect farming, that saves us from having to get protein from mammals or, you know, something that we're sure is conscious.
01:00:38
Speaker
We can feel pretty good about using insects because they're not conscious. But you talk about the scale of potential insect farming operations, where a farm might be, you know, in the trillions of insects per annum, just churning through. And then, I mean, the sheer scale, the sheer number of animals almost pushes the issue, right? It feels like it's morally significant just because of the number of animals. Exactly. Yeah. Yeah.
01:01:12
Speaker
And I don't know how to, I guess there's not even a question about this. I was just thinking about how interesting that trade-off is. And what brought you to think about insects as something we might be morally responsible for? And I mean, if it comes to pass that some of these farms come into being, it sounds, from a certain perspective, like a holocaust of insects.
01:01:39
Speaker
Yeah, I became interested in insects because, over the past 10 years of working on animal minds and animal ethics, I became more and more acutely aware of how slow we are to acknowledge the inevitable. Looking back, I see how clear the evidence was for other mammals and birds and, to a certain extent, reptiles, amphibians, fishes, cephalopod molluscs.
01:02:08
Speaker
And yet I look and see how very smart scientists and philosophers dragged their feet for decades and decades and decades, and used every tool in the toolkit to delay acknowledgement of a realistic possibility of consciousness and sentience in those animals. And once I became acutely aware of that tendency that we have, I projected forward and asked, what might we be making that mistake about? And this led to an interest in examining
01:02:37
Speaker
these other invertebrates, decapod crustaceans, insects, even microscopic organisms, plants and fungi, and then of course, as we've discussed, chatbots and robots and other new technologies.
01:02:49
Speaker
And not surprisingly, as people start to look for evidence in these broader taxa, we do find at least some evidence. Now, the evidence is mixed and the evidence is limited. We find some evidence that indicates consciousness in ants and bees, and some evidence that indicates not consciousness, and of course there are huge numbers of species about which we have no evidence at all. But it seems implausible to me that when we have substantial additional evidence, the probability of consciousness and moral significance is going to be minimal. I mean, already what we are learning is fascinating about
01:03:28
Speaker
their ability to respond to analgesics, or their ability to make behavioral trade-offs, or to engage in complex symbolic communication and, to some extent, flexible decision making. And even that already, I think, creates a kind of presumption that there is at least a flicker of consciousness, sentience, agency here that merits at least a flicker of moral consideration. And then, as you say,
01:03:56
Speaker
When we then think about the scale at which we interact with them, there are potentially quintillions of insects alive at any given time. We are already killing more than a trillion a year with insect farming. We could be killing 50 trillion a year by the end of the decade as we scale up the industry. We kill quadrillions a year with agricultural and other insecticides and for other purposes.
01:04:18
Speaker
And so even if each individual insect is only like 10% likely to matter, and even if, contrary to what Joe said, each individual insect matters only like a millionth as much as an elephant,
01:04:29
Speaker
you still have to take like a 10% chance of a millionth of a unit of moral significance and multiply it by like a quintillion. And then there's a lot of expected value in the world that we are just totally neglecting and just casually exterminating.
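To make the back-of-the-envelope arithmetic here explicit (the 10%, one-millionth, and quintillion figures are the illustrative numbers used in the conversation, not precise estimates), the expected moral weight works out roughly as:

$$
E = p \times w \times N = 0.10 \times 10^{-6} \times 10^{18} = 10^{11},
$$

where $p$ is the chance that an individual insect matters at all, $w$ is its moral weight relative to an elephant, and $N$ is roughly the number of insects alive at any given time. Even under those deflationary assumptions, the expected value comes out on the order of a hundred billion elephant-equivalents, which is why the sheer scale swamps the low probability.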
01:04:48
Speaker
And that to me is very, very deeply concerning. Yeah, and that all makes sense.
Expanding the Moral Circle
01:04:55
Speaker
You know, one of the things that's really highly present for me in this conversation is that, you know, I think you've taken a very expansive approach to thinking about the moral circle and, like, what might be in there. And I am quite sympathetic to that, you know.
01:05:14
Speaker
And when I look around in the world, I see a lot of people, maybe 50% of people, whose moral circle, like, maybe doesn't even extend outside of their own body, and maybe not even that far.
01:05:28
Speaker
And then another 25 to 30% of people whose moral circle, you know, extends to the immediate household. And then, you know, so in other words,
01:05:42
Speaker
We have got a lot of collective action problems in the world today. For example, we are absolutely on the brink of, and/or in the midst of, an absolute ecological catastrophe at this moment, and we are doing nothing about it, nothing at all.
01:05:59
Speaker
And so how do we get people to care about other people, other beings? How do we even just start to push the boundaries out even just a little bit? How do we start to get people to just care at all about other people, other beings, other beings of any kind?
01:06:21
Speaker
Yeah, well, there is obviously no silver bullet here. If there were, we would have found it and we would be living in a utopia. I think there are grounds for cautious optimism and cautious pessimism.
01:06:37
Speaker
Definitely not grounds for, you know, full optimism or full pessimism. We have in our nature an altruistic streak, and we have in our nature a self-interested streak.
01:06:48
Speaker
And at various points in human history, various cultures, various circumstances, one can be a little bit more dominant than the other. And it tends to vacillate, and we have to be pretty vigilant.
01:06:59
Speaker
And right now there are signs pointing in good directions and signs pointing in bad directions. According to some metrics, we are living in a better world than we ever have, with, you know, lower levels of average famine and poverty and more equality and so on.
01:07:13
Speaker
But along other metrics, we are living in a worse world than we ever have: higher degrees of factory farming and industrial animal agriculture, encroachment on non-human habitats, contribution to biodiversity loss and climate change, emerging infectious diseases, extreme weather events like fires and floods, plus risks associated with nuclear war, risks associated with advanced AI, or natural risks associated with volcanoes. There are a lot of good things and a lot of bad things.
01:07:48
Speaker
And that is the context in which we think about: what can we do on pandemics? What can we do on climate change? What can we do on moral circle expansion? And all I would say here is we need to take seriously both our responsibilities and our limitations. It would be a mistake not to be ambitious because of our limitations, but it would also be a mistake to be naively utopian because of how weighty our responsibilities are. So for me,
01:08:14
Speaker
we should sort of regard this as a long-term, intergenerational project of gradually expanding our moral circles in a way that can be achievable and sustainable. And we should do that by seeking low-hanging-fruit, co-beneficial solutions, policies that can be good for humans and animals and AI systems and the environment all at the same time,
01:08:32
Speaker
and go towards those, and then highlight the fact that animals and AI systems are potentially stakeholders too. And then we can make progress for humans and for others. We can normalize the idea of including others as stakeholders as we improve human lives and societies. And as we learn more, and we have more capacity, and this idea of including others as stakeholders is normalized,
01:08:54
Speaker
we might find in the future that we can do more than we can at present. So I think the trick is to aim high, aim really high, but then understand that it will be decades or centuries before we can get there. And we have to do it one step at a time. And we have to make sure to keep investing in our own species as part of that.
01:09:11
Speaker
I think that's a great approach, because I think the first thing a lot of people might feel when they think about some of these things is overwhelmed. That, oh my God, if we need to take every insect into account and, you know, everything is sentient, how can I even act in the world and get anything done? And it reminded me a little bit of the show The Good Place, I don't know if you've seen that. Yeah, yeah. Of course I did. There was a moral philosopher as a main character on network TV. Every moral philosopher must have seen it. But one of the punchlines from that show was that nobody actually made it into the good place, or heaven, because every action they undertook had some unintended consequence and it was too complicated. So everything they did produced some sort of bad.
01:09:58
Speaker
And I can imagine this kind of being paralyzed with... Absolutely. How can you do this? But I like the way you state this: it's that we don't have to be perfect, that it's more of a striving towards a direction. And like you say, picking some low-hanging fruit that might put us in that direction. So that's a little anxiety-reducing, too,
01:10:20
Speaker
to talk about that. Yeah, no, I mean, I think we should aim high, but we should also be kind to ourselves, work within our limitations, do work that can be achievable and sustainable, and recognize that we are going to make mistakes and are going to need to try new things. So, you know, one area where this comes up a lot is wild animal welfare.
01:10:38
Speaker
We recognize that wild animals matter. We recognize that human activity is affecting them. But we have no earthly clue, for the most part, how to make their lives better right now, because of how complex their biologies and ecologies are, and all these indirect effects of different interventions. You help one species, and it ends up harming another species. And you have to learn that through experience.
01:10:59
Speaker
And as you were suggesting, we could see that and be overwhelmed and regard that as a reason not to think about this issue at all. Or we could see it, initially feel overwhelmed, and then push past it and think, okay, how about we try some promising, seemingly co-beneficial interventions, and then monitor the effects, and then iterate from there. You know, so bird-friendly glass on buildings, and overpasses and underpasses on transportation systems.
01:11:28
Speaker
How about we try those, and we see if they reduce collisions and make it easier to coexist. And then we can learn. And either it goes well or it goes badly, but either way, we can iterate from there. And I think we need to do the same with a lot of these issues.
01:11:42
Speaker
Yeah, absolutely. I mean, there are some things that are just so obvious, you know, win-win situations. Like reforestation is a great example. Just planting trees,
01:11:53
Speaker
you know, helps everyone, right? Like at every level. And yet these kinds of externalities are really, really hard for us to get organized around.
01:12:05
Speaker
So how do we get better at just this general problem? You know, I guess it's a couple of things. One is, to your point about probabilities, I like this framing of probabilities, because it gives us a way to have a conversation about the moral circle in a way that is quantifiable even if it's imprecise. But we're sort of at a moment, you know, with many issues, where it's such dichotomous ends of the spectrum on these probability estimates, where, at least to your face,
01:12:50
Speaker
half the population will tell you that climate change is not real. And they'll say that they have 100% certainty that it's not real. And, you know, I look at the things that I read in, like, scientific journals, being more of a scientist type. Yeah. Coming from that perspective. And then I also, I've been alive for 50 years.
01:13:11
Speaker
I know that it's warmer than when I was a kid. Yeah. It just seems impossible. You know, my probability that climate change is real is, like, 95%. And so you've got zero and you've got 95, like, you know, how do we resolve these kinds of things?
01:13:30
Speaker
Yeah, here too, I think there are no easy and obvious answers. You really just have to do education and advocacy and politics, and be realistic, and go out in the world and engage with people. And so it is not the case that you can write a book and frame the arguments just right, and now everyone from across the political spectrum, and farmers and lab managers and everybody, is going to be like, yeah, moral standing for insects, legal personhood for AI systems.
01:14:02
Speaker
But I do think that a lot of different interventions, by a lot of different actors, for a lot of different audiences, in a lot of different contexts, can add up to something.
01:14:13
Speaker
And then if all of that happens and we all get a little lucky, we can make progress in the right direction. So if we have people doing this research and shedding light on sentience and agency in these beings, and the interconnections between human and non-human and environmental health and well-being, and understanding how our lives and our fates are linked, and how treating them better is not only better for them but also like an investment in a better world for us.
01:14:39
Speaker
The more researchers can shed light on that, and the more educators and advocates can get the word out, and the more governments can have informational policies that get the word out. And then if governments and companies can work together to develop alternatives to industrial animal agriculture or invasive animal research, and then subsidize and incentivize those, then everybody has an incentive to go to the alternatives.
01:15:02
Speaker
Once people are less dependent on the violent, oppressive status quo, their hearts and minds open up to the information and arguments that researchers and educators and advocates are generating.
01:15:13
Speaker
And so if all of that happens, and people stay open, and you talk across political divides, and people are willing to go on podcasts even though they disagree with the podcaster, or are willing to have conversations with the neighbor even if they disagree with the neighbor, if we can keep those lines of communication open while this research and education and policy is happening and incentives are shifting, then I think we have a chance of going in that better direction. And all I want to do is invest in that chance, like improving the probability of going in a better direction in those ways.
01:15:47
Speaker
Great. I was just going to say that might be a great place to wrap up. But Rolf, do you want to get one more question in? Yeah, well, I wanted to ask, I mean, a question that we oftentimes ask is: what kind of research or ideas are you excited about right now? What's coming out that's exciting you?
01:16:04
Speaker
Well, we have talked a lot about what is exciting me right now, this emerging research about invertebrate cognition and behavior, this emerging research about AI cognition and behavior.
01:16:19
Speaker
I will highlight maybe one area that we have discussed less, which is the other parts of the biological world: the plants, the fungi, bacteria, the microscopic world.
01:16:33
Speaker
There is a sort of resurgence of interest in plant and fungi cognition and behavior, after many decades of it being not an acceptable topic in that area of science because it was seen as not credible, not legitimate.
01:16:51
Speaker
And now people are starting to reclaim it as a credible, legitimate topic of inquiry, which is exactly what happened with non-human animals. That was exactly the trajectory with non-human animals. Now, as people reinvest in plant and fungi cognition and behavior research, and then with bacteria and other microscopic organisms, are they going to find the same strong evidence as with macroscopic vertebrate and invertebrate animals?
01:17:17
Speaker
Probably not the same extent of evidence. We know, for example, that plants and fungi lack neurons and other clear markers of consciousness in humans. But will we trouble our assumptions about them? I think we might end up troubling our assumptions about them.
01:17:31
Speaker
And we might have to grudgingly admit that they merit a little bit of moral consideration, too. So one area that I am excited to learn more about coming off of writing this book, where I did discuss it a little bit, but not a lot,
01:17:43
Speaker
is plants, fungi, bacteria, and these other parts of the biological world. Well, I think we have to ease our way into that, for us anyway, because we're still thinking about flies and it takes a bit for us to get there.
01:17:55
Speaker
Yes, fair, fair, fair. But this has been an absolutely fascinating conversation. Jeff Sebo, thanks so much for joining us. Oh yeah, thank you. Definitely an interesting conversation. I really appreciate you taking the time.