Introduction to Tobias Baumann and the Podcast's Focus
00:00:00
Speaker
Tobias, welcome to the Future of Life Institute podcast. I'm glad to have you on. Thanks for having me. All right. So perhaps you could introduce yourself to our listeners. Cool. Yeah. So my name is Tobias Baumann. I'm a researcher and a co-founder at the Center for Reducing Suffering, which is an organization that is working on trying to
00:00:26
Speaker
find out how we can best reduce suffering, taking into account all sentient beings and the long-term future. I'm also the author of a book, Avoiding the Worst: How to Prevent a Moral Catastrophe, which talks about the question of worst-case futures and what we can do now to prevent them from happening. And I think those are the ideas that we're going to explore in this podcast episode.
Understanding 'S-risks': Worst-case Futures
00:00:53
Speaker
Definitely. So let's start with the central idea here, which is suffering risks, or s-risks. What's the best way to frame this idea? What's the best way to introduce it? So very broadly, s-risks are just worst-case futures that contain suffering on an astronomical scale, suffering on a scale that vastly exceeds everything that has existed so far.
00:01:18
Speaker
So, I mean, there are, of course, all sorts of technicalities about how much suffering there has to be to count as an s-risk. But I think that this is actually not so important, because it should be clear what we're talking about: worst-case futures with a very high level of suffering. This is certainly a grim topic and a dark topic.
Background in Cause Prioritization and Effective Altruism
00:01:42
Speaker
So what attracted you to thinking about suffering risks in the first place?
00:01:47
Speaker
Yeah, I mean, I would say that my background is kind of in cause prioritization, sort of motivated by EA, like trying to do the most good. And I think that preventing those worst-case scenarios is what it means to do the most good, from my point of view, at least. So that's sort of how I arrived at the topic.
00:02:11
Speaker
Perhaps we could give some examples of what you're talking about, so it's easier to understand.
Examples of Suffering Risks
00:02:17
Speaker
What would be some examples of potential suffering risks? This is actually a quite tricky question because
00:02:27
Speaker
When you are giving specific examples, there's always a risk that the specific example sounds quite far-fetched. And in reality, most s-risks are probably unknown unknowns, and the risk is not necessarily concentrated in a single scenario. So there's that caveat to keep in mind. But of course, it's still helpful for the listeners to hear about something. So one scenario could be
00:02:52
Speaker
a malevolent dictator, like imagine a future Hitler that establishes a permanent global Earth-spanning totalitarian regime, perhaps even with access to advanced technology. You can see how that is a worst-case outcome that could result in a lot of suffering. You can look at Black Mirror episodes. That provides some inspiration and more examples of dystopian futures of all sorts.
00:03:21
Speaker
But actually, it doesn't have to be something speculative or sci-fi-ish. If you just imagine that humanity at some point expands into space, and we just continue the forms of suffering that we have right now, for instance our exploitation of animals, perhaps along with other harms, then that would already qualify as an s-risk if it happens on a larger, astronomical scale. There's some connection between
00:03:48
Speaker
human technology and the stakes at which we are operating.
Human Technology: Risks and Opportunities
00:03:53
Speaker
So would it be fair, in your view, to summarize human history as ever-increasing technological capacity and, with that, ever-increasing stakes? We can now do more than we could in 1500, and whatever human activity was going on in 1500 was lower stakes than today. What's the connection there with suffering risks?
00:04:16
Speaker
Yeah, I think it is true what you're saying, in broad terms at least, that the more powerful our technology, the higher the stakes are, for better or worse, because we can use this technology to reduce suffering or to increase it. An example would be factory farming, which I've already mentioned: it would just not be possible on the scale at which it is happening if you only had medieval technology. So that would be one example of this sort of dynamic.
00:04:45
Speaker
It is not entirely straightforward because not all technology raises the stakes. For instance, widespread contraceptives might kind of lower the stakes if they result in a long-term plateau or decline of human population. And there's also a lot of technology that doesn't directly have that much to do with how much suffering there is.
00:05:10
Speaker
And of course, it can also be used to reduce suffering. So there are those caveats to keep in mind. I'm generally careful that I'm not saying that technology is per se something bad. The point is merely that it raises the stakes and therefore increases the risk of those worst cases.
00:05:28
Speaker
Perhaps we are most interested in those technologies that are most asymmetrical, meaning that they have the potential to increase the risks more than they can increase the benefits.
Artificial Sentience as a Suffering Risk
00:05:40
Speaker
What technologies are you most worried about when you're thinking about suffering risks? So one example is artificial sentience. If it is possible to run complex computer programs that become sentient, or sentient simulations of that sort. I mean, this is explored in the aforementioned Black Mirror episodes; they have some scenarios that go in this direction. This is
00:06:07
Speaker
sort of an s-risky technology, because it makes it much easier to create a lot of minds, a lot of beings, and because there might be, or there will likely be, a lack of moral concern for those beings. They might not have any power or political representation. Those are the sort of risk factors
00:06:28
Speaker
that could make s-risks more likely. I mean, of course, I'm not saying that this will happen or even that it's likely. This is an example of a technology that would be risky because of this dynamic of having potentially lots of beings that are easy to create and a possible, predictable lack of moral concern for the well-being of those
00:06:52
Speaker
artificial minds, or whatever you call them. So anything involving sentience or consciousness is speculative, and it's difficult to reason about. We don't have as good a grasp of this concept as we do of, say, intelligence. In your view,
00:07:11
Speaker
How likely is it that artificial sentience will develop as we are, say, training large AI models? Is this something you're worried about coming along with increased intelligence in AI models? Or do you foresee a scenario in which we would have to deliberately engineer artificial sentience for there to be a problem?
00:07:35
Speaker
That's a very interesting question. I actually like the focus on whether or not it will be developed, because what a lot of the discussion in philosophy and so on centers around is whether it is possible in principle. I think the answer to that is likely, yes, that it is possible in principle to have artificial sentience, but the key question is whether or not future technology will actually evolve in a way that results in
00:08:03
Speaker
such sentient entities. I mean, it's possible in principle to build underwater cities, an underwater metropolis, but that doesn't prove that we're going to do it. So the key question is, how will future technology evolve? And I think that's very, very uncertain, so I'm sort of neither confidently predicting
00:08:25
Speaker
there will be artificial sentience, nor am I confidently predicting that there won't be artificial sentience.
Ethical Treatment of Sentient AI
00:08:31
Speaker
Generally, I think we have great uncertainty about what the future will be like. And I think that's probably going to be a recurring theme in this podcast. But what you were saying is that, like the question is, do people try to create artificial sentience? Currently, it seems to me that the focus is just on
00:08:49
Speaker
using AI to solve problems. And people are neither deliberately trying to create artificial sentience, nor are they deliberately avoiding it. I mean, I don't know what that tells us about how likely it is that the result will be conscious computer programs or what have you.
00:09:10
Speaker
So the problem is, of course, that we only have sort of one example to draw from, which is biological animals and humans that are conscious as far as we know.
00:09:24
Speaker
Of course, the evolution of artificial intelligence is not at all the same as biological evolution, where you have animals in which pain and pleasure are used by evolution because they are useful for learning. Then you can ask the question: is there something analogous in the sort of AI that we currently have? If you have a language model that
00:09:46
Speaker
just predicts the next word or something like that, it seems quite different from what happened in natural evolution. The fact that it's different does not prove that such systems can't be sentient, but it's kind of difficult because we only have this one example to draw from, and so how exactly would we know? We can try to extrapolate from how we are already treating our computer programs to how we will treat them in the future, because you said something interesting, which is that
00:10:13
Speaker
you predict or perhaps you believe that we will predictably not care about artificial sentience, or at least that's a scenario that you fear would come about. On the one hand, we treat video game characters very badly. We engage in combat with them, but perhaps we do this because we believe that they aren't sentient. On the other hand, some chatbots we might
00:10:44
Speaker
might treat as if they're conscious, even though they aren't, in my opinion, or perhaps they are. And so what can we learn from how we're already treating AIs about how we will treat them in the future? Yeah, that's a good question. I think the sort of gut feeling that people have about whether or not a program like
00:11:08
Speaker
GPT or DALL-E or the like is conscious is probably not a very good indicator of whether it actually is conscious. So in terms of whether or not people will care about it, it seems that this depends more
00:11:21
Speaker
on specific features that aren't actually about whether the system is sentient. If you put the program into a robot that looks cute or something like that, then maybe people are more likely to care about it. Or if it sounds human-like, then it becomes more likely that people would care.
00:11:42
Speaker
But still, overall, it sounds like a situation that could be risky. If you look at our track record in terms of how we treat animals, I think it is at least reasonable to be concerned about how we're going to treat those artificial minds, especially since they might be basically completely at the mercy of whoever runs those systems. And even if the average person cares about them, you might still have some minority of sadists or so that will try to
00:12:11
Speaker
use those systems for less pleasant purposes. I don't know.
00:12:16
Speaker
Perhaps the scenario you're worried about is that we will develop very smart AI systems that can help us achieve tasks, but that sentience comes along for the ride when we develop intelligence. And so we won't care that these systems are perhaps suffering during their training or during their deployment, when we're using them, because they are simply so convenient to use. Is that the scenario you're worried about?
00:12:44
Speaker
Yes, I mean, the example of animal exploitation is a case where it's just kind of
00:12:51
Speaker
economically useful because people want to have cheap meat, it's kind of useful to factory farms and slaughterhouses. And a similar thing might be happening with artificial sentience. Maybe it's just useful to have your personal AI worker assistant and people are just not going to care much about the well-being of that. If it is sentient, it is also
00:13:19
Speaker
quite simply possible that we will be wrong in our assessment of whether or not such systems are sentient. That's similar to how lots of philosophers in the past were confident that animals are not sentient. So that's also a possible risk, that we might be wrong in our assessment. Of course, you can be wrong both ways. You can also mistakenly attribute sentience to a system like LaMDA when it isn't sentient. I think it likely isn't, but yeah.
00:13:48
Speaker
This is perhaps also something we could be worried about if large companies are developing AI systems that are engineered to feel conscious to us, perhaps because
00:14:02
Speaker
It's a great product to sell an artificial friend or an artificial romantic partner. These AI models, these chatbots will then
Balancing Present and Future Suffering
00:14:12
Speaker
feel conscious to us. You can even imagine building these AIs into robots. And we would at least waste resources caring for these beings if they aren't conscious. If we put aside the question of
00:14:27
Speaker
how we're treating them, what this tells us about ourselves, and whether you can be a good person if you're mistreating even an unconscious robot. The wasting of resources because you wrongly think something is sentient is maybe not a very great concern from my point of view. But maybe a greater risk is that if you cry wolf too often, then people will not take it seriously when there actually are conscious or sentient
00:14:57
Speaker
systems, because then it will just be seen as something wacky, something that uninformed and crazy people talk about, claiming the systems are conscious when they clearly aren't, and it's just ludicrous to think that they are. That sort of attitude might become widespread in the AI community, and that would be a very bad thing. I think it's important to emphasize uncertainty about this when we're talking about it.
00:15:20
Speaker
There might be artificial sentience, there might not be. And of course, as long as we don't know, there's a good case to be made for some sort of precautionary stance. If there's a certain chance that it's sentient, I think we should still take that seriously.
00:15:36
Speaker
What's your current take on this? What do you believe? Setting aside all of this uncertainty and setting aside that we don't have enough data to determine this, what do you believe about current AI models? Do you think they're sentient? What do you think about, say, AIs in the next 20 or 30 years? How likely do you think it is that these models will be sentient?
00:16:01
Speaker
Actually, I would probably say that this is quite unlikely, perhaps very unlikely. I'm pretty sure that the language models we have right now, like LaMDA, are probably not sentient. Now, within the next 20 or 30 years, that's also partly a question of how rapidly AI will advance at all. If it's not too rapid, if we just extrapolate what's happening so far, then I think it's
00:16:30
Speaker
rather unlikely that these systems will be sentient in the foreseeable future. So we're talking about something more distant, in my opinion, but some people might disagree. There are also, of course, philosophical questions here about the meaning of sentience and whether it is a binary thing or something gradual. Some people might say that they're sentient to a very small degree or something like that. Okay, that's maybe a different question.
00:16:59
Speaker
Yeah, so it raises very complicated philosophical and empirical questions, but I'm definitely not saying that the current systems are sentient. I don't really
00:17:11
Speaker
worry so much about how my laptop feels about me recording this podcast. Yeah, me neither. Okay, so if we return to the main topic, which is this issue of suffering risks, it's not really an objection, but it's a reaction that I have, and perhaps other people have to this problem, which is just to say, I think there's simply so much suffering in the world already,
00:17:39
Speaker
If we think about your example of animals in factory farms, perhaps we think about wild animals suffering in nature. We could even broaden it out to talk about alien civilizations that we haven't encountered yet with a lot of suffering. Perhaps even the universe is infinite and therefore could potentially contain an infinite amount of suffering.
00:18:04
Speaker
I understand that this might sound irrational, but do you think that there is simply too much suffering for us to do something about it? Are we overwhelmed by suffering such that we cannot help? Yeah, I mean, I can definitely understand that sentiment and sort of have felt that myself. So I sort of get where it's coming from.
00:18:25
Speaker
I do not think that it's objectively a very strong argument because the importance of helping one being sort of
00:18:35
Speaker
doesn't depend on how many other beings there are. If your actions can avert the suffering of, say, a thousand individuals, then the importance of that is not lessened by the fact that there is a million, a billion, a trillion that you weren't able to help because you still
00:18:56
Speaker
helped a thousand beings and reduced their suffering. So on this objective level, like if you're not emotional about it, I think this is not really a strong reason to stop caring about suffering. But I sort of understand how this might affect the personal motivation of many people when they think about this topic.
Staying Motivated in Tackling Grim Topics
00:19:20
Speaker
And how important do you think the issue of personal motivation is? Perhaps one problem with focusing on suffering risks is that it's very grim to think about and perhaps it's depressing to work with such that it might be difficult to stay motivated over the long term. How much of an issue do you think this is?
00:19:45
Speaker
I mean, I think these can be legitimate points, but it's also not objectively a strong reason not to work on s-risks. Or rather, what I would say is that for those people who have the psychological disposition to remain motivated to do something about it,
00:20:01
Speaker
they should still continue to work on s-risks. For people who really just find it too depressing or stressful and wouldn't be able to remain productive while working on this topic, maybe it's better to do something else. So one shouldn't force oneself to do something
00:20:21
Speaker
that is at odds with one's psychological predispositions. But I would also note that reducing s-risks can mean many things. It doesn't have to mean that you constantly think about worst-case scenarios. You can also just
00:20:35
Speaker
For instance, s-risks are arguably reduced if we manage to improve our political system, our political discourse, making it saner. Or s-risks are reduced if we spread better values. So if you find thinking about it too depressing, you can simply focus on those proxies, those risk factors for s-risks, and try to address them
00:20:56
Speaker
without thinking so much about the details, sort of outsourcing that to people like me, and focus on those other things. So that can be one way around it.
00:21:10
Speaker
What's the difference, do you think, motivationally and perhaps also objectively between framing this issue in terms of the upside or the downside? So what you're focusing on here is trying to avoid the worst possible downsides of what could happen in the future. Is it perhaps more motivating to think of developing or helping sustain a large flourishing civilization spreading in the universe?
00:21:36
Speaker
Or does that lead us astray somehow?
Preventing Downsides vs. Promoting Flourishing
00:21:41
Speaker
I can see how this is more motivating to some people, not so much for me personally. I would just say that people are all very different, so I don't have a single solution to
00:21:53
Speaker
those issues. I should note that my colleague Magnus Vinding at the Center for Reducing Suffering is planning to write a book called Compassionate Purpose, which sort of addresses those questions and tries to bridge the gap between
00:22:08
Speaker
rigorous ethics and these issues of personal motivation and self-development. So he might go into much more depth on that. I honestly do not feel that I have a very good solution to those issues of personal motivation. It works out for me, like I'm able to stay motivated to reduce risks. Although, I mean, I also do sometimes have doubts or find it depressing.
00:22:36
Speaker
I certainly get where the sentiment is coming from. In general, I think it's an interesting issue, this question of balancing our ethical theories and how we believe we should act, if we're thinking in optimal terms, with how we actually act in everyday life and all of the other considerations that impact how we think about ethics. So our personal motivations and
00:23:05
Speaker
all of these things actually do matter and, I think, should practically speaking be taken into account. I don't know if you think of it the same way. No, that sounds right.
Individual Impact on Future Generations
00:23:17
Speaker
I mean, I think what can be frustrating for many people is not so much the aspect of it being depressing and more the aspect of that a single person is always going to have only a marginal impact on it.
00:23:32
Speaker
But this holds for any altruistic cause. I never said it was going to be easy. So any person trying to reduce suffering in the future will only have a marginal impact. What does this mean? Could you tell us about the numbers here?
00:23:49
Speaker
It's just that I am one person among 8 billion and so the impact that my actions have on how the future is going to go is going to be relatively small. But I mean, as I was saying earlier, I don't think that is a good argument to despair because the absolute number of beings that we can help is still actually very large because there are such large numbers of animals and
00:24:13
Speaker
and sentient beings now and in the future. So that means that the absolute number of beings that we can help is still very large, despite the fact that it's a small fraction of the total that we can affect.
00:24:26
Speaker
So there's that. How do you think about the distribution of impact per person? If we take a person such as George Washington or Jesus or someone like that, they seem to me to have enormous impact over the future. Do you think this is less likely to happen now because, say, more avenues are explored by people, it's more difficult to make a difference
00:24:53
Speaker
Or do you think we could see a Washington or a Jesus now with the same level of impact, or perhaps even a bigger impact, on the future? Yeah, that's an interesting question, because you would, of course, always only know in retrospect. To people alive at Jesus' time, he was probably just one religious preacher among many. And it wasn't clear at the time that this was going to become a dominant
00:25:20
Speaker
religion. Likewise, it might just not be clear at the moment what is going to become a dominant idea or way of thinking. I think these issues of impact attribution can be very difficult, because can you attribute all this impact to Jesus or George Washington and so on? Or is it the people later on who endorsed those ideas
00:25:47
Speaker
who have sort of helped? Or is it George Washington in conjunction with the later people who endorsed his ideas? How do you divide the impact between those? That raises very complicated questions that I don't really have an answer to.
00:26:06
Speaker
Yeah, all right. Let's perhaps take an objection to suffering-focused ethics or to thinking about suffering risks that is more objective, in my opinion, at least.
Challenges in Reducing Future Suffering
00:26:18
Speaker
This is just the question of knowing what we should do. Say we're all on board with the program of trying to prevent future suffering. The next question, then, is whether we have good enough knowledge about how to act such that we actually reduce future suffering.
00:26:33
Speaker
How much of a problem do you think this is? And because this is a general problem, do you think it's specifically a problem for trying to reduce suffering risks? I would say it's probably a problem for anything that is about influencing the long-term future, right? That includes both suffering risks and other ways of influencing the long-term future. I would certainly agree that it is difficult to influence the long-term future for
00:27:01
Speaker
several reasons. One is simply that it is difficult to know what is going to happen in the future, that it's difficult to predict. Another reason is that future decision makers and actors might undo anything that we do now and just move in a different direction,
00:27:18
Speaker
unless there is some sort of lock-in. So I certainly agree that it's difficult to influence the long-term future. However, it is gradual, right? It's not all or nothing. It seems unlikely to me that there would be nothing whatsoever that we can do to reduce s-risks or influence the long-term future.
00:27:40
Speaker
And of course, then you can make an argument that the number of individuals affected by an s-risk, or more generally, the number of individuals that live in the future, is just so much larger than the number of beings in the present. If it's like a million or a billion times as large, then if you use an expected value framework, that sort of outweighs the difficulty of knowing what exactly to do about it. I mean, I think I can understand why, to some people,
00:28:09
Speaker
that doesn't feel very satisfactory on a gut level. But I nevertheless think it is objectively a good argument.
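To make that expected-value reasoning concrete, here is a minimal worked sketch with purely illustrative numbers; none of these figures come from the conversation. Suppose a near-term intervention reliably helps N beings, while long-term work would affect a population a billion times larger but succeeds only with probability one in a thousand. Then

\[
\mathbb{E}[\text{long-term impact}] \;=\; p \cdot N_{\text{future}} \;=\; 10^{-3} \cdot \bigl(10^{9} N\bigr) \;=\; 10^{6}\, N \;\gg\; N .
\]

On that framework even a small probability of success dominates, which is also what opens the door to the worry about fanaticism raised next.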
00:28:16
Speaker
So what we're thinking about here is that we have such astronomical stakes
Balancing Speculative Risks and Fanaticism
00:28:21
Speaker
there. There could potentially be so much suffering that we could prevent in the future that, even though we are very uncertain about how to prevent this suffering, the stakes make up for our uncertainty, in a sense, if you apply an expected value framework or take that calculation seriously.
00:28:40
Speaker
Does that argument make you uneasy? Because you could also perhaps take it even further and think about, well, what is the probability
00:28:55
Speaker
that there could be infinite happiness or infinite suffering. And that might then outweigh completely anything else you could think about. So is there a risk of becoming obsessed or fanatical when we think in this framework? Yeah, sort of Pascal's mugging. I mean, I definitely think that there is this risk. And so there's a balance to be struck here in several regards.
00:29:22
Speaker
You should take those more speculative ideas seriously, but also not too seriously, like taking them with a grain of salt and not putting too much faith in a particular speculative idea. I think there are good arguments for focusing on the long-term future, but I wouldn't go so far as to say that we should disregard the short term entirely. I mean, I think there is a lot of very valuable
00:29:45
Speaker
work that is focused on shorter-term issues. So the people that are advocating for wild animals now, I think they have a very valuable and worthwhile cause and should keep doing this. Likewise, you already mentioned the cause area of wild animals, animals living in nature, which actually constitute the vast majority of sentient beings on Earth right now.
00:30:08
Speaker
And so doing something to help them is also a very worthwhile cause. I mean, there are people doing important research on that at Animal Ethics and at the Wild Animal Initiative. And so, shout out to them.
00:30:21
Speaker
So in an effort to try to avoid becoming fanatical or obsessed with small probabilities of very large quantities of happiness or suffering, we spread out our activities such that we're doing something now and we're thinking about the future also.
Cognitive Biases in Ethical Considerations
00:30:41
Speaker
What I'm hearing you advocate for is simply that thinking about the risks of suffering in the future
00:30:49
Speaker
is relatively underweighted in our portfolio of actions compared to how we should weigh it. Is that the right way to frame this? Yeah, that seems right. I mean, it just makes sense for me personally to specialize in this because there are already lots of people that work on those other causes. So I've picked thinking about s-risks as what I am working on.
00:31:17
Speaker
So what you're alluding to is that s-risks are quite neglected, which is something I agree with. And then, of course, the interesting question is, what are the reasons for that?
00:31:28
Speaker
And some of them you already mentioned: it can be stressful to think about these scenarios, it's easier to look away, and it's not a pleasant topic for small talk and dinner conversations. That has been my experience, at least. And there are cognitive biases that can cause us to mistakenly dismiss s-risks, such as wishful thinking about how the future is going to be great anyway, or
00:31:56
Speaker
I mean, there's a concept of denial of suffering and of ongoing atrocities, and that sort of also applies to potential future atrocities. It's just more pleasant to look away, to flinch away and not bother with it, basically.
00:32:13
Speaker
Perhaps unpack that a little bit. So when you say wishful thinking and denial, how does that work? How does that cognitive bias influence our willingness to take suffering risks seriously?
00:32:28
Speaker
Yeah, so wishful thinking is just the tendency to endorse hypotheses that we wish to be true rather than those that are objectively likely to be true. And of course, we would all wish for s-risks to be extremely unlikely or merely speculative. We would all wish for the future to go well by default anyway, right? So that is
00:32:51
Speaker
a factor that can bias people towards dismissing s-risks. Although, of course, I also want to be careful here, because there might also simply be legitimate reasons.
00:33:03
Speaker
It might be a bit arrogant to just say, okay, everyone else is biased. Maybe I'm wrong. Maybe there are legitimate reasons why s-risks have not gotten that much attention, although the level of attention given to them, I think, is increasing and there is interest. So maybe it's true that these scenarios are very unlikely. Maybe it's true that you can't do anything about it.
00:33:27
Speaker
You strike me as a humble person. I don't think that you're just explaining away disagreements by accusing others of bias. I wrote down a list of potentially justified reasons to object here. But before we get to that, perhaps explain how the bias of scope neglect could influence our unwillingness to take suffering risks seriously.
00:33:52
Speaker
Yeah, that is another major bias that can lead people to dismiss s-risks. We just don't really feel the numbers on an intuitive level, right? I mean, the suffering of a single individual can feel more emotionally
00:34:08
Speaker
moving or touching than the potential, hypothetical future suffering of a trillion beings or something like that, which we don't tend to feel emotionally about. In fact, there's also something called proportion bias: it feels more satisfying to help 10 out of 10 than to help 11 out of 1,000, but objectively, of course, it is better to help 11, right? So that's why I would
00:34:36
Speaker
call this a bias. And then you also mentioned at some point belief digitization, which we could also call black-and-white thinking. And this perhaps is just that if we believe that, say, suffering risks
00:34:53
Speaker
are very unlikely, say that there's a 1% risk that one of these suffering events will occur in the future, then we round that down to zero and say it's impossible that it could happen. And you can see how that could influence our willingness to take it seriously.
00:35:14
Speaker
Do you think you have a disposition to be more objective in the way you handle probabilities? Or is it perhaps that you're educated in a certain way to understand small probabilities?
00:35:30
Speaker
I definitely think that it is very important in general to think in terms of nuances and avoid black-and-white, all-or-nothing thinking of any sort, simply because there is so much uncertainty about all of those questions. I don't know if there's anything special about me personally in that regard, so I want to be careful about that.
00:35:50
Speaker
And of course, just to be fair, there might also be biases going in the other direction. You might say that it's just sexy to think about those sci-fi scenarios, and that it's more intellectually exciting than getting your hands dirty and doing something in the real world. But is that why we're doing it? I don't know. That could be a potential counter-bias. But on balance,
00:36:18
Speaker
I do think that wishful thinking and the tendency to look away from these dark thoughts is a quite strong potential bias. What do you think is strongest here? Is it our tendency to look away, to deny suffering, or our unwillingness to engage with small probabilities of very bad events? What's most active here? I would probably say that the former
00:36:47
Speaker
is probably what I would give more weight to. I mean, there is this question of whether or not people are generally biased about small probabilities. I'm not entirely sure I'd buy that. I mean, sometimes people also talk too much about
00:37:03
Speaker
certain sorts of catastrophes that are very unlikely. So I think I could see this going both ways. I'm not sure if people are generally biased about small probabilities or, if so, in what way exactly; that seems complicated to me. Whereas the point that people tend to look away from suffering, from atrocities, from dark things, unpleasant things, that seems like a fairly clear and strong bias to me.
00:37:31
Speaker
How do you view the issue of us taking actions that
Risks in Preventing Suffering
00:37:38
Speaker
unintentionally increase future suffering. One thing I was thinking about was this issue of artificial sentience that we touched upon earlier. Imagine that, in an effort to prevent it, we research artificial sentience, but by researching it, we understand it better and make it easier to create artificial sentience.
00:38:03
Speaker
So that might be a case in which we are unintentionally increasing the risks of future suffering. Just because there's so much uncertainty in this area we're trying to engage with, might we do harm when we're trying to do good?
00:38:20
Speaker
Yeah, that's a great question, because there is definitely a risk that our actions can backfire in many different ways. And therefore, we should definitely be careful. One should avoid doing research that makes it easier to create artificial minds, at least as long as we do not have enough moral progress to ensure that they're treated well. That should come first, before you create them, ideally.
00:38:46
Speaker
Another conceivable risk is that one might inadvertently empower malicious actors, and there are PR risks. Perhaps the most important risk to be aware of is the risk of a backlash against certain ideas. I think there are certain ideas that we've discussed that might sound crazy to many people if they're not communicated in a careful way. And we don't want those things to become
00:39:16
Speaker
part of a polarized political debate, the culture war; I think that would be a very bad development if it were to happen. So that's perhaps the largest backfire risk to keep in mind. It's important to be careful about how to communicate those ideas and to emphasize positive-sum cooperation and personal integrity. Coming back to this
00:39:41
Speaker
idea of great uncertainty in general, I just think that it is a strong reason why we need to reflect carefully on our views and our priorities, and why we should do further research before jumping to any premature actions.
00:39:57
Speaker
Yeah, we could imagine various examples of, say, if you were concerned about trying to prevent future suffering in the past, say in the 13th century, what might you have grasped onto there, where if you were prematurely certain about what you were trying to do, you could have done a lot of harm?
00:40:18
Speaker
And perhaps we're in the same situation now.
Building Capacity for Future Prevention
00:40:22
Speaker
Would it be true to say that one of the main ways to engage with trying to prevent suffering risks is to gain knowledge, to do research, because we're at an early stage in our knowledge?
00:40:36
Speaker
Yes, I think that's definitely true. The way I would put it is that what's most important now is to put future s-risk reducers, or suffering reducers, in a better position to reduce s-risks. You could call it capacity building. One aspect of that is simply to get more people interested in the ideas, to build a community. But another aspect is what you could call wisdom building, to increase our knowledge of those issues.
00:41:06
Speaker
And that will put future people in a better position to reduce s-risks. It is precisely for that reason that I don't really share the sort of defeatism that we can't do anything about it now, because what you can do about it now is to research it and to put future people in a better position to do something about it.
00:41:27
Speaker
In terms of people in the past, it is also worth noting that sometimes they did things that were very impactful, like enlightenment philosophers writing about democracy and the rule of law and early animal advocates and so on. So it's not true that nobody in the past was able to have a positive and arguably like a predictably robustly positive impact.
00:41:52
Speaker
on the future. So that's maybe one argument to push back on this idea that we can't do anything. In addition, it's worth noting that reducing short-term suffering also isn't always that easy. When you take into account the fact that the vast majority of sentient beings on Earth are animals living in nature, perhaps
00:42:17
Speaker
marine animals, perhaps small animals like invertebrates, perhaps insects, if insects are sentient. And so then it's not so easy to know what to do about that either.
Long-term Impact of Short-term Actions
00:42:29
Speaker
So you're thinking that even when we're trying to reduce suffering in the short term,
00:42:36
Speaker
our actions, the consequences of our actions, will extend into the future, such that it's perhaps difficult to avoid thinking about the long-run effects of our actions when we're thinking about what to do.
00:42:52
Speaker
Yeah, I mean, that seems right to me. I don't think it's a very convincing approach to say that we should only look at short-term suffering, because, as you're saying, everything we do probably does have ripple effects on the longer term. The problem is more that it's not always easy to predict what those ripple effects are.
00:43:08
Speaker
So you could say that on average it's sort of 50-50 and washes out. I mean, I think there's something to that, but again, it's hard to argue that this is entirely true for everything that we do. Again, take the people in the past who were, for instance, promoting human rights or the rule of law. Was it really 50-50 whether that was good or bad? It seems to me that they could reasonably have
00:43:34
Speaker
thought, and did think, that it is good to spread those ideas, and they were right about that. I think it's wrong to argue that just everything is 50-50. That seems to be taking it too far. When we're thinking about reducing our uncertainty about reducing future suffering, we should perhaps take into account,
00:43:57
Speaker
or at least I think, that the fields of human knowledge where there are direct or short feedback loops are the fields where humans have advanced the most. And perhaps when we're thinking about influencing the long-term future, we can't get feedback because the results of our actions are, say, 100 or 1,000 years into the future. And so the feedback loops are very long.
00:44:22
Speaker
Is it perhaps a better approach to try to tackle problems that are more directly in front of us in time, so more short-term issues, than trying to learn from that and then be in a better position to influence the long-term future?
00:44:38
Speaker
That's an interesting argument, and I definitely agree that this lack of feedback loops is a major problem in this endeavor to reduce s-risks or to have an impact on the long-term future in general. This is also something that applies to all of longtermism, basically, not just s-risk reduction. I'm less convinced that
00:45:04
Speaker
doing some work to reduce short-term suffering is necessarily a good solution to that. That might have some feedback loops, but the question is whether those are the relevant feedback loops. Even there, what sort of feedback are you getting exactly? You can advocate for better animal welfare laws, for instance, and maybe you're getting feedback about whether or not
00:45:30
Speaker
you're actually going to be able to make progress on getting a certain specific law enacted. Okay, you can get feedback on that. But is that much feedback about how it helps us reduce suffering in the longer-term future? It's not necessarily a very clear feedback signal for the question of how we best reduce suffering in the long term, right? So that's maybe a problem with this sort of approach.
00:45:56
Speaker
Makes sense. So for the listeners who are interested in learning more about these topics, where should they go?
Resources for Learning More about Suffering Risks
00:46:03
Speaker
Which websites or books or papers should they read? Oh, gee, I mean, there are lots of possible things. So I would, of course, recommend the website of the Center for Reducing Suffering. The Center on Long-Term Risk is also interested in s-risk reduction, with
00:46:22
Speaker
a slightly different bent, more focused on AI, on cooperative AI. Maybe we're going to talk about that too later on.
00:46:32
Speaker
A classic would of course be Brian Tomasik's website on reducing suffering. I would recommend Magnus Vinding's blog. But of course, one shouldn't just read stuff from people you agree with. That's not a good practice. One should probably read a broader set of authors all over the internet on all sorts of topics.
00:46:58
Speaker
Definitely. Great. Tobias, thank you for coming on the podcast. It's been very interesting. Thanks for having me.