
On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark

Future of Life Institute Podcast
Neither Yuval Noah Harari nor Max Tegmark need much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity's future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial intelligence, history, philosophy and anthropology — to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you'll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us.

Topics discussed include:
-Max and Yuval's views and intuitions about consciousness
-How they ground and think about morality
-Effective altruism and its cause areas of global health/poverty, animal suffering, and existential risk
-The function of myths and stories in human society
-How emerging science, technology, and global paradigms challenge the foundations of many of our stories
-Technological risks of the 21st century

You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/31/on-consciousness-morality-effective-altruism-myth-with-yuval-noah-harari-max-tegmark/

Timestamps:
0:00 Intro
3:14 Grounding morality and the need for a science of consciousness
11:45 The effective altruism community and its main cause areas
13:05 Global health
14:44 Animal suffering and factory farming
17:38 Existential risk and the ethics of the long-term future
23:07 Nuclear war as a neglected global risk
24:45 On the risks of near-term AI and of artificial general intelligence and superintelligence
28:37 On creating new stories for the challenges of the 21st century
32:33 The risks of big data and AI-enabled human hacking and monitoring
47:40 What does it mean to be human and what should we want to want?
52:29 On positive global visions for the future
59:29 Goodbyes and appreciations
01:00:20 Outro and supporting the Future of Life Institute Podcast

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Transcript

Introduction with Yuval Noah Harari and Max Tegmark

00:00:04
Speaker
Welcome to the Future of Life Institute podcast. I'm Lucas Perry. Today, I'm excited to be bringing you a conversation between professor, philosopher, and historian Yuval Noah Harari and MIT physicist, AI researcher, and Future of Life Institute president Max Tegmark.
00:00:24
Speaker
Yuval is the author of the popular science bestsellers Sapiens: A Brief History of Humankind, Homo Deus: A Brief History of Tomorrow, and 21 Lessons for the 21st Century. Max is the author of Our Mathematical Universe and Life 3.0: Being Human in the Age of Artificial Intelligence.

Morality, Consciousness, and Technology Risks

00:00:47
Speaker
This episode covers a variety of topics related to the interests and work of both Max and Yuval. It requires some background knowledge for everything to make sense, and so I'll try to provide some necessary information for listeners unfamiliar with the area of Max's work in particular here in the intro. If you already feel well acquainted with Max's work, feel free to skip ahead a minute or use the timestamps in the description for the podcast.
00:01:17
Speaker
Topics discussed in this episode include morality, consciousness, the effective altruism community, animal suffering, existential risk, the function of myths and stories in our world, and the benefits and risks of emerging technology.
00:01:35
Speaker
For those new to the podcast or effective altruism, effective altruism or EA for short is a philosophical and social movement that uses evidence and reasoning to determine the most effective ways of benefiting and improving the lives of others.
00:01:51
Speaker
And existential risk is any risk that has the potential to eliminate all of humanity, or at the very least, to kill large swaths of the global population and leave the survivors unable to rebuild society to current living standards.
00:02:08
Speaker
Advanced emerging technologies are the most likely source of existential risk in the 21st century. For example, through unfortunate uses of synthetic biology, nuclear weapons, and powerful future artificial intelligence that is misaligned with human values and objectives.

Support for Future of Life Institute

00:02:30
Speaker
The Future of Life Institute is a nonprofit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org. These contributions make it possible for us to bring you conversations like these and to develop the podcast further.
00:02:55
Speaker
You can also follow us on your preferred listening platform by searching for us directly or following the links on the page for this podcast found in the description.

Morality and Consciousness in AI

00:03:04
Speaker
And with that, here's our conversation between Max Tegmark and Yuval Noah Harari.
00:03:14
Speaker
Maybe to start, then, at a place where I think you and I both agree, even though it's controversial. I get the sense from reading your books that you feel that morality has to be grounded on experience, subjective experience, or just what I like to call consciousness.
00:03:29
Speaker
I love this argument you've given, for example, that people who think consciousness is just bullshit and irrelevant, you challenge them to tell you what's wrong with torture, if it's just a bunch of electrons and quarks moving around this way rather than that way. Yeah, I think that there is no morality without consciousness and without subjective experiences. At least for me, this is very, very obvious.
00:03:52
Speaker
One of my concerns, again, if I think about the potential rise of AI, is that AI will be superintelligent but completely non-conscious, which is something that we never had to deal with before. So much of the philosophical and theological discussion has been about what happens when there is a greater intelligence in the world.
00:04:11
Speaker
We've been discussing this for thousands of years with God, of course, as the object of discussion, but the assumption always was that this greater intelligence would be A, conscious in some sense and B, good or infinitely good.
00:04:28
Speaker
And therefore, I think that the question we are facing today is completely different. And to a large extent, I suspect that we are really facing philosophical bankruptcy, that what we have done for thousands of years didn't really prepare us for the kind of challenge that we have now.
00:04:45
Speaker
I certainly agree that we have a very urgent challenge there.

Intelligence vs. Consciousness in AI

00:04:49
Speaker
I think there is an additional risk which comes from the fact that, you know, I'm embarrassed as a scientist, but we actually don't know for sure which kinds of information processing are conscious and which are not.
00:05:00
Speaker
For many, many years, I've been told, for example, that it's okay to put lobsters in hot water to boil them alive before we eat them because they don't feel any suffering. And then I guess some guy asked the lobster, does this hurt? And it didn't say anything. And it was a self-serving argument. But then there was a recent study out that showed that actually lobsters do feel pain. And, you know, they banned lobster boiling in Switzerland now. I'm very nervous whenever we humans make these very self-serving arguments saying, oh, you know, don't worry about the slaves.
00:05:30
Speaker
It's okay, they don't feel anything, they don't have a soul, they won't suffer. Or that women don't have souls, or that animals can't suffer. I'm very nervous that we're going to make the same mistakes with machines just because it's so convenient. Whereas I feel the honest truth is, yeah, maybe future superintelligent machines won't have any experience, but maybe they will.
00:05:50
Speaker
I think we really have a moral imperative there to do the science to answer that question because otherwise we might be creating enormous amounts of suffering that we don't even know exists.
00:06:00
Speaker
For this reason and for several other reasons, I think we need to invest as much time and energy in researching consciousness as we do in researching and developing intelligence. If we develop sophisticated artificial intelligence before we really understand consciousness, there are a lot of really
00:06:23
Speaker
big ethical problems that we just don't know how to solve. One of them is the potential existence of some kind of consciousness in these AI systems, but there are many, many others.
00:06:35
Speaker
I'm so glad to hear you say this, actually, because I think we really need to distinguish between artificial intelligence and artificial consciousness. Some people just take for granted that they're the same thing. Yeah, I'm really amazed by it. I've been having quite a lot of discussions about these issues over the last two or three years, and I'm repeatedly amazed that a lot of brilliant people
00:06:56
Speaker
just don't understand the difference between intelligence and consciousness. I mean, it comes up in discussions about animals, but it also comes up in discussions about computers and about AI. To some extent, the confusion is understandable, because in humans and other mammals and other animals, consciousness and intelligence really go together. But we can't assume that this is a law of nature and that it's always like that.
00:07:22
Speaker
In a very, very simple way, I would say that intelligence is the ability to solve problems. Consciousness is the ability to feel things like pain and pleasure and love and hate. Now in humans and chimpanzees and dogs and maybe even lobsters, we solve problems by having feelings. A lot of the problems we solve, who to mate with and where to invest our money and who to vote for in the elections, we rely on our feelings to make these decisions.
00:07:51
Speaker
But computers make decisions in a completely different way. At least today, very few people would argue that computers are conscious. And still, they can solve certain types of problems much, much better than we can. They have high intelligence in a particular field without having any consciousness. And maybe they will eventually reach superintelligence without ever developing consciousness.
00:08:19
Speaker
And we don't know enough about these ideas of consciousness and superintelligence, but it's at least feasible that you can solve all problems better than human beings and still have zero consciousness.

Non-organic Consciousness and Morality

00:08:33
Speaker
You just do it in a different way, just like airplanes fly much faster than birds without ever developing feathers.
00:08:41
Speaker
Right. That's definitely one of the reasons why people are so confused. There are two other reasons I notice also among even very smart people why they are utterly confused on this. One is there's so many different definitions of consciousness. Some people define consciousness in a way that's almost equivalent to intelligence. But if you define it the way you did, the ability to feel things, simply having subjective experience,
00:09:04
Speaker
I think a lot of people get confused because they have always thought of subjective experience, and intelligence for that matter, as something mysterious that can only exist in biological organisms like us. Whereas what I think we're really learning from the whole last century of progress in science is that, no, you know, intelligence and consciousness are all about information processing.
00:09:27
Speaker
People fall prey to this carbon chauvinism idea that it's only carbon or meat that can have these traits, whereas in fact it really doesn't matter whether the information is processed by a carbon atom in a neuron in the brain or by a silicon atom in a computer.
00:09:41
Speaker
I'm not sure I completely agree. I mean, we still don't have enough data on that. There doesn't seem to be any reason that we know of that consciousness would be limited to carbon-based life forms, but so far this is the case. So maybe we don't know something. My hunch is that it could be possible to have non-organic consciousness, but until we have better evidence,
00:10:07
Speaker
There is an open possibility that maybe there is something about organic biochemistry which is essential and we just don't understand. And there is also another open question: we are not really sure that consciousness is just about information processing. I mean, at present, this is the dominant view in the life sciences, but we don't really know, because we don't understand consciousness.
00:10:29
Speaker
My personal hunch is that non-organic consciousness is possible, but I wouldn't say that we know that for certain. And the other point is that really, if you think about it in the broadest sense possible, I think that there is an entire potential universe of different conscious states, and we know just a tiny, tiny bit of it.
00:10:52
Speaker
Again, I'm thinking a little about different life forms. So human beings are just one type of life form and there are millions of other life forms that existed and billions of potential life forms that never existed but might exist in the future.
00:11:09
Speaker
And it's a bit like that with consciousness that we really know just human consciousness. We don't understand even the consciousness of other animals. And beyond that, potentially there is an infinite number of conscious states or traits that never existed and might exist in the future.
00:11:29
Speaker
I agree with all of that. And I think if you can have non-organic consciousness, artificial consciousness, which would be my guess, although we don't know it, I think it's quite clear then that the mind space of possible artificial consciousness is vastly larger than anything that evolution has given us.

Effective Altruism and Focused Causes

00:11:44
Speaker
So we have to have a very open mind.
00:11:46
Speaker
If we simply take away from this that we should understand which entities, biological and otherwise, are conscious and can experience suffering, pleasure and so on, and we try to base our morality on this idea that we want to create more positive experiences and eliminate suffering, then this leads straight into what I find very much at the core of the so-called effective altruism community.
00:12:09
Speaker
which we at the Future of Life Institute view ourselves as part of, where the idea is that we want to do what we can to help make a future that's good in that sense: lots of positive experiences, not negative ones. And we want to do it effectively. We want to put our limited time and money and so on into those efforts which will make the biggest difference. And the EA community has for a number of years been highlighting a top three list of issues that they feel are the ones that are most worth putting effort into in this sense.
00:12:36
Speaker
One of them is global health, which is very, very non-controversial. Another one is animal suffering and reducing it. And the third one is preventing life from going extinct by doing something stupid with technology. I'm very curious whether you feel that the EA movement has basically picked out the correct three things to focus on or whether you have things you would subtract from that list or add to it. Global health, animal suffering, x-risk.
00:13:05
Speaker
I think that nobody can do everything. Whether you're an individual or an organization, it's a good idea to pick a good cause and then focus on it, and not spend too much time wondering about all the other things that you might do. These three causes are certainly some of the most important in the world.
00:13:27
Speaker
I would just say that about the first one, it's not easy at all to determine what are the goals. I mean, as

Future of Healthcare and Ethical Challenges

00:13:36
Speaker
long as health means simply fighting illnesses and sicknesses and bringing people up to what is considered as a normal level of health, then that's not very problematic. But in the coming decades, I think that the healthcare industry would focus more and more not on fixing problems,
00:13:57
Speaker
but rather on enhancing abilities, enhancing experiences, enhancing bodies and brains and minds and so forth. And that's much, much more complicated, both because of the potential issues of inequality and simply that we don't know where to aim for.
00:14:16
Speaker
One of the reasons that when you ask me at first about morality, I focus on suffering and not on happiness, is that suffering is a much clearer concept than happiness. And that's why when you talk about healthcare, if you think about this image of the line of normal health, like the baseline of what's a healthy human being,
00:14:38
Speaker
it's much easier to deal with things falling under this line than things that potentially are above this line. So I think even this first issue, it will become extremely complicated in the coming decades.
00:14:54
Speaker
And then for the second issue on animal suffering, you've used some pretty strong words before. You've said that industrial farming is one of the worst crimes in history, and you've called the fate of industrially farmed animals one of the most pressing ethical questions of our time.
00:15:09
Speaker
A lot of people would be quite shocked when they hear you using such strong words about this, since they routinely eat factory-farmed meat. How do you explain it to them? This is quite straightforward. I mean, we are talking about billions upon billions of animals. The majority of large animals today in the world are either humans or our domesticated animals, cows and pigs and chickens and so forth. And so we're talking about a lot of animals, and we are talking about a lot of pain and misery.
00:15:38
Speaker
The industrially farmed cow and chicken are probably competing for the title of the most miserable creature that ever existed. They are capable of experiencing a wide range of sensations and emotions, and in most of these industrial facilities, they are experiencing the worst possible sensations and emotions.
00:16:00
Speaker
In my case, you're preaching to the choir here. I find this so disgusting that my wife and I just decided to mostly be vegan. I don't go preach to other people about what they should do, but I just don't want to be part of this. It reminds me so much also of things you've written about yourself, about how people used to justify having slaves before, by saying, oh, it's the white man's burden. We're helping the slaves. It's good for them. And much the same way now, we make these very self-serving arguments for why we should be doing this.
00:16:30
Speaker
What do you personally take away from this? Do you eat meat now, for example? Personally, I define myself as vegan-ish. I mean, I'm not strictly vegan. I don't want to make a kind of religion out of it and start thinking in terms of purity and whatever. I try to limit as far as possible my involvement with industries that harm animals for no good reason. And it's not just meat and dairy and eggs. It can be other things as well.
00:16:58
Speaker
The chains of causality in the world today are so complicated that you cannot really extricate yourself completely.

Existential Risks and Human Short-sightedness

00:17:06
Speaker
It's just impossible.
00:17:08
Speaker
So for me and also what I tell other people is just do your best and don't make it into a kind of religious issue. If somebody comes and tells you that, you know, I'm now thinking about this animal suffering and I decided to have one day a week without meat, then don't start blaming this person for eating meat the other six days. Just congratulate them on making one step in the right direction.
00:17:31
Speaker
Yeah, that sounds not just like good morality, but also like good psychology, if you actually want to nudge things in the right direction. And then coming to the third one, existential risk. There, I love how Nick Bostrom asks us to compare these two scenarios, one in which some calamity kills 99% of all people, and another where it kills 100% of all people. And he asks, how much worse is the second one?
00:17:56
Speaker
The point being, obviously, as you know, that if we kill everybody, we might actually forfeit having billions or quadrillions or more of future minds in the future, experiencing all these amazing things for billions of years. This is not something I've seen you talk as much about in your writing. So I'm very curious how you think about this morally, how you weigh future experiences that could exist versus the ones that we know exist now.
00:18:20
Speaker
I don't really know, I don't think that we understand consciousness and experience well enough to even start making such calculations. In general my suspicion, at least based on our current knowledge, is that it's simply not a mathematical entity that can be calculated.
00:18:38
Speaker
So, you know, all these philosophical riddles that people sometimes enjoy so much debating about whether, you know, you have five people of this kind and a hundred people of that kind and who should you save and so forth and so on. It's all based on the assumption that experience is a mathematical entity that can be added and subtracted. And my suspicion is that it's just not like that.
00:19:00
Speaker
To some extent, yes, we make these kinds of comparisons and calculations all the time. But on a deeper level, I think it's taking us in the wrong direction. At least at our present level of knowledge, it's not like eating ice cream is one point of happiness, killing somebody is a million points of misery. So if by killing somebody we can allow one million and one persons to enjoy ice cream, it's worth it.
00:19:28
Speaker
I think the problem here is not that we've given the wrong points to the different experiences, it's just that it's not a mathematical entity in the first place. I know that in some cases we have to do these kinds of calculations, but I would be extremely careful about it, and I would definitely not use it as the basis for building entire moral and philosophical projects.
00:19:53
Speaker
I certainly agree with you that it's an extremely difficult set of questions you get into if you try to trade off positives against negatives, like we mentioned in the ice cream versus murder case there. But I still feel that all in all as a species, we tend to be a little bit too sloppy and flippant about the future. And maybe partly because we haven't evolved to think so much about what happens in billions of years anyway. And if we look at how reckless we've been with nuclear weapons, for example,
00:20:20
Speaker
I recently was involved with our organization giving an award to honor Vasily Arkhipov, who quite likely prevented nuclear war between the US and the Soviet Union, and most people hadn't even heard about it for 40 years. More people have heard of Justin Bieber than of Vasily Arkhipov, even though I would argue that nuclear war would really unambiguously have been a really, really bad thing, and that we should celebrate
00:20:42
Speaker
people who do courageous acts to prevent nuclear war, for instance. And in the same spirit, I often feel concerned that there's so little attention even paid to risks that we drive ourselves extinct or cause giant catastrophes compared to how much attention we pay to the Kardashians or whether we can get 1% less unemployment next year. So I'm curious if you have some sympathy for my angst here or whether you think I'm overreacting.
00:21:09
Speaker
I completely agree. I often put it this way: we are now kind of irresponsible gods. Certainly with regard to the other animals and the ecological system, and with regard to ourselves, we have really divine powers of creation and destruction, but we don't take our job seriously enough. We tend to be very irresponsible in our thinking and in our behavior.
00:21:32
Speaker
On the other hand, part of the problem is that the number of potential apocalypses is growing exponentially over the last 50 years. And as a scholar and as a communicator, I think it's part of our job to be extremely careful in the way that we discuss this issue with the general public.
00:21:54
Speaker
And it's very important to focus the discussion on the more likely scenarios, because if we just go on bombarding people with all kinds of potential scenarios of complete destruction, very soon we just lose people's attention.
00:22:11
Speaker
they become extremely pessimistic that everything is hopeless. So why worry about all that? So I think part of the job of the scientific community and people who deal with these kinds of issues is to really identify the most likely scenarios and focus the discussion on that. Even if there are some other scenarios which have a small chance of occurring and completely destroying all of humanity and maybe all of life, but we just can't deal with everything at the same time.
00:22:41
Speaker
I completely agree with that, with one caveat. I think it's very much in the spirit of effective altruism, what you said: we want to focus on the things that really matter the most and not turn everybody into hypochondriacs, paranoid and worried about everything. The one caveat I would give is that we shouldn't just look at the probability of each bad thing happening, but we should look at the expected damage it will do: the probability times how bad it is. I agree.
00:23:07
Speaker
Because with nuclear war, for example, maybe the chance of having an accidental nuclear war between the US and Russia is only 1% per year, or 10% per year, or 1 in 1,000 per year. But if you have the nuclear winter caused by that, by soot and smoke in the atmosphere blocking out the sun for years, that could easily kill 7 billion people, most people on Earth, through mass starvation, because it would be about 20 degrees Celsius colder.
00:23:30
Speaker
That means that on average, if it's 1% chance per year, which seems small, you're still killing, on average, 70 million people. That's the number that sort of matters, I think. That means we should make it a higher priority to reduce that more.
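To make the expected-damage arithmetic concrete, here is a minimal sketch of the "probability times severity" reasoning Max describes, using the illustrative figures from the conversation rather than actual risk estimates:

```python
# A minimal sketch of the "probability times severity" reasoning described
# above. The figures are the illustrative numbers from the conversation,
# not actual risk estimates.

def expected_annual_deaths(annual_probability: float, deaths_if_it_occurs: float) -> float:
    """Expected damage per year = probability of the event times its severity."""
    return annual_probability * deaths_if_it_occurs

# A "small" 1% yearly chance of a nuclear winter killing about 7 billion people
print(expected_annual_deaths(0.01, 7_000_000_000))  # 70000000.0 expected deaths/year

# Compare with a far more likely but far less severe hypothetical event
print(expected_annual_deaths(0.5, 100_000))         # 50000.0 expected deaths/year
```

On this accounting, the low-probability catastrophe dominates by orders of magnitude, which is the sense in which Max argues it deserves higher priority.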
00:23:43
Speaker
With nuclear war, I would say that we are not concerned enough. I mean, too many people, including politicians, have this weird impression that, well, nuclear war, that's history. No, that was in the 60s and 70s, people worried about it. Exactly. It's not a 21st century issue. You know, this is ridiculous. I mean, we are now in even greater danger, at least in terms of the technology, than we were in the Cuban Missile Crisis.
00:24:07
Speaker
But you must remember this in Stanley Kubrick's Dr. Strangelove. Oh, one of my favorite films of all time. Yes. So the subtitle of the film is How I Stopped Fearing and Learned to Love the Bomb. Exactly. And the funny thing is, it actually happened. People have stopped fearing them. Maybe they don't love it very much. But compared to the 50s and 60s, people just don't talk about it. Like you look at the Brexit debate in Britain, and Britain is one of the leading nuclear powers in the world.
00:24:37
Speaker
And it's not even mentioned. It's not part of the discussion anymore. And that's very problematic because I think that this is a very serious existential threat. But I'll take a counter example, which is in the field of AI. Even though I understand the philosophical importance of discussing the possibility of general AI emerging in the future, and then rapidly taking over the world, and you know, all the paperclip scenarios and so forth,
00:25:07
Speaker
I think that at the present moment it really distracts people's attention from the immediate dangers of the AI arms race, which have a far, far higher chance of materializing in the next, say, 10, 20, 30 years.
00:25:23
Speaker
And we need to focus people's minds on these short-term dangers. And I know that there is a small chance that general AI would be upon us, say, in the next 30 years. But I think it's a very, very small chance.
00:25:42
Speaker
Whereas the chance that primitive AI would completely disrupt the economy, the political system, and human life in the next 30 years is about 100%. It's bound to happen.
00:25:58
Speaker
And I worry far more about what primitive AI will do to the job market, to the military, to people's daily lives than about general AI appearing in the more distant future.

Technology's Dual Potential

00:26:12
Speaker
Yeah, a few reactions to this. We can talk more about artificial general intelligence and superintelligence later if we get time. But there was a recent survey of AI researchers around the world asking what they thought, and I was interested to note that actually most of them guessed that we will get artificial general intelligence within decades. So I wouldn't say that the chance is small, but I would agree with you that it's certainly not going to happen tomorrow. But if we eat our vitamins, you and I, and meditate and go to the gym,
00:26:40
Speaker
it's quite likely we will actually get to experience it. But more importantly, coming back to what you said earlier, I see all of these risks as really being one and the same risk, in the sense that what's happened is, of course, that science has kept getting ever more powerful, and science therefore gives us ever more powerful technology.
00:26:59
Speaker
And you know, I love technology. I'm a nerd. I work at a university that has technology in its name, and I'm optimistic that we can create an inspiring high-tech future for life if we win what I like to call the wisdom race: the race between the growing power of the technology and the growing wisdom with which we manage it. Or, putting it in the words that you just used,
00:27:18
Speaker
if we can basically learn to take more seriously our job as stewards of this planet. You can look at every science and see exactly the same thing happening. We physicists are kind of proud that we gave the world cell phones and computers and lasers, but our problem child has been nuclear energy, obviously, and nuclear weapons in particular.
00:27:38
Speaker
Chemists are proud that they gave the world all these great new materials, and their problem child is climate change. Biologists in my book actually have done the best so far. They actually got together in the 70s and persuaded leaders to ban biological weapons and draw a clear red line more broadly between what was acceptable and unacceptable uses of biology, and that's why today
00:28:00
Speaker
Most people think of biology as really a force for good, something that cures people or helps them live healthier lives. And I think AI is right now lagging a little bit behind in time. It's finally getting to the point where it's starting to have an impact, and AI researchers are grappling with the same kind of questions.

New Narratives for Modern Challenges

00:28:16
Speaker
They haven't had big disasters yet, so they're in the biology camp there, but they're trying to figure out where they draw the line between acceptable and unacceptable uses. So you don't get a crazy military AI arms race and lethal autonomous weapons.
00:28:29
Speaker
So you don't create very destabilizing income inequality, so that AI doesn't create 1984 on steroids, et cetera. And I wanted to ask you about what sort of new story you feel we need as a society in order to tackle these challenges. I've been very, very persuaded by your arguments that stories are so central to society, for us to collaborate and accomplish stuff. But you've also made a really compelling case, I think, that the most popular recent stories
00:28:58
Speaker
are all getting less powerful or popular: communism, and there's a lot of disappointment in liberalism. And it feels like a lot of people are kind of craving a new story that involves technology somehow and that can help us get our act together and also help us feel meaning and purpose in this world. But I've never seen in your books a clear answer to what you feel this new story should be.
00:29:23
Speaker
because I don't know. If I knew the new story, I would tell it. I think we are now in a kind of double bind or we have to fight on two different fronts. On the one hand, we are witnessing in the last few years the collapse of the last big modern story of liberal democracy and liberalism more generally, which has been, I would say, the best story humans ever came up with.
00:29:47
Speaker
And it did create the best world that humans ever enjoyed. I mean, the world of the late 20th century and early 21st century, with all its problems, it's still better for humans, not for cars or chickens. For humans, it's still better than at any previous moment in history.
00:30:05
Speaker
There are many problems, but anybody who says that this was a bad idea, I would like to hear which year you are thinking about as a better year. If not now, in 2019, when was it better? In 1919, in 1719, in 1219? I mean, for me, it's obvious. This has been the best story we have come up with.
00:30:26
Speaker
That's so true. I have to just admit that whenever I read the news for too long, I start getting depressed. But then I always cheer myself up by reading history and reminding myself that it used to be worse. That never fails, even with the last four years being quite bad. Things are deteriorating, but we are still better off than in any previous year.
00:30:44
Speaker
But people are losing faith in this story. We are really reaching a situation of zero story. All the big stories of the 20th century have collapsed or are collapsing, and the vacuum is currently filled by nostalgic fantasies, nationalistic and religious fantasies, which simply don't offer any real solutions to the problems of the 21st century. So on the one hand, we have the task
00:31:12
Speaker
of supporting or reviving the liberal democratic system, which is so far the only game in town. I keep listening to the critics and they have a lot of valid criticism, but I'm waiting for the alternative
00:31:27
Speaker
And the only thing I hear is completely unrealistic nostalgic fantasies about going back to some past golden era that, as a historian, I know was far, far worse. And even if it was not so far worse, you just can't go back there. You can't recreate the 19th century or the Middle Ages under the conditions of the 21st century. It's impossible.
00:31:50
Speaker
So we have this one struggle to maintain what we have already achieved, but then at the same time, on a much deeper level, my suspicion is that the liberal story, as we know it at least, is really not up to the challenges of the 21st century, because it's built on foundations that the new science, and especially the new technologies of artificial intelligence and bioengineering, are just destroying.
00:32:17
Speaker
The beliefs we have inherited, in the autonomous individual, in free will, in all these basically liberal mythologies, will become increasingly untenable in contact with powerful new bioengineering and artificial intelligence.
00:32:34
Speaker
To put it in a very, very concise way, I think we are entering the era of hacking human beings, not just hacking smartphones and bank accounts, but really hacking Homo sapiens, which was impossible before. AI gives us the computing power necessary, and biology gives us the necessary biological knowledge. When you combine the two, you get the ability to hack human beings.
00:32:58
Speaker
And if you continue to try and build society on the philosophical ideas of the 18th century about the individual and free will and all that, in a world where it's feasible, technically, to hack millions of people systematically, it's just not going to work.
00:33:14
Speaker
And we need an updated story. I'll just finish with this thought. Our problem is that we need to defend the story from the nostalgic fantasies at the same time that we are replacing it with something else. And it's just very, very difficult. When I began writing my books, like five years ago, I thought the real project was to really go down to the foundations of the liberal story, expose the difficulties and build something new.
00:33:44
Speaker
And then you had all these nostalgic populist eruptions over the last four or five years, and I personally find myself more and more engaged in defending the old-fashioned liberal story instead of replacing it.
00:34:01
Speaker
Intellectually, it's very frustrating because I think the really important intellectual work is finding out a new story. But politically, it's far more urgent. If we allow the emergence of some kind of populist authoritarian regimes, then whatever comes out of it will not be a better story.
00:34:22
Speaker
Yeah. Unfortunately, I agree with your assessment here. I love to travel. I work in basically the United Nations-like environment of my university with students from all around the world. And I have this very strong sense that people are feeling increasingly lost around the world today.
00:34:39
Speaker
because the stories that used to give them a sense of purpose and meaning and so on are sort of dissolving in front of their eyes. And of course, when we feel lost, we're likely to jump on whatever branches are held out to us. And they are often just retrograde things: let's go back to the good old days, and other unrealistic things. But I agree with you that the rise in populism we're seeing now is not the cause, it's a symptom of people feeling lost.
00:35:06
Speaker
So I think it was a little bit unfair to ask you to answer the toughest question of our time in a few minutes: what should our new story be? But maybe we could break it into pieces a little bit and say, what are at least some elements that we would like the new story to have? It should accomplish, of course, multiple things. It has to incorporate technology in a meaningful way, which our past stories did not. It has to incorporate AI progress and biotech, for example. It also has to be a truly global story this time, I think,
00:35:36
Speaker
which isn't just a story about how America's going to get better off or China's going to get better off, but one about how we're all going to get better off together. And we could put up a whole bunch of other requirements. If we start maybe with this part about the global nature of the story: people disagree violently about so many things around the world, but are there any ingredients of the story at all that you think people around the world would already agree to, some principles or ideas?
00:36:02
Speaker
Again, I don't really know. I mean, I don't know what the new story would look like. Historically, these kinds of really grand narratives, they aren't created by two or three people having a discussion and thinking, okay, what new stories should we tell? It's far deeper and more powerful forces that come together to create these new stories.
00:36:25
Speaker
I mean, even trying to say, okay, we don't have the full view, but let's try to put a few ingredients in place. The whole thing about the story is that the whole comes before the parts. The narrative is far more important than the individual facts that build it up. So I'm not sure that we can start creating the story by just, okay, let's put the first few sentences and who knows how it will continue.
00:36:50
Speaker
You wrote books. I write books. We know that the first few sentences are usually the last sentences that you write. That's right. Only when you know what the whole book is going to look like do you go back to the beginning and write the first few sentences.
00:37:03
Speaker
Yeah, sometimes the very last thing you write is the new title. So I agree that whatever the new story is going to be, it's going to be global. The world is now too small and too interconnected to have just a story for one part of the world. It won't work. And also it will have to take very seriously both the most updated science and the most updated technology.
00:37:27
Speaker
something that, you know, liberal democracy as we know it, it's basically still in the 18th century. It's taking an 18th century story and simply following it to its logical conclusions. For me, maybe the most amazing thing about liberal democracy is it really completely disregarded all the discoveries of the life sciences over the last two centuries. And of the technical sciences. I mean, as if Darwin never existed.
00:37:55
Speaker
And we know nothing about evolution. I mean, you can basically meet these folks from the middle of the 18th century, whether it's Rousseau or Jefferson and all these guys. And they will be surprised by some of the conclusions we have drawn from the basis they provided us.
00:38:12
Speaker
But fundamentally, nothing has changed. Darwin didn't really change anything. Computers didn't really change anything.

Systems Knowing Individuals Better Than Themselves

00:38:21
Speaker
And I think the next story won't have that luxury of being able to ignore the discoveries of science and technology. The number one thing that we'll have to take into account is how do humans live in a world when there is somebody out there that knows you better than you know yourself
00:38:41
Speaker
But that somebody isn't God. That somebody is a technological system which might not be a good system at all. That's a question we never had to face before. We could always comfort ourselves with the idea that we are kind of a black box to the rest of humanity. Nobody can really understand me better than I understand myself.
00:39:02
Speaker
the king, the emperor, the church, they don't really know what's happening within me. Maybe God knows, so we had a lot of discussions about what to do with that, the existence of a God who knows us better than we know ourselves. But we didn't really have to deal with a non-divine system that can hack us. And this system is emerging, I think it will be in place within our lifetime.
00:39:27
Speaker
In contrast to general artificial intelligence, which I'm skeptical whether I'll see in my lifetime, I'm convinced we will see, if we live long enough, a system that knows us better than we know ourselves. And the basic premises of democracy, of free market capitalism, even of religion, just don't work in such a world.
00:39:49
Speaker
How does democracy function in a world when somebody understands a voter better than the voter understands herself or himself? And the same with the free market. I mean, if the customer is not right, if the algorithm is right, then we need a completely different economic system. That's the big question that I think we should be focusing on. I don't have the answer, but whatever story will be relevant to the 21st century, we'll have to answer this question.
00:40:19
Speaker
I certainly agree with you that democracy has totally failed to adapt to the developments in the life sciences, and I would add to that, to the developments in the natural sciences too. I watched all of the debates between Trump and Clinton in the last election here in the US, and I didn't notice artificial intelligence getting mentioned even a single time, not even when they talked about jobs.
00:40:38
Speaker
And the voting system: we have an electoral college system here where it doesn't even matter how people vote except in a few swing states, so there's very little influence from the voter on what actually happens. Even though we now have blockchain and could easily implement technical solutions where people would be able to have much more influence, it just reflects that we basically declared victory on our democratic system
00:41:01
Speaker
hundreds of years ago and haven't updated it since. And I'm very interested in how we can dramatically revamp it, if we believe in some form of democracy, so that we actually can have more influence on how our society is run as individuals, and how we can have good reason to actually trust the system, if it is able to hack us, that it is actually working in our best interest.
00:41:20
Speaker
There is a key tenet in religions that you're supposed to be able to trust that God has your best interest in mind,

Ethical Technology and Societal Change

00:41:26
Speaker
right? And I think many people in the world today do not trust that their political leaders actually have their best interest in mind. Certainly, I mean, that's the issue. You give really divine powers to far from divine systems.
00:41:42
Speaker
We shouldn't be too pessimistic. I mean, the technology is not inherently evil either. And what history teaches us about technology is that technology is also never deterministic. You can use the same technologies to create very different kinds of societies.
00:41:58
Speaker
We saw that in the 20th century, when the same technologies were used to build communist dictatorships and liberal democracies, there was no real technological difference between the USSR and the USA. It was just people making different decisions what to do with the same technology. I don't think that the new technology is inherently anti-democratic or inherently anti-liberal. It really is about choices that people make, even in what kind of technological tools to develop.
00:42:27
Speaker
If I think about, again, AI and surveillance at present, we see all over the world that corporations and governments are developing AI tools to monitor individuals. But technically, we can do exactly the opposite. We can create tools that monitor and survey governments and corporations in the service of individuals, for instance, to fight corruption in government.
00:42:53
Speaker
As an individual, it's very difficult for me to, say, monitor nepotism, politicians appointing all kinds of family members to lucrative positions in the government or in the civil service. But it should be very easy to build an AI tool that goes over the immense amount of information involved. And in the end, you just get a simple application on your smartphone, you enter the name of a politician, and you immediately see within two seconds
00:43:22
Speaker
who he or she appointed from their family and friends, and to what positions. It should be very easy to do it. I don't see the Chinese government creating such an application anytime soon, but people can create it. Or if you think about the fake news epidemic, basically what's happening is that corporations and governments are hacking us in their service.
00:43:45
Speaker
But the technology can work the other way around. We can develop an antivirus for the mind. The same way we developed antivirus for the computer, we need to develop an antivirus for the mind, an AI system that serves me, and not a corporation or a government.
00:44:04
Speaker
and it gets to know my weaknesses in order to protect me against manipulation. At present, what's happening is that the hackers are hacking me, they get to know my weaknesses, and that's how they are able to manipulate me, for instance, with fake news.
00:44:21
Speaker
If they discover that I already have a bias against immigrants, they show me one fake news story, maybe about a group of immigrants raping local women, and I easily believe that because I already have this bias.
00:44:34
Speaker
My neighbor may have an opposite bias. She may think that anybody who opposes immigration is a fascist. And the same hackers will find that out and will show her a fake news story about, I don't know, right-wing extremists murdering immigrants. And she will believe that. And then if I meet my neighbor, there is no way we can have a conversation about immigration.
00:44:58
Speaker
Now, we can and should develop an AI system that serves me and my neighbor and alerts us: look, somebody's trying to hack you, somebody's trying to manipulate you. And if we learn to trust that this system serves us, and doesn't serve any corporation or government, it's an important tool in protecting our minds from being manipulated.
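As a purely hypothetical illustration of the "antivirus for the mind" Yuval sketches here, one could imagine a tool that keeps a user-controlled profile of known biases and flags incoming stories that pander to them too well. Every name, score, and threshold below is invented for illustration; a real system would need an actual trained classifier and, as Yuval stresses, would have to serve the user rather than a corporation or government:

```python
# Hypothetical sketch only: flag content that panders strongly to a user's
# known biases. The keyword-matching "classifier" is a crude stand-in for
# a real model; all names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Alert:
    headline: str
    reason: str

def bias_match_score(headline: str, bias_keywords: list[str]) -> float:
    """Crude stand-in for a real classifier: fraction of bias keywords present."""
    text = headline.lower()
    hits = sum(1 for kw in bias_keywords if kw.lower() in text)
    return hits / max(len(bias_keywords), 1)

def scan_feed(feed: list[str], bias_keywords: list[str], threshold: float = 0.5) -> list[Alert]:
    """Warn the user when a story matches their bias profile suspiciously well."""
    return [
        Alert(h, f"matches {bias_match_score(h, bias_keywords):.0%} of your bias triggers; "
                 "someone may be trying to manipulate you")
        for h in feed
        if bias_match_score(h, bias_keywords) >= threshold
    ]

# Example: a user already primed against immigrants gets warned before
# engaging with a story tailored to that bias.
alerts = scan_feed(
    ["Immigrants rampage through quiet town", "City council approves new park"],
    bias_keywords=["immigrants", "rampage"],
)
for a in alerts:
    print(a.headline, "->", a.reason)
```

The key design point, per the conversation, is that the bias profile belongs to and is controlled by the user, so the tool's knowledge of your weaknesses protects you instead of being used against you.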
00:45:24
Speaker
Another tool in the same field, we are now basically feeding enormous amounts of mental junk food to our minds.
00:45:31
Speaker
We spend hours every day basically feeding our hatred, our fear, our anger. And that's a terrible and stupid thing to do. The thing is that people discovered that the easiest way to grab our attention is by pressing the hate button in the mind or the fear button in the mind. And we are very vulnerable to that. Now, just imagine that somebody develops a tool
00:45:57
Speaker
that shows you what's happening to your brain or to your mind as you are watching these YouTube clips. Maybe it doesn't block you. It's not Big Brother that blocks all these things. It's just like when you buy a product and it shows you how many calories are in the product.
00:46:17
Speaker
and how much saturated fat and how much sugar there is in the product. So at least in some cases, you learn to make better decisions. Just imagine that you have this small window in your computer which tells you what's happening to your brain as you're watching this video.
00:46:36
Speaker
And what's happening to your levels of hatred or fear or anger? And then make your own decision. But at least you are more aware of what kind of food you're giving to your mind. Yeah, this is something I am also very interested in seeing more of AI systems that empower the individual in all the ways that you mentioned.
00:46:57
Speaker
We at the Future of Life Institute are actually very interested in supporting this kind of thing on the nerdy technical side. And I think this also drives home this very important fact that technology is not good or evil. Technology is an amoral tool that can be used both for good things and for bad things. That's exactly why I feel it's so important that we develop the wisdom to use it for good things rather than bad things.
00:47:19
Speaker
In that sense, AI is no different from fire, which can be used for good things

Steering Technology for Good

00:47:24
Speaker
and for bad things. But we as a society have developed a lot of wisdom now in fire management. We educate our kids about it. We have fire extinguishers and fire trucks. And with artificial intelligence and other powerful tech, I feel we need to do better at similarly developing the wisdom that can steer the technology towards better uses.
00:47:40
Speaker
Now we're reaching the end of the hour here, so I'd like to finish with just two more questions. One of them is about what we ultimately want it to mean to be human as we get ever more tech. You put it so beautifully, I think it was in Sapiens, that tech progress is gradually taking us beyond asking what we want to asking instead what we want to want. And I guess even more broadly, how we want to brand ourselves, how we want to think about ourselves as humans in a high-tech future.
00:48:08
Speaker
I'm quite curious, first of all, you personally, if you think about yourself in 30 years, 40 years, what do you want to want? And what sort of society would you like to live in, in say 2060, if you could have it your way? It's a profound question. It's a difficult question. My initial answer is that I would really like not just to know the truth about myself, but to want to know the truth about myself.
00:48:38
Speaker
Usually the main obstacle to knowing the truth about yourself is that you don't want to know it. It's always accessible to you. I mean, we've been told for thousands of years by, you know, all the big names in philosophy and religion, and almost all of them say the same thing: get to know yourself better. It's maybe the most important thing in life.
00:48:59
Speaker
We haven't really progressed much in the last thousands of years. And the reason is that, yes, we keep getting this advice, but we don't really want to do it. Working on our motivation in this field, I think would be very good for us. It will also protect us from all the naive utopias, which tend to draw far more of our attention. I mean, especially as technology will give us, or at least some of us,
00:49:26
Speaker
more and more power, the temptations of naive utopias are going to be more and more irresistible. And I think the really most powerful check on these naive utopias is really getting to know yourself better.
00:49:41
Speaker
Would you like what it means to be Yuval 2060 to be more on the hedonistic side, that you have all these blissful experiences and serene meditation and so on? Or would you like there to be a lot of challenges in there that gives you a sense of meaning or purpose? Would you like to be somehow upgraded with technology?
00:50:03
Speaker
None of the above. I mean, at least if I think deeply enough about these issues. Yes, I would like to be upgraded, but only in the right way. And I'm not sure what the right way is.
00:50:14
Speaker
I'm not a great believer in blissful experiences, in meditation or otherwise. They tend to be traps, in the sense that this is what we've been looking for, you know, all our lives. For millions of years, all the animals have constantly looked for blissful experiences, and after a couple of million years of evolution, it doesn't seem that it brings us anywhere.
00:50:36
Speaker
And especially in meditation, you learn that these kinds of blissful experiences can be the most deceptive, because you fall under the impression that this is the goal you should be aiming at. Like, this is a really good meditation, this is a really deep meditation, simply because you're very pleased with yourself. And then you spend countless hours later on trying to get back there, or regretting that you're not there. And in the end, it's just another experience.
00:51:04
Speaker
What we experience right now when we are now talking on the phone to each other and I feel something in my stomach and you feel something in your head, this is as special and amazing as the most blissful experience of meditation. The only difference is that we've gotten used to it, so we are not amazed by it. But right now we are experiencing the most amazing thing in the universe, and we just take it for granted.
00:51:30
Speaker
partly because we are distracted by this notion that out there, there is something really, really special that we should be experiencing. So I'm a bit suspicious of blissful experiences. Again, I would just basically repeat that to really understand yourself also means to really understand the nature of these experiences.
00:51:51
Speaker
And if you really understand that, then so many of these big questions will be answered. Similarly, the question that we dealt with in the beginning of how to evaluate different experiences and what kind of experiences should we be creating for humans or for artificial consciousness. For that, you need to deeply understand the nature of experience. Otherwise, there are so many naive utopias that can tempt you. So I would focus on that.
00:52:21
Speaker
When I say that I want to know the truth about myself, it really also means to understand the nature of these experiences. To my very last question, coming back to the story and ending on a positive, inspiring note: I've been thinking back about when new stories led to very positive change, and I started thinking about a particular Swedish story. So the year was 1945. People were looking at each other all over Europe saying, we screwed up again.
00:52:50
Speaker
People were saying: instead of using all this great technology to build ever more powerful weapons, how about we use it to create a society that benefits everybody, where we can have free healthcare, free university for everybody, free retirement, and build a real welfare state?
00:53:08
Speaker
And I'm sure there were a lot of people around saying, ah, that's just a hopeless naive dreamer, you know, go smoke some weed and hug a tree, because it's never going to work, right? But this story, this optimistic vision, was sufficiently concrete, and sufficiently bold and realistic-seeming, that it actually caught on. We did this in Sweden.
00:53:29
Speaker
And it actually conquered the world, not like when the Vikings tried and failed to do it with swords, but this idea conquered the world, right? So now so many rich countries have copied this idea. I keep wondering if there is another new vision or story like this, some sort of welfare 3.0, which incorporates all of the exciting new technology that has appeared since 1945,
00:53:49
Speaker
on the biotech side, on the AI side, et cetera, to envision a society which is truly bold and sufficiently appealing that people around the world could rally around it. I feel that a shared positive experience is something that, more than anything else, can really help foster collaboration around the world. And I'm curious what you would say in terms of what you think of as a bold, positive vision for the planet now, moving away from what you spoke about earlier about yourself personally, getting to know yourself and so on.
00:54:20
Speaker
I think we can aim towards what you define as welfare 3.0, which is again based on a better understanding of humanity. The welfare state which many countries have built over the last decades has been an amazing human achievement.
00:54:36
Speaker
And it achieved many concrete results in fields that we knew what to aim for, like in healthcare. So, okay, let's vaccinate all the children in the country and let's make sure everybody has enough to eat. We succeeded in doing that.
00:54:52
Speaker
And a welfare 3.0 program would try to expand that to other fields in which our achievements are far more moderate, simply because we don't know what to aim for. We don't know what we need to do. If you think about mental health, it's much more difficult than providing food to people, because we have a very poor understanding of the human mind and of what mental health is.
00:55:16
Speaker
Even if you think about food, one of the scandals of science is that we still don't know what to eat. So we have basically solved the problem of having enough food; now we actually have the opposite problem, of people eating too much rather than too little. But beyond the mere question of quantity, it is, I think, one of the biggest scandals of science that after centuries we still don't know what we should eat.
00:55:38
Speaker
And mainly because so many of these miracle diets are one-size-fits-all, as if everybody should eat the same thing, whereas obviously it should be tailored to individuals. So if you harness the power of AI and big data and machine learning and biotechnology, you could create the best dietary system in the world, one that tells people individually what would be good for them to eat. And this would have enormous side benefits
00:56:06
Speaker
in reducing medical problems, in reducing waste of food and resources, in helping with the climate crisis, and so forth. So this is just one example. Yeah, just on that example, I would argue that part of the problem goes beyond the fact that we just don't know enough: there are also a lot of lobbyists telling people what to eat, knowing full well that it's bad for them,
00:56:30
Speaker
just because that way they'll make more of a profit, which gets back to your question of hacking, of how we can prevent ourselves from getting hacked by powerful forces that don't have our best interests in mind. But the things you mentioned seemed like a bit of a first-world perspective, which is easy to get when we live in Israel or Sweden. But of course, there are many people on the planet who still live in pretty miserable situations, where we actually can quite easily articulate how to make things at least a bit better.
00:56:56
Speaker
But then also in our societies, since you touched on mental health, there's been a significant rise in depression in the United States. Life expectancy in the US has gone down three years in a row, which does not suggest that people are getting happier here.
00:57:11
Speaker
I'm wondering whether, in the positive vision of the future that we can hopefully end on here, you would also want to throw in some ingredients about a sort of society where we don't just have the lowest rungs of the Maslow pyramid taken care of, food and shelter and stuff, but where we also feel meaning and purpose and meaningful connections with our fellow life forms.
00:57:31
Speaker
I think it's not just a first-world issue. Again, even if you think about food, even in developing countries, more people today die from diabetes and diseases related to overeating or to being overweight than from starvation. And mental health issues are certainly not just a problem for the first world. People are suffering from them in all countries. Part of the issue is that mental health care is far, far more expensive,
00:57:57
Speaker
certainly if you think in terms of going to therapy once or twice a week, than just giving vaccinations or antibiotics. So it's much more difficult to create a robust mental health system in poor countries, but we should aim there. It's certainly not just for the first world. And if we really understand humans better, we can provide much better health care, both physical health and mental health, for everybody on the planet, not just for Americans or Israelis or Swedes.
00:58:26
Speaker
In terms of physical health, it's usually a lot cheaper and simpler not to treat diseases, but instead to prevent them from happening in the first place, by reducing smoking, reducing consumption of extremely unhealthy foods, et cetera. And in the same way with mental health, presumably a key driver of a lot of the problems we have is that we have put ourselves in a human-made environment which is incredibly different from the environment that we evolved to flourish in.
00:58:55
Speaker
I'm wondering, rather than just trying to develop new pills to help us live in this environment, which is often optimized for the ability to produce stuff rather than for human happiness, whether you think that deliberately changing our environment to be more conducive to human happiness might improve our happiness a lot without having to treat mental health disorders.
00:59:16
Speaker
It would demand enormous amounts of resources and energy, but if you're looking for a big project for the 21st century, then yeah, that's definitely a good project to undertake.
00:59:29
Speaker
Okay. That's probably a good challenge from you on which to end this conversation. I'm extremely grateful for having had this opportunity to talk to you about these things. These are ideas I'm going to continue thinking about with great enthusiasm for a long time to come. And I very much hope we can stay in touch, and actually meet in person before too long. Yeah, thank you for hosting me. I really can't think of anyone on the planet who thinks more profoundly about the big picture of the human condition than you, and it's such an honor.
00:59:58
Speaker
Thank you, it was a pleasure for me too. There are not a lot of opportunities to really go deeply into these issues. I mean, usually we get pulled away to questions about the 2020 presidential elections and things like that, which are important, but you know, we still have to give some time to the big picture as well. Yeah, wonderful. So once again, thank you so much.
01:00:23
Speaker
Thanks so much for tuning in and being a part of our final episode of 2019. Warm wishes for a happy and healthy new year from myself and the rest of the Future of Life Institute team. This podcast is possible because of the support of listeners like you, so if you found this podcast and conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org slash donate.
01:00:52
Speaker
Contributions like yours make these conversations possible.