
On the Future: An Interview with Martin Rees

Future of Life Institute Podcast
How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks? In this special podcast episode, Ariel speaks with cosmologist Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity's existential risks and the role that technology plays in determining our collective future. Topics discussed in this episode include:

- Why Martin remains a technical optimist even as he focuses on existential risks
- The economics and ethics of climate change
- How AI and automation will make it harder for Africa and the Middle East to economically develop
- How high expectations for health care and quality of life also put society at risk
- Why growing inequality could be our most underappreciated global risk
- Martin's view that biotechnology poses greater risk than AI
- Earth's carrying capacity and the dangers of overpopulation
- Space travel and why Martin is skeptical of Elon Musk's plan to colonize Mars
- The ethics of artificial meat, life extension, and cryogenics
- How intelligent life could expand into the galaxy
- Why humans might be unable to answer fundamental questions about the universe
Transcript

Introduction to Existential Risks

00:00:08
Speaker
Hello, I am Ariel Conn with the Future of Life Institute. Now, our podcasts lately have dealt with artificial intelligence in some way or another, with a few focusing on nuclear weapons. But FLI is really an organization about existential risks, and especially x-risks that are the result of human action. And these cover a much broader field than just artificial intelligence.

Martin Rees and the Future of Technology

00:00:31
Speaker
So I'm excited to be hosting a special segment of the FLI podcast with Martin Rees, who has just come out with a book that looks at the ways technology and science could impact our future, both for good and bad. Martin is a cosmologist and space scientist. His research interests include galaxy formation, active galactic nuclei, black holes, gamma ray bursts, and more speculative aspects of cosmology. He's based in Cambridge, where he has been director of the Institute of Astronomy.
00:01:00
Speaker
and Master of Trinity College. He was president of the Royal Society, which is the UK's academy of science, from 2005 to 2010. In 2005, he was also appointed to the UK's House of Lords. He holds the honorary title of Astronomer Royal. He has received many international awards for his research and belongs to numerous academies, including the National Academy of Sciences, the Russian Academy, the Japan Academy, and the Pontifical Academy.
00:01:27
Speaker
He's on the board of the Princeton Institute for Advanced Study and has served on many bodies connected with international collaboration in science. He is especially concerned with threats stemming from humanity's ever heavier footprint on the planet and the runaway consequences of ever more powerful technologies. He's written seven books for the general public, and his most recent book is about these threats. It's the reason that I've asked him to join us today. So first, Martin, thank you so much for talking with me today.
00:01:58
Speaker
Good to be in touch. So your new book is called On the Future: Prospects for Humanity.

Science and Civilization's Challenges

00:02:03
Speaker
And in his endorsement of the book, Neil deGrasse Tyson says, from climate change to biotech to artificial intelligence, science sits at the center of nearly all decisions that civilization confronts to assure its own survival.
00:02:16
Speaker
And I really liked this quote because I felt like it sums up what your book is about. Basically, science and the future are too intertwined to really look at one without the other. And whether the future turns out well, or whether it turns out to be the destruction of humanity, science and technology will likely have had some role to play. So first off, do you agree with that sentiment? Again, am I accurate in that description?
00:02:41
Speaker
I certainly agree, and that's true of this century more than ever before, because of the greater scientific knowledge we have and the greater power to use it for good or ill. The technologies are tremendously advanced, and they could be misused by a small number of people.
00:02:57
Speaker
You've written in the past about how you think we have essentially a 50-50 chance of some sort of existential risk. One of the things that I noticed about this most recent book is you talk a lot about the threats, but to me it felt still like an optimistic book. So I was wondering if you could talk a little bit about, and this might be jumping ahead a bit, the overall message you're hoping that people take away.

Technological Optimism vs. Political Reality

00:03:21
Speaker
Well, I'd describe myself as a technical optimist, but political pessimist because it is clear that we wouldn't be living such good lives today with seven and a half billion people on the planet if we didn't have the technology which has been developed in the last hundred years. And clearly there's a tremendous prospect of better technology in the future. But on the other hand, what is depressing is the very big gap between the way the world could be and the way the world actually is.
00:03:49
Speaker
In particular, even though we have the power to give everyone a decent life, the lot of the bottom billion people in the world is pretty miserable, and it could be alleviated a lot simply by the money owned by the thousand richest people in the world. So we have a very unjust society.
00:04:07
Speaker
And the politics is not optimizing the way technology is used for human benefit. So my view is that it's the politics which is an impediment to the best use of technology. And the reason this is important is that as time goes on, we're going to have a growing population, which is ever more demanding of energy and resources, putting more pressure on the planet and its environment and its climate, but we are also
00:04:33
Speaker
going to have to deal with this if we are to allow people to survive and avoid some serious tipping points being crossed. So that's the problem of the collective effect of us on the planet. But there's another effect, which is that these new technologies, especially bio, cyber, and AI, allow small groups or even individuals to have an effect by error or by design, which could cascade very broadly, even globally. And this I think
00:05:01
Speaker
makes our society very brittle, because we're very interdependent, and on the other hand it's easy for there to be a breakdown. That's where the pessimism comes in: a gap between the way things could be and the downsides if we collectively overreach ourselves or if individuals cause disruption. So you mentioned actually quite a few things that I'm hoping to touch on as we continue to talk.

Ethical and Economic Debate on Climate Change

00:05:24
Speaker
I'm almost inclined, before we get too far into some of the specific topics, to bring up an issue that I personally have, and it's connected to a comment that you make in the book. I think you were talking about climate change at the time, and you say that if
00:05:40
Speaker
we heard that there was a 10% chance that an asteroid would strike in 2100, people would do something about it. We wouldn't say, oh, technology will be better in the future, so let's not worry about it now. And apparently I'm very cynical, because I think that's exactly what we would do. And I'm curious what makes you feel more hopeful that even if it was something really specific like that, we would actually do something and not just constantly postpone the problem to some future generation.
00:06:08
Speaker
Well, I mean, I agree. We might not, even in that case, but the reason I gave that as a contrast to our response to climate change is that there you could imagine a really sudden catastrophe happening if the asteroid does hit. Whereas the problem with climate change is really that, first of all, the effect is mainly going to be several decades in the future. It's started to happen, but the really severe consequences are decades away. But also there's an uncertainty.
00:06:33
Speaker
And it's not a sort of sudden event we can easily visualize. And it's not at all clear, therefore, how we are actually going to do something about it. In the case of the asteroid, it would be clear what the strategy would be to try and deal with it. Whereas in the case of climate, there are lots of ways. And the problem is that the consequences are decades away and they're global.
00:06:56
Speaker
And most of the political focus obviously is on short-term worries, short-term problems, and on national or more local problems. And anything we do about climate change will have an effect which is mainly for the benefit of people in quite different parts of the world 50 years from now. And it's hard to keep those issues up the agenda when there are so many urgent things to worry about. I think you're maybe right that even if there was a threat of an asteroid there may be the same sort of torpor
00:07:24
Speaker
and failure to deal with it, but I thought that's an example of something where it would be easier to appreciate that it would really be a disaster. In the case of the climate, it's not so obviously going to be a catastrophe that people are motivated now to start thinking about it.
00:07:39
Speaker
I've heard it go both ways: that either climate change is, yes, obviously going to be bad, but it's not an existential risk, therefore those of us who are worried about existential risk don't need to worry about it. But then I've also heard people say, no, this could absolutely be an existential risk if we don't prevent runaway climate change. I was wondering if you could talk a bit about what worries you most regarding climate. First of all, I don't think it is an existential risk, but it's one we should worry about. And one point I make in my book is that I think the debate
00:08:09
Speaker
which makes it hard to have an agreed policy on climate change, stems not so much from differences about the science, but from differences about ethics and economics. There's some people, of course, who completely deny the science, but most people accept that CO2 is warming the planet, and most people accept there's quite a big uncertainty, a factor of two uncertainty, about how much warming you get for a given increase in CO2.
00:08:35
Speaker
But even among those who accept the IPCC projections of climate change and the uncertainties therein, I think there's a big debate, and the debate is really between people who apply a standard economic discount rate, where you discount the future at a rate of, say, 5%, and those who think we shouldn't do it in this context. If you apply a 5% discount rate, as you would if you were deciding whether it's worth putting up an office building or something like that,
00:09:01
Speaker
then, of course, you don't give any weight to what happens after about, say, 2050. And as Bjørn Lomborg, the well-known "skeptical environmentalist," argues, we should therefore give a lower priority to dealing with climate change than to helping the world's poor in other more immediate ways. And he is consistent, given his assumptions about the discount rate. But many of us would say that in this context we should not discount the future so heavily, we should
00:09:29
Speaker
care about the life chances of a baby born today as much as we care about the life chances of those of us who are now middle-aged and won't be alive at the end of the century. And we should also be prepared to pay an insurance premium now in order to remove or reduce the risk of the worst-case climate scenarios. So I think the debate about what to do about climate change is essentially ethics. Do we want to discriminate on grounds of date of birth and not care about the life chances of those who are now babies?
00:09:59
Speaker
or are prepared to make some sacrifices now in order to reduce a risk which they might encounter in later life.
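To make the discounting arithmetic concrete, here is a minimal sketch (an illustration added to this transcript, not a calculation from the episode; the 5% figure is the "standard" rate mentioned above):

```python
# Illustrative sketch of the discounting point above: at a standard 5%
# annual discount rate, present value shrinks as 1 / 1.05**years, so
# harms that are decades away barely register.
for years_ahead in (10, 30, 50, 80):
    present_value = 1 / 1.05 ** years_ahead
    print(f"$1 of climate damage {years_ahead} years out "
          f"is worth ${present_value:.2f} today")
# Output: $0.61, $0.23, $0.09, $0.02 -- by 2100 the damage is nearly
# invisible to the calculation, which is why the choice of discount
# rate is an ethical question, not a scientific one.
```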
00:10:06
Speaker
And do you think the risks are only going to be showing up that much later?

Global Inequality and Technology

00:10:11
Speaker
I mean, we are already seeing these really heavy storms striking. You know, we've got Florence in North Carolina right now, and there's a super typhoon that hit southern China and the Philippines. We had Maria, and I'm losing track of all the hurricanes that we've had. We've had these huge hurricanes over the last couple of years. We saw California and much of the west coast of the US just in flames this year.
00:10:34
Speaker
Do you think we really need to wait that long? I think it's generally agreed that extreme weather is now happening more often as a consequence of climate change and the warming of the ocean.
00:10:44
Speaker
and that this will become a more serious trend. But by the end of the century, of course, it could be very serious indeed. And the main threat is, of course, to people in the disadvantaged parts of the world. I mean, if we take these recent events, it's been far worse in the Philippines than in the United States, because they're not prepared for it and their houses are more fragile, etc.
00:11:07
Speaker
I don't suppose you have any thoughts on how we get people to care more about others, because in general it does seem to come down to worrying about myself versus worrying about other people. The richer countries are the ones who are causing more of the climate change, and it's the poorer countries who seem to be suffering more. And then, of course, there's the issue of the people who are alive now versus the people in the future. That's right, yes. I think most people do care about their children and grandchildren.
00:11:33
Speaker
And so to that extent, they do care about what things will be like at the end of the century. But as you say, the extra political problem is that the cause of the CO2 emissions is mainly what's happened in the advanced countries. And the downside is going to be more seriously felt by those in remote parts of the world. And it's easy to overlook them and hard to persuade people that we ought to make a sacrifice which will be mainly for their benefit.
00:12:00
Speaker
I think, incidentally, that's one of the other things that we have to ensure happens: a narrowing of the gap between the lifestyles and the economic advantages in the advanced and the less advanced parts of the world. I think that's going to be in everyone's interest, because if there continues to be great inequality, not only will the poorer people be more subject to threats like climate change, but I think there's going to be massive and well-justified discontent.
00:12:28
Speaker
Because unlike in the earlier generations, they are aware of what they're missing. They all have mobile phones. They all know what it's like. And I think there's going to be embitterment leading to conflict if we don't narrow this gap. And this requires, I think, a sacrifice on the part of the wealthy nations to subsidize developments in these poorer countries, especially in Africa.
00:12:49
Speaker
So that sort of ties into another question that I had for you. And that is, what do you think is the most underappreciated threat that maybe isn't quite as obvious? I mean, you mentioned the fact that we have these people in poorer countries who are able to more easily see what they're missing out on. I mean, inequality is a problem in and of itself, but also just that people are more aware of the inequality seems like a threat that we might not be as aware of. Are there others that you think are sort of underappreciated? Yes.
00:13:19
Speaker
Just to go back, that threat is, of course, very, very serious, because by the end of the century there might be 10 times as many people in Africa as in Europe. And of course, they would then have every justification in migrating towards Europe, with the result of huge disruption. And so we do have to care about those sorts of issues.
00:13:39
Speaker
I think there are all kinds of reasons, apart from straight ethics, why we should ensure that the less developed countries, especially in Africa, do have a chance to close the gap. And incidentally, one thing which is a handicap for them is that they won't have the route to prosperity followed by the so-called Asian Tigers.
00:13:56
Speaker
which were able to have high economic growth by undercutting the labor costs in the West. Now what's happening is that with robotics, it's possible to, as it were, re-shore lots of manufacturing industry back to wealthy countries. And so Africa and the Middle East won't have the same opportunity the far eastern countries did to catch up by undercutting the cost of production in the West.
00:14:19
Speaker
So this is another reason why it's going to be a big challenge, and that's something which I think we don't worry about enough and need to worry about. Because if these inequalities persist when everyone is able to move easily and knows exactly what they're missing, then that's a recipe for a very dangerous and disrupted world. So I would say that is an underappreciated threat.
00:14:39
Speaker
Another thing I would count as important is that we are, as a society, very brittle and very unstable because of high expectations. Let me give you another example. Suppose there were to be a pandemic, not necessarily a genetically engineered terrorist one, but a natural one.
00:14:58
Speaker
In contrast to what happened in the 14th century, when the bubonic plague, the Black Death, occurred and killed nearly half the people in certain towns and the rest went on fatalistically, if we had some sort of plague which affected even 1% of the population of the United States, there'd be complete social breakdown, because that would overwhelm the capacity of hospitals, and people, unless they were wealthy, would feel they weren't getting their entitlement of health care.
00:15:25
Speaker
And if that is a matter of life and death, that's a recipe for social breakdown. So I think given the high expectations of people in the world, then we are far more vulnerable to the consequences of these breakdowns and pandemics and the failures of electricity grids, etc., than in the past when people were more robust and more fatalistic.
00:15:46
Speaker
That's really interesting. So, I mean, essentially because we expect to be leading these better lifestyles, just that expectation could be our downfall if something goes wrong.
00:15:57
Speaker
That's right. And of course, if we know that there are cures available for some disease, and there's not the hospital capacity to offer them to all the people who are victims of the disease, then naturally that's a matter of life and death, and that is going to promote social breakdown. And this is a new threat, which is of course the downside of the fact that we can at least cure some people.
00:16:17
Speaker
So there's two directions that I want to go with this.

Biotechnology Risks and Regulations

00:16:21
Speaker
I'm going to start with just transitioning now to biotechnology. I want to come back to issues of overpopulation and improving healthcare in a little bit. But first, I want to touch on biotech threats. One of the things that's been a little bit interesting for me is when I first started at FLI three years ago, we were very concerned about biotechnology. CRISPR was really big. It had just sort of exploded onto the scene.
00:16:47
Speaker
And now, three years later, I'm not hearing quite as much about the biotech threats. And I'm not sure if that's because something has actually changed, or if it's just because at FLI I've become more focused on AI, and therefore stuff is happening but I'm not keeping up with it. I was wondering if you could sort of talk a bit about what some of the risks you see today are with respect to biotech. Yes. Well, let me say I think we should worry far more about bio threats than about AI, in my opinion.
00:17:15
Speaker
And I think as far as the bio threats are concerned, there are these new techniques. I mean, CRISPR, of course, is a very benign technique if it's used to remove a single damaging gene that gives you a particular disease. And also it's less objectionable than traditional GM because it doesn't cross the species barrier in the same way. But it does allow things like gene drive, where you make a species extinct by making it sterile.
00:17:41
Speaker
That's good if you're wiping out a mosquito that carries a deadly virus, but there's a risk of some effect which distorts the ecology and has a cascading consequence. So there are risks of that kind, but more important, I think, there is
00:17:56
Speaker
a risk of the misuse of these techniques. I mean, not just CRISPR, but for instance the gain-of-function techniques that were used in 2011 in Wisconsin and in Holland to make the influenza virus both more virulent and more transmissible. Things like that, which can be done in a more advanced way now, I'm sure. These are clearly potentially dangerous. Even if experimenters have a good motive, the viruses might escape. And of course, they are the kind of things which could be misused.
00:18:24
Speaker
And there have, of course, been lots of meetings. You've probably been at some to discuss among scientists what the guidelines should be, how can we ensure responsible innovation in these technologies. These are modeled on the famous conference in Asilomar in the 1970s, when recombinant DNA was first being discussed. And the academics who worked in that area, they agreed on sort of a cautious stance and a moratorium on some kinds of experiments.
00:18:51
Speaker
But now they're trying to do the same thing, and there's a big difference. One is that these sciences are now more global. It's not just a few people in North America and Europe. They're global, and there are strong commercial pressures. And the techniques are far more widely understood; biohacking is almost a student recreation.
00:19:09
Speaker
And so this means, in my view, that is a big danger because even if we have regulations about certain things that can't be done because they're dangerous, enforcing those regulations globally is going to be as hopeless as it is now to enforce the drug laws or to enforce the tax laws globally. Something which can be done will be done by someone somewhere, whatever the regulations say. And I think this is very scary. Consequences could cascade globally.
00:19:37
Speaker
Do you think that the threat is more likely to come from something happening accidentally or intentionally? I don't know. I think it could be either. Certainly there could be something accidental from gene drive or releasing some dangerous virus. But I think if we imagine it happening intentionally, then we've got to ask what sort of people might do it.
00:19:59
Speaker
Governments don't use biological weapons because you can't predict how they will spread and who they will actually kill, and that would be an inhibiting factor for any terrorist group that had well-defined aims. But my worst nightmare is some person, and there are some, who thinks that there are too many human beings on the planet.
00:20:19
Speaker
And if they combine that view with the mindset of extreme animal rights people, etc., they might think it would be a good thing for Gaia, for Mother Earth, to get rid of a lot of human beings. And they're the kind of people who, with access to this technology, might have no compunction in releasing a dangerous pathogen. So this is the kind of thing that worries me.
00:20:41
Speaker
I find that interesting because it ties into the other question that I'd wanted to ask you about, and that is the idea of overpopulation. I've read it both ways, that overpopulation is in and of itself something of an existential risk or a catastrophic risk because we just don't have enough resources on the planet.

Overpopulation and Sustainable Living

00:20:57
Speaker
You actually made an interesting point, I thought, in your book where you point out that we've been thinking that there aren't enough resources for a long time and yet we keep getting more people and we still have plenty of resources.
00:21:08
Speaker
So I thought that was sort of interesting and reassuring. But I do think at some point that does become an issue. And then at the same time, we're seeing this huge push, understandably, for improved health care and expanding lifespans and trying to save as many lives as possible and making those lives last as long as possible. How do you resolve those two sides of the issue?
00:21:30
Speaker
It's true, of course, as you imply, that the population has risen; it's doubled in the last 50 years. And there were doomsayers in the 1960s and 70s who predicted mass starvation by now, and there hasn't been, because food production has more than kept pace. And if there are famines today, as of course there are, it's not because of overall food shortages; it's because of wars or maldistribution of the money to buy the food. So up to now, things have gone fairly well.
00:21:58
Speaker
But clearly, there are limits to the food that can be produced on the Earth. And all I would say is that we can't really say what the carrying capacity of the Earth is, because it depends so much on the lifestyle of the people. As I say in the book, the world couldn't sustainably have two billion people if they all lived like present-day Americans, using as much energy and burning as much fossil fuels and eating as much beef.
00:22:23
Speaker
On the other hand, you can imagine lifestyles which are very sort of austere, where the earth could carry 10 or even 20 billion people. So we can't set a limit. All we can say is that, given that it's fairly clear the population is going to rise to about 9 billion by 2050, and it may go on rising still after that, we've got to ensure that the way in which the average person lives is less profligate in terms of energy and resources.
00:22:52
Speaker
Otherwise, there will be problems. And I think we ought to do what we can to ensure that after 2050 the population turns around and goes down. The less benign scenario is one where it goes on rising, as it may if people choose to have large families, even when they have the choice.
00:23:09
Speaker
That could happen. And of course, as you say, life extension is going to have an effect on society generally, but obviously on the overall population too. So I think it would be more benign if the population of 9 billion in 2050 was a peak and it started going down after that.
00:23:24
Speaker
I'm not hopeless, because the actual number of births per year has already started going down. The reason the population is going up is because more babies survive, and most of the people in the developing world are still young. And if they live as long as people in advanced countries do, then of course that's going to increase the population, even for a steady birth rate.
00:23:45
Speaker
So that's why, unless there's a real disaster, we can't avoid the population rising to about nine billion. But I think policies can have an effect on what happens after that. And I think we do have to try to make people realize that having large numbers of children has negative externalities, to use the economic jargon, and is going to be something that puts extra pressure on the world and affects our environment in a detrimental way.
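A rough way to see the demographic momentum described here, with a minimal sketch (illustrative numbers added for this transcript, not figures from the episode): a stationary population is roughly births per year times life expectancy, so longer lives alone push the total up even when births are flat.

```python
# Sketch of demographic momentum: hold the number of births per year
# flat and let life expectancy rise; the steady-state population
# (births_per_year * life_expectancy) still grows.
births_per_year = 140e6  # roughly the recent worldwide figure
for life_expectancy in (50, 60, 72):
    population = births_per_year * life_expectancy
    print(f"life expectancy {life_expectancy} -> ~{population / 1e9:.1f} billion")
# ~7.0, ~8.4, ~10.1 billion: no rise in births is needed for the
# total to climb toward the 9-billion figure discussed above.
```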

Space Travel Myths

00:24:11
Speaker
So as I was reading this, especially as I was reading your section about space travel, I want to ask you about your take on whether we can just start sending people to Mars or something like that to address issues of overpopulation. As I was reading your section on that, news came out that Elon Musk and SpaceX had their first passenger for a trip around the moon, which is now scheduled for 2023.
00:24:36
Speaker
And the timing was just entertaining to me because, like I said, you have a section in your book about why you don't actually agree with Elon Musk's plan for some of this stuff. So I was hoping you could talk a little bit about why you're not as big a fan of space tourism and what you think of humanity expanding into the rest of the solar system and universe. Well, let me say that I think it's a dangerous delusion to think we can solve the Earth's problems by escaping to Mars or elsewhere.
00:25:05
Speaker
Mass emigration is not feasible. There's nowhere in the solar system which is as comfortable to live in as the top of Everest or the South Pole. So I think the idea, which was promulgated by Elon Musk and Stephen Hawking, of mass emigration is a dangerous delusion. The world's problems have to be solved here. Dealing with climate change is a doddle compared to terraforming Mars. So I don't think that's realistic.
00:25:29
Speaker
Now, the two other things about space. The first is that the practical need for sending people into space is getting less as robots get more advanced. Everyone has seen pictures of the Curiosity probe trundling across the surface of Mars, maybe missing things that a geologist would notice. But future robots will be able to do much of what a human could do, and manufacture large structures in space, etc. So the practical need to send people to space is going down.
00:25:56
Speaker
On the other hand, some people may want to go simply as an adventure.
00:26:01
Speaker
It's not really tourism, because tourism implies it's safe and routine. It'll be an adventure, like Steve Fossett or the guy who fell supersonically from a high-altitude balloon. It could be crazy people like that. And maybe this Japanese tourist is in the same style, someone who wants to have a thrill. And I think we should cheer them on, because I think it would be good to imagine a few people living on Mars. But it's never going to be as comfortable as our Earth, and we should just cheer on people like this.
00:26:27
Speaker
I personally think it should be left to private money. If I was an American, I would not support the NASA space program. It's very expensive, and it'd be undercut by private companies, which can afford to take higher risks than NASA could inflict on publicly funded civilians. So I don't think NASA should be doing manned spaceflight at all. Of course, some people would say, well, it's a national aspiration, a national goal, to show superpower preeminence by a massive space project. That is, of course, what drove the Apollo program,
00:26:57
Speaker
and the Apollo program cost about 4% of the US federal budget. Now NASA has 0.6% or thereabouts. I'm old enough to remember the Apollo moon landings. And of course, if you'd asked me back then, I would have expected that there might have been people on Mars within 10 or 15 years of that time. And there would have been, had the program been funded.
00:27:20
Speaker
But of course, there was no motive, because the Apollo program was driven by superpower rivalry, and having beaten the Russians, it wasn't pursued with the same intensity. It could be that the Chinese will, for prestige reasons, want to have a big national space program and leapfrog what the Americans did by going to Mars.
00:27:37
Speaker
That could happen. But otherwise, I think the only manned spaceflight will, and indeed should, be privately funded by adventurers prepared to go on cut-price and very risky missions.

Human Evolution and Future Technologies

00:27:49
Speaker
But we should cheer them on. The reason we should cheer them on is that if in fact a few of them do establish some sort of settlement on Mars, then they will be important for life's long-term future. Because whereas we as humans are fairly well adapted to the Earth, they will be in a place, Mars or an asteroid or somewhere, for which they are badly adapted.
00:28:08
Speaker
And therefore, they would have every incentive to use all the techniques of genetic modification and cyber technology to adapt to this hostile environment. So a new species, perhaps quite different humans, may emerge as progeny of those pioneers within two or three centuries. I think this is quite possible. And they, of course, may download themselves to be electronic. We don't know how that will happen. We all know about the possibilities of advanced intelligence in electronic form.
00:28:37
Speaker
But I think this will happen on Mars or in space. And of course, if we think about going further and exploring beyond our solar system, then that's not really a human enterprise, because human lifetimes are limited. But it is a goal which would be feasible if you were a near-immortal electronic entity. So that's the way in which our remote descendants will perhaps penetrate beyond our solar system.
00:29:02
Speaker
So as you're looking towards these longer term futures, what are you hopeful that we'll be able to achieve? Well, you say we, I think we humans will mainly want to stay on the earth, but I think intelligent life, even if it's not out there already in space, could spread through the galaxy as a consequence of what happens when a few people who go into space and are away from the regulators adapt themselves to that environment. And of course, one thing which is very important is to be aware of different timescales.
00:29:33
Speaker
Sometimes you hear people talk about humans watching the death of the sun in five billion years. That's nonsense, because the timescale for biological evolution by Darwinian selection is about a million years, thousands of times shorter than the lifetime of the sun. But more importantly, the timescale for this new kind of intelligent design, when we can redesign humans and make new species, is a technological timescale. It could be only a century.
00:30:00
Speaker
So it would only take one or two or three centuries before we have entities which are very different from human beings, if they are created by genetic modification or by downloading to electronic entities. They won't be normal humans. I think this will happen. And this, of course, will be a very important stage in the evolution of complexity in our universe, because we will go from the kind of complexity which has emerged by Darwinian selection to something quite new.
00:30:28
Speaker
This century is very special. This is a century where we might be triggering and jump-starting a new kind of technological evolution which could spread from our solar system far beyond, on a timescale very short compared to the timescale for Darwinian evolution and the timescale for astronomical evolution.
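For scale, here is a quick back-of-the-envelope comparison of the three timescales in this passage (orders of magnitude only, added for illustration):

```python
# Orders-of-magnitude comparison of the timescales discussed above.
sun_remaining = 6e9   # roughly the sun's remaining lifetime, in years
darwinian     = 1e6   # rough timescale of Darwinian selection
technological = 1e2   # a century of deliberate redesign
print(f"Darwinian selection: ~{sun_remaining / darwinian:,.0f}x shorter "
      f"than the sun's remaining lifetime")
print(f"Technological evolution: ~{darwinian / technological:,.0f}x shorter still")
# ~6,000x and ~10,000x: 'humans watching the death of the sun' assumes
# a stasis that neither biology nor technology supports.
```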
00:30:47
Speaker
So in the book, you spend a lot of time also talking about current physics theories and how those could evolve. And you spend a little bit of time talking about multiverses. I was hoping you could talk a little bit about why you think understanding that is important for ensuring this hopefully better future. Well, it's only partially linked to it. I put that in the book because I was thinking about what are the challenges,
00:31:14
Speaker
not just challenges of the practical kind, but intellectual challenges. And one point I make is that there are some scientific challenges which we are now confronting which may be beyond human capacity to solve. Because there's no particular reason to think that the capacity of our brains is matched to understanding all aspects of reality, any more than a monkey can understand quantum theory. It's possible that there'll be some fundamental aspects of nature that humans will never understand. And they will be a challenge for posthumans.
00:31:44
Speaker
And I think those challenges are perhaps more likely to be in the realm of complexity, understanding the brain, for instance, than in the context of cosmology. Although there are challenges in cosmology, such as understanding the very early universe, where we may need a new theory like string theory with extra dimensions, et cetera. And we need a theory like that in order to decide whether our big bang was the only one, or whether there were other big bangs in a kind of multiverse. And it's possible that
00:32:12
Speaker
in 50 years from now we will have such a theory and know the answer to those questions. But it could be that there is such a theory and it's just too hard for anyone to actually understand and make predictions from. So I think these issues are relevant to the intellectual constraints on humans.
00:32:30
Speaker
Is that something that you think, or hope, that things like more advanced artificial intelligence, or however we evolve in the future, will allow "us," in quotes, to understand some of these more complex ideas? Well, I think it's certainly possible that machines could actually, in a sense, create entities based on physics which we can't understand. This is perfectly possible, because obviously we know they can vastly out-compute us at the moment.
00:32:59
Speaker
So it could very well be, for instance, that there is a variant of string theory which is correct, and it's just too difficult for any human mathematician to work out, but it could be that computers could work it out and get some answers. But of course, you then come up against a more philosophical question about whether competence implies comprehension, whether a computer with superhuman capabilities is necessarily going to be self-aware and conscious, or whether it is going to be just a zombie. I mean, that's a separate question,
00:33:27
Speaker
which may not affect what they can actually do, but I think it does affect how we react to the possibility that the far future will be dominated by such things. I remember when I wrote an article in a newspaper about these possibilities, the reaction was bimodal. Some people thought, isn't it great that there'll be these even deeper intellects than human beings out there? But others, who thought these might just be zombies, thought it would be very sad if there was no entity which could actually
00:33:54
Speaker
appreciate the beauties and wonders of nature in the way we can. So it does matter in a sense to our perception of this far future if we think that these entities, which may be electronic rather than organic, will be conscious and will have the kind of awareness that we have and which makes us wonder at the beauty of the environment in which we've emerged. So I think that's a very important question.
00:34:19
Speaker
I want to pull things back to something a little bit shorter term, I guess, but still considering this idea of how technology will evolve.

Balancing Optimism with Present Challenges

00:34:28
Speaker
You mentioned that you don't think it's a good idea to count on going to Mars as a solution to our problems on Earth because all of our problems on Earth are still going to be easier to solve here than it is to populate Mars.
00:34:39
Speaker
And I think in general, we have this tendency to say, oh, well, in the future, we'll have technology that can fix whatever issue we're dealing with now, so we don't need to worry about it. And I was wondering if you could comment on that approach. To what extent can we say, well, most likely, technology will have improved and can help us solve these problems. And to what extent is that a dangerous approach to take?
00:35:02
Speaker
Well, clearly, technology has allowed us to live much better, more complex lives than we could in the past. And on the whole, the net benefits outweigh the downsides. But of course, there are downsides. And they stem from the fact that we have some people who are disruptive and some people who can't be trusted. I mean, if we had a world where everyone could trust everyone else, we could get rid of about a third of the economy, I would guess.
00:35:27
Speaker
But I think the main point is that we are very vulnerable. We have huge advances, clearly, in networking via the internet and computers, etc. And we may have the internet of things within a decade. But of course, people worry that this opens up a new kind of even more catastrophic potential for cyber terrorism. So that's just one example. And ditto for biotech,
00:35:52
Speaker
which may allow the development of pathogens which kill people of particular races or have other effects. So these technologies are developing fast, and they can be used for great benefit.
00:36:04
Speaker
But they can be misused in ways that provide new kinds of horrors that were not available in the past. And so it's by no means obvious which way things will go. Will there be a continued net benefit of technology? As I think we said, it has been up to now despite nuclear weapons, etc. Or will at some stage the downside run ahead?
00:36:27
Speaker
of the benefits. And I do worry about the latter being a possibility, particularly because of this amplification factor, the fact it only takes a few people in order to cause disruption that could cascade globally. And the world is so interconnected that we can't really have a disaster in one region without it affecting the whole world.
00:36:46
Speaker
Jared Diamond has this book Collapse, where he discusses the collapse of particular civilizations while other parts of the world were unaffected. I think if we really had some catastrophe, it would affect the whole world, not just parts of it, and that's something which is a new downside. So the stakes are getting higher as technology advances.
00:37:05
Speaker
And my book is really aimed to say that these developments are very exciting, but they pose new challenges. And I think particularly they pose challenges because a few individuals can cause more trouble. And I think it'll make the world harder to govern. It'll make cities and countries harder to govern, and there'll be a stronger tension between three things we want to achieve, which are security, privacy, and liberty. I think that's going to be a challenge for all future governments.
00:37:34
Speaker
Reading your book, I very much got the impression that it was essentially a call to action to address these issues that you just mentioned. And I was sort of curious, what do you hope that people will do after reading the book, or learning more about these issues in general? Well, first of all, I hope people can be persuaded to think long term. And I
00:37:56
Speaker
mentioned that religious groups, for instance, tend to think long term, and the papal encyclical in 2015, I think, had a very important effect on opinion in Latin America, Africa, and East Asia in the lead-up to the Paris climate conference. That's an example where someone from outside traditional politics can have an effect. What's very important is that politicians will only respond
00:38:19
Speaker
to an issue if it's prominent in the press and prominent in their inbox. So we've got to ensure that people are concerned about this. Of course, I end the book by saying what the special responsibilities of scientists are, because scientists clearly have a special responsibility to ensure that their work is safe and that the public and politicians are made aware of the implications of any discovery they make. And I think that's important, even though they should be mindful
00:38:45
Speaker
that their expertise doesn't extend beyond their special area. And that's a reason why scientific understanding, in a general sense, is something that really has to be universal; that's why education is important. Because if we want to have a proper democracy where debate about these issues rises above the level of tabloid slogans, then given that the important issues that we have to discuss involve health, energy, the environment, climate, et cetera,
00:39:14
Speaker
which have scientific aspects, then everyone has to have enough feel for those aspects to participate in the debate, and also enough feel for probabilities and statistics not to be easily bamboozled by political arguments. So I think an educated population is essential for proper democracy. Obviously that's a platitude, but the education needs to include, to a greater extent, an understanding of the scope and limits of science and technology.
00:39:42
Speaker
And I make this point at the end, and hope that this will lead to a greater awareness of these issues. And of course, for people in universities, we have a responsibility because we can influence the younger generation. And it's certainly the case that students and people under 30, who may be alive towards the end of the century, are more mindful of these concerns than the middle-aged and old. And so it's very important that activities like the effective altruism movement,
00:40:10
Speaker
80,000 Hours, and these other movements among students should be encouraged, because they are going to be important in spreading an awareness of long-term concerns. Public opinion can be changed. We can see the change in attitudes to driving and things like that, which have happened over a few decades. And I think perhaps we can have
00:40:31
Speaker
more environmental sensitivity, where it becomes regarded as rather naff or tacky to waste energy and be extravagant in consumption. I'm hopeful that attitudes will change in a positive way, but I'm concerned simply because the politics is getting very difficult, because with social media, panic and rumor can spread at the speed of light and small groups can have a global effect.
00:40:58
Speaker
And this makes it very, very hard to ensure that we can keep things stable given that only a few people are needed to cause massive disruption. That's something which is new and I think is becoming more and more serious.

Responsible AI and Biotech Innovation

00:41:12
Speaker
So we've been talking a lot about things that we should be worrying about. Do you think there are things that we are currently worrying about that we can probably just let go of, that aren't as serious?
00:41:23
Speaker
I think we need to ensure responsible innovation in all new technologies. We've talked a lot about bio, and we are very concerned about the misuse of cyber technology. As with AI, of course, there are a whole lot of concerns that we have. I personally think that
00:41:41
Speaker
The takeover of AI will be rather slower than many of the evangelists suspect, but of course we do have to ensure that humans are not victimised by some algorithm which they can't have explained to them. I think there is an awareness of this and I think what's been done by your colleagues at MIT has been very important in raising awareness of the need for responsible innovation and ethical application of AI.
00:42:06
Speaker
And also what your group has recognized is that the order in which things happen is very important. If some computer is developed and goes rogue, that's bad news. Whereas if we have a powerful computer which is under our control, then it may help us to deal with these other problems, the problems of misuse of biotech, etc. So the order in which things happen is going to be very important.
00:42:29
Speaker
But I must say I don't completely share these concerns about machines running away and taking over, because I think there's a difference from biological evolution. In biological evolution there's been a drive towards intelligence, so it has been favored, but so has aggression. In the case of computers, they may drive towards greater intelligence, but it's not obvious that that is going to be combined with aggression, because they are going to be evolving by intelligent design,
00:42:57
Speaker
not the struggle of the fittest, which is the way that we evolved. What about concerns regarding AI just in terms of being misprogrammed, and AI being extremely competent but poorly designed on our part, poor intelligent design?
00:43:12
Speaker
Well, I think in the short term, obviously, there are concerns about AI making decisions that affect people. I mean, I think most of us would say that we shouldn't be deprived of a credit rating or put in prison on the basis of some AI algorithm which can't be explained to us. We are entitled to have an explanation if something is done to us against our will. And that is why it is worrying if too much is going to be delegated to AI.
00:43:37
Speaker
And I also think that the development of self-driving cars and things of that kind is going to be constrained by the fact that these become vulnerable to hacking of various kinds. And I think it will be a long time before we will accept a driverless car on an ordinary road. Controlled environments, yes. Particular lanes on highways, yes. On an ordinary road in a traditional city, it's not clear that we will ever accept a driverless car.
00:44:07
Speaker
So I think I'm probably less bullish than maybe some of your colleagues about the speed at which the machines will really take over and be accepted, and the speed at which we'll trust ourselves to them. As I mentioned at the start, and as you mentioned at the start, you are a techno-optimist, for as much as the book is about things that could go wrong. It did feel to me like it was also sort of an optimistic look at the future. So what are you most optimistic about? What are you most hopeful for? Looking at both short-term and long-term, however you feel like answering that.
00:44:38
Speaker
I'm hopeful that biotech will have huge benefits for health, and will perhaps extend human lifespans a bit, though that's something about which we should feel a bit ambivalent, I think. So I think health, and also food. If you ask me what is one of the most benign technologies, it's to make artificial meat, for instance.
00:44:57
Speaker
It's clear that we can more easily feed a population of nine billion on a vegetarian diet than on a traditional diet like Americans consume today. And so to take one benign technology, I would say artificial meat is one, and more intensive farming, so that we can feed people without encroaching too much on the natural part of the world. And what about very long-term trends?
00:45:22
Speaker
If you think about very long-term trends, then life extension is something which, obviously, if it happens too quickly, is going to be hugely disruptive. I mean, there would be multi-generational families, etc. And also, even though we will have the capability within a century to change human beings, I think we should constrain that on Earth and just let it be done by the few crazy pioneers who go away into space. But if this does happen,
00:45:47
Speaker
then, as I say in the introduction to my book, it will be a real game changer in a sense. I make the point that one thing that hasn't changed over most of human history is human character. Evidence for this is that we can read the literature written by the Greeks and Romans more than 2,000 years ago and resonate with the people and their characters and their attitudes and emotions.
00:46:10
Speaker
It's not at all clear that on some scenarios, people 200 years from now will resonate in anything other than an algorithmic sense with the attitudes we have as humans today. And so that will be a fundamental and very fast change in the nature of humanity. The question is, can we do something to at least constrain
00:46:31
Speaker
the rate at which that happens, or at least constrain the way in which it happens, because it is going to be almost certainly possible to completely change human mentality and maybe even human physique over that time scale. One has only to listen to people like George Church to realize that it's not crazy to imagine this happening. So you mentioned in the book that there's lots of people who are interested in cryogenics, but you also talked briefly about how there are some negative

Cryogenics and Ethical Concerns

00:46:56
Speaker
effects
00:46:56
Speaker
of cryogenics, and the burden that it puts on the future. I was wondering if you could talk really quickly about that. There are some people, I know some, who have a medallion around their neck which is an injunction that, if they drop dead, they should be immediately frozen and their blood drained and replaced by liquid nitrogen. And they should then be stored (there's a company called Alcor in Arizona that does this) and allegedly be revived at some stage when technology has advanced. I find it hard to take this seriously.
00:47:26
Speaker
But they say that, well, the chance may be small, but if they don't invest this way, then the chance is zero that they have a resurrection. But I actually think that even if it worked, even if the company didn't go bust and sincerely maintained them for centuries and they could then be revived, I still think that what they're doing is selfish, because they'd be revived into a world that was very different. They'd be refugees from the past, and they'd therefore be imposing an obligation on the future.
00:47:55
Speaker
We obviously feel an obligation to look after asylum seekers and refugees, and we might feel the same if someone had been driven out of their home in the Amazonian forest, for instance, and had to find a new home. But these refugees from the past, as it were, are imposing a burden on future generations. So I'm not sure that what they're doing is ethical. I think it's rather selfish. I hadn't thought of that aspect of it. I'm a little bit skeptical of our ability to come back.
00:48:25
Speaker
I agree. I think the chances are almost zero. Even if they were stored, etc., one would like to see this technology tried on some animal first, to see if they could freeze animals at liquid nitrogen temperatures and then revive them. I think it's pretty crazy. Of course, the number of people doing it is fairly small.
00:48:44
Speaker
And there are some companies doing it in Russia as well, which are real ripoffs, I think, and won't survive. But I say even if these companies did keep going for a couple of centuries, or as long as necessary, it's not clear to me that it's doing good. And I also quoted this nice statement about what happens if we clone and create a Neanderthal: "Do we put him in a zoo or send him to Harvard?" said the professor from Stanford.
00:49:07
Speaker
Those are ethical considerations that I don't see very often. We're so focused on what we can do that sometimes you sort of forget, okay, once we've done this, what happens next? I appreciate you being here today. Those are my questions. Was there anything else that you wanted to mention that we didn't get into?

Medical Ethics and Future Trends

00:49:24
Speaker
One thing we didn't discuss, which is a serious issue, is the limits of medical treatment, because you can make extraordinary efforts to keep people alive long beyond the point when they would otherwise have died naturally, and to keep alive babies who will
00:49:37
Speaker
never live a normal life, etc. I certainly feel that that's gone too far at both ends of life. One should not devote so much effort to extremely premature babies, and one should allow people to die more naturally. Actually, if you ask me about predictions I'd make about the next 30 or 40 years: first, more vegetarianism; secondly, more euthanasia.
00:50:01
Speaker
I support both: vegetarianism, and I think euthanasia should be allowed. I think it's a little bit barbaric that it's not. Yes. I think we've covered quite a lot, haven't we? I tried to. I'd just like to mention that my book does touch a lot of bases in a fairly short book. I hope it will be read not just by scientists. It's not really a science book, although it emphasizes how scientific ideas are what's going to determine how our civilization evolves.
00:50:29
Speaker
And I'd also like to say that universities not only teach students; we have universities like MIT and my University of Cambridge.

Role of Universities in Future Challenges

00:50:39
Speaker
We have convening power to gather people together to address these questions. And I think the value of the centers which we have in Cambridge and you have at MIT is that they are groups which are trying to address these very, very big issues, these threats and opportunities. The stakes are so high
00:50:59
Speaker
that if our efforts can merely reduce the risk of a disaster by one part in 10,000, we've more than earned our keep. I'm very supportive of our Centre for the Study of Existential Risk in Cambridge, and also the Future of Life Institute, which you have at MIT. Given the huge numbers of people who are thinking about small risks, like which foods are carcinogenic and the threats of low radiation doses, et cetera,
00:51:23
Speaker
It's not at all inappropriate that there should be some groups who are focusing on the more extreme, albeit perhaps rather improbable, threats which could affect the whole future of humanity. So I think it's very important that these groups should be encouraged and fostered, and I'm privileged to be part of them.
00:51:41
Speaker
All right, so again, the book is On the Future: Prospects for Humanity by Martin Rees. And I do want to add, I agree with what you just said. I think this is a really nice introduction to a lot of the risks that we face. I started taking notes about the different topics you covered, and I don't think I got all of them, but:
00:51:57
Speaker
There's climate change, nuclear war, nuclear winter, biodiversity loss, overpopulation, synthetic biology, genome editing, bioterrorism, biological errors, artificial intelligence, cyber technology, cryogenics, and the various topics in physics. And as you mentioned, the role that scientists need to play in ensuring a safe future.
00:52:18
Speaker
So I highly recommend the book as a really great introduction to the potential risks, and the hopefully much greater potential benefits, that science and technology hold for the future. So Martin, thank you again for joining me today. Thank you, Ariel, for talking to me.
00:52:44
Speaker
If you enjoyed this show, please take a moment to like it, share it, and maybe even give it a good review. And don't forget that Lucas Perry also has a new podcast on AI value alignment and a new episode from him will go live in the middle of the month.