Introduction and Guest Overview
00:00:00
Speaker
We're living through a time of enormous technological, social and even geophysical changes. Just thinking about what's going on right now can be overwhelming, let alone trying to contemplate what's coming next. My guest this week is Peter Schwartz, a futurist. Peter is one of the foundational figures of the field of scenario planning and wrote the celebrated book, The Art of the Long View.
00:00:25
Speaker
He's also one of the founders of the Long Now Foundation that we'll discuss. He led teams for Steven Spielberg, figuring out visions of the future for films like Minority Report. He's also the chief futurist at Salesforce. So Peter is someone who regularly thinks about the future.
00:00:46
Speaker
And in particular he thinks about not just what the future looks like, but what decisions we need to take now to meet all the possible futures that we may encounter.
Systematic Future Thinking
00:00:56
Speaker
I found this an immensely enriching conversation. In fact, every day since I've recorded this, I've been having the same thought. Oh, I should have asked Peter Schwartz this. I should have asked him about, I don't know, the Foundation trilogy by Isaac Asimov. I should have asked him about what happens if AI gains consciousness, and do we think that might happen? Or what if we start mining rare earth metals on the moon? Is that a good idea?
00:01:21
Speaker
And the point is it's really got me thinking about different ways of thinking about the future. Some take-home points for me are that we should think about the future. And we can do this in a systematic way. We can tease out the pieces that we think are almost inevitable. Carbon dioxide concentrations are going to rise over the next decade. That's almost impossible to avoid. Will Trump
00:01:46
Speaker
have a second term? I don't know. Will Putin come to a sticky end? Also, these things are hard to predict. They're much more contingent. But we can still think about the different scenarios there. We're facing huge challenges, but this is not for the first time that humanity has confronted social or technological upheavals. We should take heart from this. And we should contemplate what's to come, not from a standpoint of fear, but of careful thought.
00:02:12
Speaker
I'm James Robinson, you're listening to Multiverses. Peter Schwartz, thank you for joining me on Multiverses.
Technological and Social Upheavals
00:02:34
Speaker
Pleasure to be here.
00:02:35
Speaker
Since we're talking about long things, the great arc of time, I'm going to take the liberty to read quite a long quote from the beginning of your book, The Art of the Long View. It's from Paul Valéry, the French poet, from an essay of his from 1932. You know all this, but I thought it was such a beautiful quote. I wanted to begin with this. All the notions we thought solid,
00:03:02
Speaker
all the values of civilized life, all that made for stability in international relations, all that made for regularity in the economy. In a word, all that tended happily to limit the uncertainty of the morrow, all that gave nations and individuals some confidence in the morrow. All this seems badly compromised. I've consulted all the augurs I could find, of every species, and I've heard only vague words, contradictory prophecies, curiously feeble assurances,
00:03:31
Speaker
Never has humanity combined so much power with so much disorder, so much anxiety with so many playthings, so much knowledge with so much uncertainty. Do these words still ring true, do you think, almost a century later?
00:03:49
Speaker
even more so. I mean, every word in that is absolutely true today.
Organizational Future Planning
00:03:54
Speaker
Look, the change that gave Valéry that sense of fundamental uncertainty was new knowledge. People knew new things and as a result could do new things and could relate to each other to do new things. It really was all about, in some sense, science, right? That is the ability to advance scientific knowledge, you know, and it really began to come apart with
00:04:20
Speaker
relativity and quantum theory, right? I mean, even the fundamentals of space, time and objects became uncertain, right? And so it really became a sense that almost anything was possible. And that opened up the future in new ways, because really until that time, you know, the future was like yesterday, that the likelihood for most people on the planet, even in 1930,
00:04:45
Speaker
was that tomorrow would be like yesterday, that your future would be like your parents' and their parents' before that. And progress was very modest. Things didn't change all that much, right? And your kids were likely to do what you did. You know, if you were a farmer, your kids were going to be farmers, and so on. And that sense of continuity
00:05:07
Speaker
of time and history and so on that was deeply embedded in the human experience that things don't actually change much in an individual's lifetime all blew up in the 20th century. All of that began to change and by mid-century, by his time, things were changing unbelievably rapidly even for him. He grew up in an era when cars were new, airplanes were new, radio was new, television was just being discovered.
00:05:35
Speaker
the world was connected in ways that had never been connected before. And so suddenly everything opened up at almost an explosive pace. And he says as well that all the augurs are unable to provide any kind of assurances, you know, that they're just offering vague words and contradictory prophecies. Do you think we've got better at dealing with the future now? I mean, you are a futurist, so I
00:06:04
Speaker
I assume that you feel that we're able to at least bring some techniques to bear so that we can confront all these changes and not just live with the confusion. Yes, look, at two different levels. There's the level of organizations who think and plan and make decisions for the future. And there, unequivocally, we've gotten a lot better. We deal with uncertainty, we deal with change, we deal with the pace of change, as we're dealing now with an unbelievably rapid change.
00:06:33
Speaker
On top of which, I'd say, let me call it in the general world of public communication, whether it's television, newspapers, or even in schools, there is a more explicit discussion of the future, of what's possible, of what some of the uncertainties are, what some of the long-term issues are.
00:06:52
Speaker
So you can have a moderately well-informed conversation about some big long-term things, climate change being one of the most obvious in terms of big long-term changes that the world thinks about, or technology. So I think you would have to say that there's a more interesting,
00:07:11
Speaker
generally informed conversation about the future that is not, let me call it, hyper-methodical. Within the world of organizations, governments, corporations, universities and so on, it's become much more systematic.
00:07:24
Speaker
Yeah, that's really interesting. And I think we can talk some more about the way that corporations and governments bring to bear these kind of systematic tools and your work
The Long Now Foundation's Vision
00:07:35
Speaker
on that. But first, I want to touch on the second point on how perhaps there is more of a cultural appreciation of the future, as it were, because that's perhaps a message that
00:07:47
Speaker
I feel it goes against some of the prevailing thoughts where many would argue that we live very much in the moment and we're sort of trapped in a kind of endless news cycle, doom scrolling, watching 10 second videos on TikTok and so forth. None of that seems very kind of future oriented. But as you say, there are these kind of debates around climate change and that
00:08:17
Speaker
demand that we think about the future and people are engaging with those. I wonder if you have any other anecdotes or perhaps examples of places where culturally we seem to be thinking a little bit more long-term than we might at first seem to be.
00:08:36
Speaker
Well, an obvious one, and you mentioned this in our pre-conversation, is the Long Now Foundation, an organization I helped start with a number of friends, Stuart Brand and Danny Hillis and others, all about long-term thinking and stimulating long-term thinking. And just two observations. One is we've had many, many, many thousands of people join
Influence of 'Minority Report' on Future Perception
00:08:57
Speaker
to become members, right? And the only privilege that the membership gives you is you get a better quality video of our lecture series. And our lecture series always sells out, and all the talks are about long-term thinking. And we literally get something on the order of 400,000 people ultimately watching those lectures about long-term thinking.
00:09:19
Speaker
So you can get almost a half a million people engaged in a conversation in a very rigorous way. I mean, these are not lightweight talks. And these are serious talks about long term issues in different fields of science and economics and history and so on that require some serious engagement. We're delighted. Kevin Kelly and I came up with that idea back in the mid 90s, soon after we started long now. And we're astonished at the level of engagement
00:09:48
Speaker
with the lecture series. So that's just one example. I think, you know, if you think about science fiction films and the like, I've had the privilege of helping to write a few of those. And, you know, my best-selling book, The Art of the Long View, which you just quoted, is still in print 30 years later and has sold close to a million copies. But two billion people have seen Minority Report.
00:10:18
Speaker
Yeah. Right. And that's a very explicit and highly detailed vision of the future that engaged literally billions of people around the planet. Yeah. I think it'd be
00:10:32
Speaker
Now you've brought it up. I think it'd be great to talk a little bit about Minority Report. I know Spielberg brought you and some other folks together to... Right. And it was just the story of the psychics in this car. That was it, with no context. And so Steven came to me and said, look, I'd really like to make the most realistic movie of the future anybody's ever made.
00:10:52
Speaker
So help me and bring together a team of the best experts you can find to work together to create that world. And that's what we did. So I brought together about 15 people, everybody from Jaron Lanier and Stuart Brand to a number of others, Joel Garreau, Peter Calthorpe, all to work together to create a world for that story.
00:11:17
Speaker
And so we had a hotel conference room at Shutters Hotel in LA for about four days. And each day, Steven, the producers, the scriptwriters, the art director, Alex McDowell, would come in and start asking questions of this panel of about 15 people. So what's happening with advertising? What is this apartment like? What's a car like? What's a building like? What is a police station like?
00:11:44
Speaker
What is a shopping mall like? How does medical surgery work? And so on. We went through all the details of what that world could become, and we debated and discussed and so on. We actually had artists sketching as we worked on it, sketching images for the story: what's the car like, what's the city like, and so on.
00:12:04
Speaker
It had a huge impact. What we ended up doing is writing what's called the Bible, which is the kind of details of that world that then the script writer, the art director, the actors use in creating that world.
Effective Altruism vs Long Now Foundation
00:12:17
Speaker
And so basically all the details of the world that you saw on screen, we came up with, whether it was Tom Cruise using gesture control for his computer or the advertising that talked to him and recognized him as he walked through the shopping mall, which we're now beginning to do with AI.
00:12:34
Speaker
Or the virtual reality experience, actually an augmented reality experience, at the AR studio and so on. All of these elements: the optical recognition for identity, the moving newspaper headlines. One of the things I always like to do is put little jokes into these movies when I work on them. And there's the moment in the film where the cop is escaping on the subway,
00:13:03
Speaker
and another passenger is reading a newspaper, a version of USA Today in 2050, but the headlines change. And before the headlines change, there's a little headline: mechanical nanotechnology triumphs. Well, that was a little payoff to Eric Drexler, who came up with the idea of nanotechnology. You had to know that. Eric was an old friend and it just goes by in a second or two. There are things like that in the story, little Easter eggs.
00:13:32
Speaker
But that was basically the essence of it. And the truth is, as we were finishing this, Steven and I said, we would like people a decade or so from now to say, why, that's just like in Minority Report, when they see something. And we wrote it in 1999. The film came out a couple of years later. And the truth is, that is exactly what happened with Minority Report. When Apple introduced its new headset, people thought, this is straight out of Minority Report.
00:14:01
Speaker
And, you know, it became a vernacular. And that was our goal, to change the language people used about the future. Until then, the dominant image was Blade Runner. You know, it was actually brilliant. I mean, the LA of the future it conjured up was really truly original and amazing. We wanted a different one, somewhat more realistic for our future. And I think we produced it.
00:14:27
Speaker
Yeah, it's amazing that so many of those things have come to pass. And it seems like we're on track for a lot of that technology being in place before 2050. I mean, much before 2050. Yeah, I think we overestimated that. Well, it was just a kind of almost arbitrary decision to give enough time for things to develop. It's probably more like 2030. Yeah.
00:14:50
Speaker
Yeah. Personally, coming back to this kind of cultural appreciation of the long term, I'd also point to, and I'm curious about your views on this, effective altruism and the kind of movement that started there. And it
00:15:12
Speaker
kind of arose from, I guess, originally Peter Singer and then other philosophers out of Oxford, particularly Toby Ord and, very recently, Will MacAskill with something of a bestseller. And they seem to be thinking also about the future, maybe in slightly different ways, but it's also gathering in popularity from what I can tell. Is that a group that you kind of, that the Long Now Foundation makes contact with, or is it,
00:15:39
Speaker
so far, you're kind of working on slightly different
00:15:42
Speaker
ways of approaching the future? Well, look, I think it's really great that people are beginning to think long term like that. So let's be clear that inherently it's a good thing. As far as I'm aware, there's not been much real contact across this. We had a Long Now board meeting recently and we talked a bit about it, but frankly, nothing much concrete. Having said that, I think there is a slightly different orientation and there's an important question with respect to the new movement.
00:16:12
Speaker
We are trying in Long Now to give people a sense of deep time and the sweep of history. It isn't inherently altruistic. It is so that we can make better choices, but they're not, in a sense, automatically altruistic. They're about an understanding of science or the forces of history. Niall Ferguson is a very good friend of mine, a British-American historian, and he talks a lot about long-term forces that shape the world and so on. He's by no means an altruist,
00:16:45
Speaker
just this side of libertarian. So, you know, I mean, he's a nice guy, kind and generous, I don't mean to say otherwise, and he's a very dear friend, but he's not an altruist.
00:16:57
Speaker
But you can think long-term without being altruistic. Having said that, in the movement, there is an important tension that has developed, and that is the sense of, OK, if you're really thinking about deep time and literally the billions of people yet to come and say, all right, in human history, let's say humanity stays around for another 100,000 years. Let me just pick an arbitrary number, or even 10,000 years, like long now has. That's our time frame, 10,000 years.
00:17:27
Speaker
Over the next 10,000 years, unless something goes drastically wrong, there are, I don't know, call it three or four hundred billion people going to be born and live, maybe more.
00:17:37
Speaker
Today we've only got eight billion. So the fate of the eight billion versus the three or four hundred billion: why, you can sacrifice these eight in favor of those, because the numbers clearly work out in their favor, right? Saving those three, four, five hundred billion versus the eight billion that live today is a powerful logic, and, unfortunately, inherently wrong.
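The arithmetic behind that asymmetry is easy to sketch. A minimal back-of-the-envelope calculation, with the per-year birth figures as illustrative assumptions rather than numbers from the conversation:

```python
# Rough order-of-magnitude check on "hundreds of billions of future people".
# Both per-year birth figures are assumptions for illustration only.
years = 10_000            # the Long Now time frame
births_low = 35e6         # assumed low long-run average of births per year
births_today = 130e6      # roughly today's global births per year

print(f"low average:  {years * births_low / 1e9:.0f} billion people born")
print(f"today's rate: {years * births_today / 1e9:.0f} billion people born")
# ~350 billion vs ~1,300 billion -- either way, orders of magnitude more than
# the 8 billion alive today, which is the asymmetry being described.
```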
00:18:00
Speaker
That kind of logic leads you to, I think, perverse conclusions. You do things today, for example, consume resources to create a new class of civilization and technology in such a way that it doesn't matter that you used up all the resources because you created an effective artificial world.
00:18:22
Speaker
So it leads you to perverse choices that may not be optimal for the species. As opposed to: there are things we're doing today that have long-term consequences that we need to think about, as well as long-term issues that need to be reflected in the choices that we make today. And those long-term issues are at, let me call it, the civilizational scale, which can be measured in, call it, decades, maybe even centuries, but not in the scope
00:18:48
Speaker
of what the effective altruism movement is considering in terms of relative position of today versus tomorrow. That's a peculiar
The 10,000-Year Perspective
00:18:58
Speaker
twist for some people. The idea that people are thinking we should do good things for the deep future is a good idea. I think I'd agree with some of those criticisms and I don't want to
00:19:09
Speaker
dwell too much on them, because I do want to have some debates, or invite effective altruists on and get their sort of picture on this. But I feel like looking too far ahead and kind of multiplying out the moral consequences
00:19:26
Speaker
can have these, yes, quite perverse implications, and I'm not sure how helpful it is. You mentioned that the Long Now Foundation has this 10,000-year view. One thing I really like, by the way, is that if you go on the Long Now Foundation's blog or website, every year is prefixed with a zero at the moment.
00:19:50
Speaker
We're in 02023. When you first see that, one thinks, oh, is there a misprint here or something? What's going on? Then you realize, no, this is just reminding me that we're at
00:20:03
Speaker
sort of the thin end of that wedge of 10,000 years. But I'm curious, what led to it being 10,000 years, not 100,000, a million, a thousand? It was very simple. It was actually my idea.
00:20:21
Speaker
And that was that human civilization, in terms of really organized society, is roughly 10,000 years old. It's when agriculture was being born, villages were starting to happen, humanity started settling down in various locations. And when that happened, then knowledge could advance, civilizations could develop. There may have been a few little bits here and there earlier, but it was on that order, 10,000 years.
00:20:48
Speaker
And so it was that time frame in the past that said, OK, this was the beginning of civilization roughly 10,000 years ago. And there are forces that were set in motion that are still playing out today. And we need to be thinking at least that far into the future. So it was kind of the past matching the future. And so that's where the 10,000 years came from. Yeah. Yeah, that makes sense. I think.
00:21:12
Speaker
On the other hand, I also have to think that the 10,000 years that are to come are going to be so much richer than the 10,000 years that we've had. I mean, I hope so. Yeah. I mean, if you just think of the number of people now: the number of people in the year zero, AD zero, was about 200 million. So we're already...
00:21:39
Speaker
In some ways, there's just kind of 50 times as much life happening every year now as in that year, just because we have 50 times as many people. And in other ways, or additionally, each of those lives has so many more capabilities. Just in terms of one very crude measure of this, which was mentioned by Casey Handmer, another guest,
00:22:05
Speaker
simply that we have 100 times as many calories
Population Growth and Energy Use
00:22:09
Speaker
at our disposal as a human did in the past. Previously, they basically had the calories that sustained them. Now, we have cars and planes and all sorts,
00:22:24
Speaker
not to mention just the kind of computing power and the kind of intelligence that we have at our disposal as well. So yeah, it's in some ways a balanced view, but in other ways very imbalanced. And I also find it kind of unfair when people say, oh, you know, we're not looking very far into the future, given that it's so hard: just projecting a year ahead is projecting, you know, the equivalent of say 50 years in the year zero.
00:22:53
Speaker
And actually, it's even harder than that because there's so much extra productivity now. In fact, one of the most fun Long Now talks, oh, quite a while ago, was by the science fiction writer and computer scientist Vernor Vinge. And Vernor looked at the very long-term interplay between population and energy.
00:23:13
Speaker
The more energy you had, the more population you could support; the more population you had, the more energy you could generate, but also needed, and so on. And he looked at this cycle over 50,000 years into the future.
00:23:25
Speaker
And he laid out a vision of the interplay of the growth of population here on Earth and spreading out into the galaxy, and of how we create and use energy. And it was absolutely brilliant, and it's very funny as well. It's really worth taking a look at; some very clever graphics to demonstrate his point. Brilliant. Yeah, that sounds lovely. I want to
00:23:53
Speaker
touch on another Long Now Foundation story. The naming, how did that come about? Oh, again, we were discussing names, and that was Brian Eno's idea, Long Now. Yes, that's the story I've heard. It's true. And so we were meeting and playing with a lot of ideas. The clock, we had already come up with the idea of the clock. That's where it began, really, with trying to build the clock.
00:24:21
Speaker
And Brian... Maybe just, what's the clock? The clock, which was the central project that we began with, was Danny Hillis's idea. Danny is one of the great computer scientists in the world, a truly original mind. There are a number of original minds in this group, and Danny is one of them.
Creating a 10,000-Year Clock
00:24:40
Speaker
He had observed to Stuart Brand and me, because we'd been friends for years before that, that he had always imagined a clock that would tick once a year, and the cuckoo would come out once a millennium and would mark the passage of deep time, and people would celebrate when the cuckoo came out every thousand years or so. Then the three of us were chatting and said, we should actually build one of those.
00:25:05
Speaker
And then the conversation got rolling and said, well, you know, maybe we should really do that. And we started thinking about designs, and then Alexander came in, and Kevin and Brian, and we pulled that whole gang together. This must have been about 1995 or so, '96.
00:25:26
Speaker
Jeff Bezos had just before gotten modestly rich starting Amazon. He was still just selling books about that time. And Jeff got fascinated with it and agreed to fund the project.
00:25:39
Speaker
And it became the real thing. And it's now essentially almost done. It will probably start up next year. It was built in a cavern in Texas on land that Jeff owns. It's about 250 feet high, in a cavern where we carved a spiral staircase around it. If you go to the Long Now website, you can see videos about it. It's a spectacularly beautiful device and intended to go for the next 10,000 years.
00:26:06
Speaker
And solving the design and engineering problems of it, and the organizational ones and so on, led to all kinds of interesting questions. That was part of the point of it, that having to solve this problem of building a device that would persist for 10,000 years forces you to think about a lot of interesting questions, which we did.
00:26:26
Speaker
There's a book about it already, and some people think it's totally stupid, but in some ways, we thought about it as a pyramid from our civilization to a future civilization.
Timeless Cultural Works
00:26:38
Speaker
If you go to Abu Simbel in Egypt, and you see these amazing giant statues of the Egyptian gods or Petra in Jordan,
00:26:49
Speaker
things that are thousands of years old, and you feel that deep sense of history. That's what it's about, because very little that we build today is intended to be around for the next ten millennia. This is...
00:27:04
Speaker
I can't think of many other things, although I did have Christian Bök, who's a Canadian poet, and he's trying to inscribe a poem. Or actually two poems, but within a single text, such that it can be read in two different ways. There's a cipher that applies to the text. But anyway, to inscribe a poem into the DNA of an extremophile bacterium,
00:27:28
Speaker
D. radiodurans, which is very, very good at repairing its genetic code. His belief or hope is that if he manages to do that, he may write a work of literature which outlasts everything else of human civilization. Could we? Could we?
00:27:49
Speaker
So yeah, these are probably the two most long-lasting projects I can think of, although it's interesting, I've just been reading Richard Fisher's book. Actually, he tells the story. He's a British journalist who's written a book called The Long View,
00:28:07
Speaker
very similar in title to your book, The Art of the Long View. But he's got a good rundown of a few projects going on, which I know you'll be aware of. There's actually a kind of pyramid of cubes being built in Germany, I think one cube every
00:28:23
Speaker
decade, yeah, I think. They put down a huge concrete cube. He actually told the story, I don't know if you've heard it this way, but his story of the naming of the Long Now Foundation is that Brian Eno went to a cocktail party in New York, and it was in a loft, but in a really dicey area of town. He was sort of riding out there, just thinking, am I going to the right address?
00:28:49
Speaker
He gets into the building, has to sort of walk past someone slumped in the doorway, and he arrives in this just beautiful loft, and he talks, and the owner says, oh yes, this is the best place that I've ever lived. And apparently the thought comes into Brian Eno's head: you know, these people are thinking in terms of the small here, just what's surrounding them. And they're also thinking in terms of the short now, right? And so,
00:29:16
Speaker
appropriately, the story is that he said to himself, I want to think in terms of the big here and the long now. This is the story, at least, which is in Richard Fisher's book. But I really like that idea. It could be true. Yeah. You know, we were literally brainstorming a bunch of ideas. And Brian, he didn't completely tell that story. But the ideas of the big here and the long now and all those were the language that we used in that conversation.
00:29:46
Speaker
Yeah. And I think the naming of these things really matters. Yes, it turned out it was a good name. It's very evocative and gets people to think, which was our objective: what does that mean? It's not obvious what it means.
Futurism and Decision-Making
00:30:05
Speaker
Perhaps we can talk about some of the, I want to come back at some point to these kind of broader cultural issues, but let's get a little bit practical and talk about your work as a futurist. I think there's some skills there for thinking about the future, which I want to make folks aware of. I think for many people,
00:30:31
Speaker
The first thought of what a futurist is would just be someone who tries to foretell the future and tries to kind of predict what's going to happen. But I think you have a slightly different take on things. So perhaps you can run us through that. Sure. In fact, that's where I really started out. When I first started thinking about what I wanted to do, I sort of discovered the field. I wasn't the first. The first was really a guy named Herman Kahn. You might also recall Alvin Toffler.
00:30:56
Speaker
But I started out saying, oh, I want to figure out a better vision of the future myself. But I quickly realized, this was in the early 70s when I got to Stanford Research Institute, that actually the institutions of society, business, government, et cetera, themselves did not have good tools for thinking about the future.
00:31:19
Speaker
They were stuck with either trying to predict based on history, or imagination, as in science fiction, without reference to history. These were the two tools that were around, as it were. One trapped you in the past and the other disconnected you from the past, neither one of which was a good basis for making decisions in an uncertain future. Because the thing that I had become quite convinced of as I studied things more deeply
00:31:46
Speaker
was that things were becoming ever more uncertain, going back to the Valéry quote. In fact, one of the very first books that I encountered was a book called Limits to Growth, which was using computer models to predict the outcome of a variety of forces like pollution, population, energy consumption, resource consumption.
00:32:06
Speaker
And I studied these models fairly carefully. I even taught a class in it at UC Davis when I realized that the models were only capturing a part of reality and that there were huge forces not captured in that model that were going to create enormous uncertainties and that were not reflected even in the intellectual process of acknowledging that uncertainty. And I said, look, there's got to be a better way to think about this. There's got to be a better way to make decisions in the face of uncertainty.
00:32:33
Speaker
And so the thing that I concluded was that what I really ought to be about is both developing and disseminating the tools to give people the means themselves, individuals, organizations, institutions, et cetera, to do a better job of thinking about and making decisions in the face of uncertainty.
00:32:52
Speaker
The big leap actually happened, however, with Pierre Wack. Pierre was the first head of scenario planning in Shell, and I succeeded him. He was one of my most important mentors. And the thing that Pierre realized, and one of the things that tied us together, was that the goal was not better prediction, but better decisions.
00:33:13
Speaker
And that the measure of success was not, did you get the future right? The measure of success is, did you do the right thing? Getting the future right and ignoring it is not a success. Being a little off but making the better decision is, in fact, a success. And so what I realized was that you had to really spend an enormous amount of time trying to understand the mind of a decision maker, including your own,
00:33:42
Speaker
to try to understand what kind of an analysis of the future would influence and shape that mind and help it make better decisions in the face of that. And that was really the essence of what shaped my work as a futurist: first, to recognize we needed better tools to deal with uncertainty, and second, that we were really dealing with better decision making, not better prediction. Yeah. Yeah, I think there's clearly that element of persuasiveness that is
00:34:13
Speaker
not necessarily in people's thoughts when they think of futurists, but it's something that, as you say, you brought in and you realize that was key because if you don't have that, decisions don't get made. The other interesting thing that I recall from your book and other talks I've seen you give is that you can find cases where your
00:34:38
Speaker
predictions or your views of the future are divergent, but where actually they all point to the same decisions being the right one. And I think that seems another place where it's not about coming up with a single model of the future and just putting that on a piece of paper, but rather showing compelling visions of the future. And in some cases, one realizes that looking at all those different visions
00:35:06
Speaker
actually, you know, it's obvious what we need to do. And you kind of cut through, you know, yes, there's uncertainty, but there's not uncertainty in the decision that you take.
00:35:17
Speaker
Yeah, I think that's right. There are two elements to that. The first is there are some decisions, as you rightly pointed out, that are robust. That you do this, and it'll work in a variety of different possibilities. And you'll have a good outcome. Might be slightly different outcomes in that. And so you can test various options against multiple futures. And if you can find those options, it isn't always the case that there are. But if there are, then you've got something very powerful.
00:35:42
Speaker
The other is that sometimes in this analysis, you see some things that are inevitable with maybe consequences that follow. For example, when I was still in Shell in 1984, we were studying the future of a country then called the Soviet Union. You may remember that existed probably before your time. But the Soviet Union was, of course, in a deep cold war with the West. We were studying the possibility, was there ever a chance that we could end up looking for oil in the Soviet Union?
00:36:12
Speaker
And in our analysis starting in 84, we reached the conclusion that they were headed for a massive economic crisis in the next few years.
Climate Change and Adaptation
00:36:20
Speaker
And we said, look,
00:36:22
Speaker
It is inevitable that there's going to be a crisis. So that is coming. So the interesting question is what comes out of that crisis? And then we came up with two scenarios, one of which we called the new Stalinism, the other we called the greening of Russia. And that would be what happens if a guy named Mikhail Gorbachev comes to power and a few other things happen. And literally the next year Gorbachev comes to power and said, we knew which scenario we're in. Soviet Union is going to be gone in five years. The Berlin Wall will fall by 1990 and we'll be looking for oil in Russia.
00:36:52
Speaker
And that is exactly what happened. It's interesting as well that having those scenarios, it prepares you for when the event does happen. Exactly. You understand the meaning of those events because you've actually thought about them in advance. Yeah.
00:37:18
Speaker
Are there inevitables now that you could point to? There may be things... I mean, there seems to be one that's obvious, that's already happening, which is the climate.
00:37:28
Speaker
Right. My first climate assessment was 1977, when I was still at SRI. I was part of it as a junior research person on a team studying climate change, or the potential for climate change at that point. And as I said, my background was aeronautical engineering and astronautics. I had a very deep background in fluid mechanics and the math of fluid mechanics.
00:37:52
Speaker
So I helped build some of the first climate models, and our conclusion was that it was not just about a kind of gradual global warming, but that we were going to see an increasing frequency of extreme weather, moving up and down over a very long period. And it's very obvious that we're in it now. You know, you have to be pretty blind and in really serious denial to ignore what's going on in the world even as we speak today. And as a result,
00:38:19
Speaker
We are going to respond to climate change, or we're going to live with climate change, or both. That is, we now see that it's already too late to stop it. To change the direction of climate change, we have to go carbon negative, and all we're doing right now is going from
00:38:37
Speaker
high growth of carbon to a slightly slower growth of carbon. We're still putting more carbon in the atmosphere for the better part of the next century, and as a result, we are locking in centuries of climate change. Until we can get to the point of sucking that CO2 out of the atmosphere, we are going to be in a long period of climate change. So we need to do both: adapt, and continue to make major efforts to slow the rate of climate change.
00:39:07
Speaker
The truth is, this was not actually a policy failure. It's important to recognize. When did climate change really start? The beginning of the Industrial Revolution. We started using coal in large quantities. That's, you know, by 1890, it was already too late to stop climate change. We were already burning enough coal. By the beginning of the 20th century, it was already gangbusters. The analysis as early as 1910 showed that coal burning was going to lead to climate change.
00:39:37
Speaker
So this was the inevitable outcome of the vast expansion of burning fossil fuels in huge quantities all over the planet. And we're still doing it. There's no sign that we're going to slow this down radically anytime soon. So the implication is, it is inevitable that we're going to see massive climate change worldwide. And in fact, the one right now that we're seeing an early signal of, that's very important, is the potential for the collapse of the Gulf Stream.
00:40:06
Speaker
It is the Gulf Stream that keeps Northern Europe warm and pleasant. The Gulf Stream goes away and Northern Europe becomes Siberia. We may be headed toward that in the next decade or so.
00:40:19
Speaker
I know you presented a report to, I think it was the Pentagon, maybe around the year 2000, where actually they wanted to know what's the worst-case scenario, I think, with the climate. And that was, like, front and center: the collapse of the Gulf Stream.
00:40:39
Speaker
It's a climate change scenario. When the climate shifts, it sometimes moves gradually, but then other times it jumps, for a variety of reasons that have to do with the flow of ocean currents, atmospheric currents, et cetera. If you look at the long-term climate record, you see a lot of big jumps, and so it is really plausible: not a gradual change, but a big jump in change. I think that's what really
00:41:06
Speaker
worries me. As you say, we've locked in a lot of the climate change already and it's continuing to get worse, and yet we've not seen any of these jumps. We're boiling the frog as it were, and we're turning up the thermostat. That's bad enough, without having
00:41:31
Speaker
to contemplate what could happen if, you know, there's a Gulf Stream collapse. Yeah, look, we're making the jump right now. I don't mean just the Gulf Stream one, but the extreme weather we're having.
00:41:42
Speaker
If the new normal is extreme weather of the sort that we're now experiencing, extreme heat, extreme storms, extreme flooding, and so on, in more places around the world... I have never seen a heat map of the planet like what is going on literally today, as we speak, around the world. And so I think that jump may have happened now. Yeah.
00:42:12
Speaker
Let's look at the... what are the things we can do about this? There's a lot. But they're obvious, right? And we're doing many of them, but not all of them. The most obvious things are reducing energy consumption, getting more efficient energy consumption, all of that sort of thing. Moving away from fossil fuels to renewables. The one big thing we're not doing is enough nuclear power. We should be moving aggressively to building nuclear plants all over the planet.
00:42:41
Speaker
We can now build smaller-scale ones. Literally today, or actually it was yesterday, the first nuclear plant in decades in the United States turned on in Georgia. There's another one that's coming a few years from now at that same plant. That's it. There are no others under construction. Nuclear is clean, fossil free, lasts forever, produces baseload power. One of the reasons that France has some of the lowest emissions in the world is because almost all of its electricity is nuclear.
00:43:08
Speaker
The Germans, the good, clean, green Germans, had a lot of nuclear plants, but Angela Merkel got frightened after Fukushima and started shutting them all down. So now they're burning dirty coal from Germany and Poland and have the fastest-growing emissions of any industrial country in the world. So we're bloody stupid in that regard. We should be building tons of nuclear plants. Moving toward electric vehicles, we're doing that. That's clearly happening.
00:43:39
Speaker
We could do it a bit faster, but not much. We're doing a lot of the right things, but not all of them.
00:43:47
Speaker
Yeah, I think there is a lot of promise there. The attitudes are changing around nuclear. And I think that has to happen first before we see boots on the ground building of new stations. In the UK, they're talking about a new reactor coming online every year. There's not actually a plan for that. But again, it starts with the rhetoric.
00:44:10
Speaker
There's a lot of investment in fusion as well, which is really exciting. If that happens, that's a really, really big deal. Look, there's a company outside of Boston called Commonwealth Fusion that just came out of MIT. They believe that within a decade, they will be able to start building mid-scale fusion plants
00:44:33
Speaker
that will generate essentially an enormous amount of energy from hydrogen fusion and a limitless source of fuel.
Renewable Energy and Technological Advancements
00:44:43
Speaker
If they're right, then that changes the game. That's a big if. It is still uncertain whether that will actually work.
00:44:51
Speaker
In fact, I am getting a briefing on the detailed physics of it because I'm uncertain about it. We have proven with the National Ignition Facility at Lawrence Livermore Labs that you can actually create a fusion reaction, which is a big deal. The technology that they're using there is not practical for a fusion plant, but what Commonwealth is doing is building basically a small magnetic bottle to create the fusion. If that works,
00:45:19
Speaker
a new future, a much better future. Yeah. It's encouraging that there are so many different kinds of bets being placed on fusion, I have to say. I think the other thing I'd point to is just the cost of renewables. It surprised me how much the learning rate has brought down, and is continuing to bring down, renewable costs. And I think we could see that kind of thing continue. Yes. Yeah. That wasn't anything we did in the US or Europe.
00:45:47
Speaker
The Chinese just did the thing that has always worked before: produce a ton of them. As you produce more and more, you get better and better and cheaper and cheaper with volume. It's just the simple, classic industrial learning curve, but they're the ones who drove the price of solar down by 80%.
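To make the learning-curve point concrete, here is a minimal sketch of the arithmetic, assuming a 20% learning rate for solar modules (a commonly cited figure, not one stated in the conversation):

```python
# Wright's law ("classic industrial learning curve"): cost falls by a constant
# fraction with each doubling of cumulative production. 20% is an assumption.
import math

learning_rate = 0.20                 # assumed: cost drops 20% per doubling
cost_ratio = 1 - learning_rate       # cost multiplier per doubling

def doublings_for_drop(total_drop: float) -> float:
    """Doublings of cumulative output needed for a given total price drop."""
    return math.log(1 - total_drop) / math.log(cost_ratio)

d = doublings_for_drop(0.80)
print(f"doublings for an 80% price drop: {d:.1f}")       # ~7.2 doublings
print(f"cumulative output growth needed: ~{2**d:.0f}x")  # ~150-fold scale-up
```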
00:46:03
Speaker
And no one thought it would work. The predictions for the price of renewables were all showing them bottoming out at something that's far higher than they're at currently. And they've just been consistently wrong, right? So we've been consistently pessimistic there.
00:46:20
Speaker
Nobody anticipated that China would do what it did and become the major producer of solar panels in the world, for themselves and for export. That is what surprised everybody. That's why the cost came way down. Some of that is also true for wind. We learned how to make wind turbines, and they did too, much cheaper than we started out. Even wind has come down a lot.
00:46:41
Speaker
But there are good scenarios here, and there are also possible Gulf Stream collapse scenarios. But again, I think it's probably something where everything is pointing towards the same decisions. We need to invest widely in lots of different technologies, place a lot of bets.
00:47:00
Speaker
There are some no-brainers. It's clear that renewables are working and they're going to continue to get cheaper. It's just that we can't produce enough of them and we can't connect them to the grids fast enough. We need to upgrade our grids.
AI's Impact on Industries
00:47:16
Speaker
The other thing that we need to do, remember, is to adapt to climate change. It is happening. It's going to happen. For example, coastal zones where there's lots of flooding. You don't want to buy property in Florida these days. Florida is a big sandbar, the whole state. Most of it is going to disappear in the next few hundred years. You really don't want Miami property unless you're prepared to build some big dikes. There are a lot of places in the world
00:47:46
Speaker
that are going to face serious troubles. Amsterdam is a good example. Amsterdam is one of my favorite cities in the world, but it's basically at or below sea level. The sea level is going to rise, but the Dutch are pretty good at building. They've got a long track record of dealing with the sea. The Thames Barrier is going to have to be twice as wide. Things like that are going to have to adapt to a new world of climate change.
00:48:16
Speaker
The other sort of elephant in the room is AI. Is the kind of dizzying progress on AI something that's surprised you or is it in many of the scenarios that you've looked at?
00:48:28
Speaker
Well, as you know, I work as the chief futurist for Salesforce. So we've been investing slowly in AI. And then five years ago, I figured out where it was going. That is, we were going to have, about now, digital assistants, essentially an interface of the kind we now call ChatGPT. We saw that five years ago, and we started developing for it. And because I was doing the future product strategy, you could see all the pieces coming together if you did your homework, which we did,
00:48:58
Speaker
and we got it right. So it didn't surprise me at all. And I don't know if you pay attention to these things, but we've already made major product announcements building in AI. We've actually been doing it for quite a few years. We had a limited AI capability and the interface was not really conversational. What changed fundamentally was the conversational interface, right?
00:49:22
Speaker
The actual AI behind it has been a kind of steady progress. The discontinuity was the interface, which made it much more accessible to everybody in a great variety of ways, and able to do things like visualization as well as text and so on. The other one that people have not paid attention to, and it's a really big deal, was our first AI of the new generation, which is coding.
00:49:47
Speaker
It turns out AIs are very good coders. I'm not a coder. I understand enough about it to understand what's going on, but I don't write code, but now I can speak code. I can create things by just asking my AI to do it, and it can write the code. That's a big deal that it can actually do coding as well. I think it's a much better coder than it is a reasoner in many other ways.
00:50:17
Speaker
I mean, perhaps that's because the reasoning that it produces is fairly bland. And having fairly bland code is not a problem, right? People want their code to be straightforward and unfancy, as it were. Sorry? No Rococo code. Yes, no Rococo code. It's not like poetry or something. Yeah, so I use coding
00:50:44
Speaker
every day with a kind of assistant now. And I am a coder, but it's, that's exactly that, right? Our coders are already 30% more productive. Yeah. We've already seen the numbers. It's quite staggering. One thing I'm curious about is whether you think we'll have a scenario where there are fewer coders, or where it's just like we've given birth, as it were, to, you know, another generation of people,
00:51:14
Speaker
in that the assistants that we're using are like just an augmentation of the workforce. And it's like every company has hired an extra 30% coders. At least in my world, it never seems that we can have enough coders, right?
00:51:30
Speaker
I think it's a both-and. Some tasks will disappear from the realm of coding because they're too simple and easy to be done. Others will become ever more complex and more sophisticated and so on. And in some cases, it's not a matter of bland but of truly original code, solving new classes of problems and so on. So I don't think coders are going to go away. Just like writers aren't going to go away just because we can now do a halfway decent first draft.
00:51:57
Speaker
The people who are in some ways most threatened are, let me call it, mediocre research assistants.
00:52:04
Speaker
That's basically what you have: a mediocre research assistant. If you're a mediocre research assistant, you've got a problem, because my AI can do a better job than you can. A really good research assistant? My AI can't do that yet. But it's also important to realize that we're on a curve of learning that's very fast and very steep. An important question is, if you think about a classic S-curve, things start out, then they accelerate, and then they level out.
00:52:31
Speaker
Are we on the flat part of the curve? Have we already started on the steep part of the curve? Or have we already begun to top out? Are the models, you know, not going to get much more powerful, and so on? A good friend of mine is a guy named Richard Socher, one of the top AI professionals, who heads up a company, You.com, a competitor to Google in search using AI. And he was the head of an AI company that Salesforce bought a number of years ago. Absolutely brilliant guy. And Richard and I were talking last Friday, and he argues that we're near the top of the curve,
00:53:01
Speaker
that with the current generation of technology, we're not going to see the large language models get much more powerful, we're not going to see the classes of applications continue to get much larger, and so on. So we've seen an explosive discontinuity here in terms of the nature of the interfaces and how we interact with AI, but it may be coming close to the top of the curve. There are others who believe we're still in the flat part and we've got a long way to go up, and it's going to get much more powerful in the next two or three years.
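A toy way to see why the "where are we on the curve" question matters so much: two logistic S-curves can look identical today and diverge sharply within a few years. The parameters below are invented purely for illustration; they model no real benchmark.

```python
# Two hypothetical S-curves with the same capability today but very different
# ceilings; all numbers are made up for illustration.
import math

def logistic(t: float, ceiling: float, midpoint: float, steepness: float = 0.5) -> float:
    """Capability at time t (years from now) on a curve that saturates at `ceiling`."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

scenarios = {
    "still early on the curve": dict(ceiling=100, midpoint=6),
    "already near the top":     dict(ceiling=5,   midpoint=-6),
}

for label, p in scenarios.items():
    now, later = logistic(0, **p), logistic(5, **p)
    print(f"{label}: {now:.1f} today -> {later:.1f} in five years")
# Both read ~4.8 today, but one reaches ~38 in five years while the other
# flattens out near 5 -- the same present is consistent with very different futures.
```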
00:53:32
Speaker
Yeah, I'm not sure of my own beliefs here. Certainly a couple of years ago, or even a year ago, I would have said, machine learning is great, deep learning, brilliant, but it's not going to produce an AI that has common sense. It's not going to solve those kind of problems. We're going to need to
00:53:57
Speaker
have some more symbolic AI coming into play here, which has kind of gone out of fashion. But now I see some of the things that LLMs are doing, and I just think, well, actually, they seem to be cottoning on to something. There are, what was the phrase that was used in a paper recently, kind of flashes of general intelligence appearing. Sparks. Sparks, that's it.
00:54:21
Speaker
I'd agree with that. Whether those sparks fizzle out or whether they light up a full fire and we end up with that just from doing more of the same, I'm not sure. But I suppose we can think about all the scenarios here, even if we're in the middle of the S curve and we're close to capping out.
00:54:44
Speaker
We're going to be living with the consequences just of this level of technology for a long time, trying to figure out all the ways we can use what we already have. Every day, people are playing with these, discovering, oh, we can do this. We don't even know the power of the tools that are on the table.
00:55:03
Speaker
Well, we can already see without getting to kind of much more sophisticated levels of general intelligence and so on, we can already see the next couple of stages. That is, we're going to have agents and then autonomous and semi-autonomous agents that are working in the background doing tasks for us where we don't have to tell them what to do or ask them to do it. They know how to do it. They get it done and tell us, oh, yeah, I made the plane reservations for you.
00:55:23
Speaker
Things like that, where you didn't have to tell them; they say, oh, I looked at your email and saw you were going to Philadelphia next week, so I looked up the flights, and I know what your schedule looks like, so I got you a reservation on the flight that you would like to take. So you didn't have to think about it. There'll be a lot of stuff like that where you didn't have to think about it.
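In software terms, the pattern being described is a simple sense-infer-act-report loop. A hedged sketch, in which every function is a hypothetical stand-in rather than a real email or booking API:

```python
# Minimal background-agent sketch: watch a signal (email), infer an intent,
# act, and report back without being asked. All functions are hypothetical
# stand-ins for the email, calendar and booking services a real agent would use.
from dataclasses import dataclass

@dataclass
class Trip:
    destination: str
    when: str

def infer_trips(emails: list[str]) -> list[Trip]:
    """Stand-in for an LLM step that spots travel plans in email text."""
    return [Trip("Philadelphia", "next week")
            for text in emails if "Philadelphia" in text]

def book_flight(trip: Trip) -> str:
    """Stand-in for a booking call; a real agent would also check the calendar."""
    return f"Booked a flight to {trip.destination} for {trip.when}."

def agent_pass(emails: list[str]) -> list[str]:
    # The defining property: the user never asked; the agent infers, acts, reports.
    return [book_flight(trip) for trip in infer_trips(emails)]

print(agent_pass(["Confirming our meeting in Philadelphia next week."]))
```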
00:55:42
Speaker
On the bigger question, this is something I've discussed a lot with my colleagues and my CEO, Marc Benioff. We were just having dinner and talking about it last night. There are two ways to think about this. The models that we're now building may become so powerful and so capable that, to all intents and purposes, it is as if they were achieving general intelligence. They behave as if they were, even if they weren't.
00:56:07
Speaker
It manifests behaviors like purpose, like intention, like judgment, and so on, even if there's no purpose or intelligence behind it, but where the software generates it in a realistic and believable way. Separate from that is the whole question of, are we actually developing a general intelligence? If that's the case, it is more likely to be in a different paradigm than what we're doing right now. What we're doing right now doesn't add up to a conscious, self-aware being.
00:56:35
Speaker
To do that, you need embodiment. That is, the thing needs to have a physical embodiment. It needs to be connected to the real world. It needs to have senses. It needs to have extended memory. It needs to have purpose. All of those kinds of emergent properties and connections to the real world do not yet exist. Our AIs are disembodied, disconnected bits of software,
00:56:59
Speaker
as opposed to a real physical being connected to the real world, able to understand its connections and its intentions. We're very far from that. The original idea was to model the brain and then build something like the brain. We've almost given up on that because the brain just proved to be too hard. But the architectures for true general intelligence don't really exist yet; it's simulated general intelligence, the might-as-well-be, that we're moving toward.
00:57:28
Speaker
Yeah, I'm not sure I entirely agree that there's a meaningful difference between simulated general intelligence and general intelligence. I'm a kind of behaviorist or a functionalist when it comes to these things. And if something kind of quacks like a duck, it has consciousness, as it were. But notwithstanding that, I think we can certainly agree that these machines acting with other people
00:57:57
Speaker
or in conjunction with humans, that's clearly some kind of augmentation of human intelligence. Human intelligence is already general intelligence. I've been reading a book, Superminds by Thomas Malone. I don't know if you've come across it.
00:58:15
Speaker
Yeah. And so he talks about how we can already think of organizations as a form of, you know, superintelligence. Yeah, exactly. And it seems like, you know, we're just going to make it possible for yet smaller units of organization. So maybe just, you know, a couple of developers, or a developer and a kind of product person, in conjunction with
00:58:41
Speaker
an AI, whether that AI is generally intelligent or not, as long as it's good enough at doing a lot of different tasks, that is now the equivalent of what would have been a 1,000-person organization, a 10,000-person organization even. That seems good enough to talk about having created some meaningful superintelligence, even if it's not all within the AI.
00:59:09
Speaker
I think that's right. Look, I think augmented intelligence is a very powerful idea, right? That we as human beings are going to be made much more capable. I mean, a very simple example will be real time translation, right? Before long, I will speak 1000 languages.
00:59:25
Speaker
but I didn't have to learn any. That's very powerful. Imagine arriving in Bhutan and speaking Bhutanese. I mean, I love Bhutan, but I can't even read. The characters are all different. I have no idea how to speak Bhutanese, but I'd love to be able to have a conversation.
00:59:42
Speaker
with a Bhutanese teacher in Bhutan, but I don't speak Bhutanese. And I'm pretty good with languages. I speak almost seven, or rather, I speak seven badly. I went to France for a summer in the mid-90s to study French, and I managed to learn French at the level of a five-year-old.
01:00:13
Speaker
I studied six hours a day, every day, for six months, and I got to the level of a five-year-old. That's hyper-inefficient as a way of building a new capability. I would much rather have been able to talk into this and have it come out in French.
01:00:36
Speaker
I think it depends what you're after, right? Because part of me thinks, wouldn't it be a shame if we lose all these capabilities, like learning other languages? But then I realize that's not going to
01:00:52
Speaker
happen. There are people who just love to learn languages simply because they love them. My wife's parents have been learning Japanese for years. They don't really want to speak to Japanese people; they live in Argentina and they've been to Japan once. Their fascination is with the language rather than with being able to communicate in it. So it's really, I think, just a way of
01:01:17
Speaker
extending our choices here, right? You can choose to do the hard work if you want, but you might just want to order a coffee or have a really meaningful conversation without having to spend a decade learning a language.
01:01:31
Speaker
Exactly. My wife loves learning Italian. She uses Duolingo every single day to study Italian because she loves it, right, and she goes to Italy to paint and so on. She knows how to navigate in Italian and have a minimal conversation. In fact, literally as we speak, she's in her Italian study group right now. And she loves it. Yeah. Do you see scenarios where
01:01:57
Speaker
some meaning is potentially eroded by having machines that are just better than us? The example that always comes to my mind is Lee Sedol and his declaration that he was going to stop playing Go after he lost to AlphaGo.
01:02:21
Speaker
But then I had a conversation about this recently and someone pointed out, well, Lee Sedol is a very competitive guy. If you're getting your meaning from competition, sure, you might stop doing it. But if you just like the activity, it doesn't matter that machines are better at it. Maybe that's a good thing: you can play against the machine anytime you like.
01:02:42
Speaker
But yeah, I'm curious as to... Let's take another example. We now have AIs that can do art, right? So I could create an image that I like. My wife is a watercolor painter. The fact that I can use an AI to produce an interesting watercolor-like painting
01:02:59
Speaker
does not make her stop wanting to be a watercolor painter. The act of engaging, of deciding what to paint and bringing it onto a piece of paper, and then seeing it when it's done, that's what she loves. So I don't think there's a real threat to the creative arts,
01:03:15
Speaker
whereas what I'd call commercial art, by contrast, advertising art, game art, that kind of art is more threatened by AIs, because there your goal is productivity and efficiency and things like that. If you're an artist, that's not your goal. You're not trying to be hyper-productive; it's about the act of creating art. So I think it does matter what you're actually trying to accomplish.
01:03:43
Speaker
Yeah, I think that's a nice take on things. Do you have any counterintuitive, or maybe better put, not commonly held views on AI? Yeah, look, I think there's a really obvious one, which is education. One of the things that we've learned over the years is that private personal tutoring makes a huge difference, especially for the lower half of the class.
01:04:12
Speaker
Take a kid who's not doing all that well in math or history or geography or science: you give them a personal tutor and you can bring them up to the upper half of the class. So imagine a world of education where everybody has a personal tutor in a device like this for every course, every class, everywhere. It helps them get every math question right, every history question right, and helps them really learn.
01:04:38
Speaker
Suddenly we can take the bottom half of the class, which has been struggling in this incredibly complex technological world, and lift it up. You and I have no trouble going online and making a dinner reservation. But if you're in the bottom half of your class,
01:04:53
Speaker
Actually, that's a non-trivial thing. And these days, if you want to make a dinner reservation, you call the restaurant and they say, oh, go to the website and you can make your reservation, sorry, and they hang up. And if you can't do that, you can't get the reservation, or a plane reservation, or file your taxes. There are just so many tasks now that involve
01:05:17
Speaker
at least mid-level information technology skills. And frankly, for the lower half of the class, that's hard. Many jobs are now closed off to the lower half of the class, and every class has a lower half, let's be clear. So now we can make everybody above average.
01:05:38
Speaker
And I think that is a very plausible outcome. I think it's one of the best things that's going to come out of this: an explosion in AIs for education, tutors for everyone in every field, like my wife with Italian. She's got a personal Italian tutor in Duolingo.
01:05:55
Speaker
Right. Yeah. It knows how she learns, it has been learning with her for years, it's very effective, and she loves it. She learns Italian. So I think this is the future of education. The classroom isn't going to go away and teachers aren't going away; this is going to supplement them and assure the high-level performance of every kid. Yeah. It speaks to a future where AI can reduce inequality. And that's one of the central questions in my mind:
01:06:23
Speaker
Is that the one that's going to play out, for the reasons that you've just outlined? Or will we see something where AI applies a kind of magnifying effect to everyone's capabilities, and perhaps not even a linear one, but one that boosts most the people who understand these models better and are better able to
01:06:49
Speaker
craft prompts and so forth.
CRISPR and Genetic Engineering
01:06:52
Speaker
And I'm really, again, I'm not sure what will happen. We may see a bit of both. It may be kind of uneven. There may be a few people who are kind of super winners and able to force AI to do whatever they want, breaking all the guardrails and so forth. And a very, very long tail of people for whom it improves their lives, but to a lesser extent. Yeah, I'd agree with that.
01:07:15
Speaker
Yeah, but certainly it's an exciting moment. Are there any other technologies we should be thinking about? AI seems to be taking up so much of the public attention now, and climate change as well, and I think both of those are justified. We talked a little bit about nuclear technologies too, but are there other ones which are maybe outside the public consciousness and deserve to be more within the spotlight?
01:07:42
Speaker
Well, the most obvious one is the consequences of CRISPR, the ability to edit genes in a much more precise and reliable way. We're just beginning to experience the first consequences of that technology with respect to new kinds of diagnostics, new kinds of therapies, et cetera, so genetic therapies of various sorts. We saw the first example, actually, in mRNA vaccines.
01:08:08
Speaker
During the pandemic, the reason we were able to develop the vaccines so rapidly, when it would have taken years before, was our knowledge of genetics, which produced the mRNA vaccines. So we've already seen the first hint of what is to come, and over the next, call it, decade or so, I think we're likely to see a massive
01:08:29
Speaker
revolution in a number of areas of medicine as a result, both in drug discovery and in specific interventions at the genetic level for a variety of issues. The first ones we're going to see, frankly, are in Parkinson's and Alzheimer's,
01:08:44
Speaker
two diseases that are genetic in origin and for both of which, literally in the last 48 hours, new genetically based therapies have been announced: successful treatments that radically slow the progression of Alzheimer's and the deterioration from Parkinson's.
01:09:05
Speaker
I think that's one that I see as huge and coming. Because it's medicine, it's highly regulated, so it plays out a bit more slowly; it needs to be tested and made safe and so on, and that's what's going on right now. But the tool that was created by Jennifer Doudna is so profound in its effectiveness that I think its implications will be enormous. Another place where we'll see it, in a much more banal way,
01:09:34
Speaker
is in plant biology: things like plants that are much more productive or better suited to climate change and so on. The same tools we use for human biology apply to plants, to farm animals, to pigs and cows and so on, and to modifying those. So we have a whole new set of genetic tools across a broad frontier of biology, human and otherwise, that are giving us the ability to help shape the future in new ways.
01:10:01
Speaker
I think you're right to say that in terms of the technology being there, it's probably as fully baked as AI, but you point out that there's regulation here, which is maybe just stopping it from being adopted as quickly as AI. I don't know whether that's a problem for CRISPR or rather a problem for AI, and perhaps we've
01:10:26
Speaker
sort of let the AI cat out of the bag a little too quickly. I'm not sure. But one of the things I learned recently is that for years there's been research into ELSI, the ethical, legal and social implications of genetics research, for which funding has been earmarked since the beginning of the Human Genome Project. So it's been a very well-funded area.
01:10:51
Speaker
There were big debates in Cambridge, Massachusetts, on basically mapping and altering DNA as early as 1970.
01:11:03
Speaker
So this is one where you could see it coming; explicit debate surfaced early because of the obvious consequences, runaway bacteria and so on. It's not hard to imagine the bad outcomes, and we already had regulatory frameworks to deal with this, so it was pretty straightforward to figure out what to do with respect to genetics in that regard. Yeah. And I mean, certainly
01:11:28
Speaker
there are a lot more philosophers and ethicists involved in that field than in AI, which would surprise many people. I think the AI field must be growing very rapidly in that area now. Nonetheless, we did a lot of our thinking ahead of the facts. Years before CRISPR's invention, we were thinking about, well,
01:11:49
Speaker
What could we do with this? What's interesting, though, is we've still not made up our mind on many topics. I think most would agree that therapeutic, non-germline interventions are OK. And then at the other end of the scale is enhancement in the germline, right? Yes.
01:12:12
Speaker
which we could potentially do. The most obvious one, in a good sense, right now is sickle cell anemia. Sickle cell anemia comes down to a single gene, and it's entirely plausible that in a person who has already been born, you can modify that gene and actually cure them.
01:12:36
Speaker
Having said that, imagine that you could actually do that as a germline intervention as well. That means that their children won't have sickle cell either. Is that a bad thing? Yeah, I don't think it is. I'm always a little bit puzzled in these debates because there's a lot of
01:12:53
Speaker
engineering of bacteria, E. coli and so forth, where there's no germline versus non-germline distinction. You make a change and it's going to propagate, right? And it's potentially going to last forever. So I'm not really sure why that distinction seems so important for humans, but we've just kind of said, oh, well, go ahead and play with bacteria,
01:13:18
Speaker
do whatever you want with microbes. And yet we've decided not to allow that for the human genetic code. But I'm inclined to agree: I think the debates will end up being somewhat moot, because someone is going to end up doing this. Whether or not they have the approval of everyone, it's going to happen.
01:13:48
Speaker
And if the benefits are seen, it's going to be very hard to row back from that. Yep, I agree. We're going to clone human beings; that will also happen. You can see it all coming.
01:14:01
Speaker
I'm surprised actually that the pace hasn't been quicker here, given how easy the technology is to use.
Space Exploration and Industries
01:14:08
Speaker
Well, yeah, it's easy to do, but not to produce a desired outcome. There's still a lot of biology we don't understand. So it has moved slowly, not only because of regulation, but because it's hard science. The technology is easy, but the science is hard. Yeah, right. Figuring out what to do, like the right edit to make. Yeah, that makes sense.
01:14:31
Speaker
OK, any others that we're not thinking about enough? You've talked a lot about nanotechnology in the past. Is that something that... Well, the one that has gone through a major discontinuity is commercial space. Musk not only changed the electric vehicle market; he created the space market.
01:14:55
Speaker
We started landing spaceships. That changed everything. My first job as a futurist was actually mission planning for the space shuttle at SRI back in 1973. And it was a complete failure because it was based on science fiction, in effect; that is, the idea that we could do a launch a week as opposed to four launches a year. That was never going to happen.
01:15:20
Speaker
It didn't happen. But Elon changed everything when he started landing spaceships and bringing them back. Now the economics of space have changed really radically, and I think we are opening up the potential of near space for tourism, for better communications, for better science. So I think eventually
01:15:46
Speaker
what we're going to end up doing is asteroid mining. I think that's going to be one of the first useful things we do in space, and eventually we'll create an industrial base in orbit around the Earth and move industry off the planet, because it's polluting and energy-consuming, and out there we've got essentially infinite solar energy and a huge resource base, and we don't care about the pollution. Nobody cares about polluting a bit of vacuum out there.
01:16:12
Speaker
So in a sense, the space age didn't begin with Sputnik. The space age began when Musk landed his spaceships. That's when you really started thinking about space as a real potential base for human operations. That's an interesting thought, yeah.
01:16:40
Speaker
I do wonder what the big use cases of all that capacity, all that potential are going to be. I mean, certainly the communications one is something that just from the industry that I've come from, I can envisage very clearly. And I think the kind of monitoring aspects of that are also really important. The ability to
01:17:02
Speaker
have very regular, frequent images coming from pretty much anywhere on Earth. That could be game-changing for so many people. Planet Labs provides very cheap space imaging of the whole world every day; Bill Marshall, who started it, is a good friend. I think that is absolutely a game-changer for so many things. If you want to understand what's happening in the forests of the world, for example, you have to see them,
01:17:30
Speaker
and we now can. So I think that's it.
01:17:34
Speaker
I do think that's a very important one. And the truth is, with tourism, I think people are going to want to go up and hang out in zero gravity and enjoy the experience of space, watching the Earth go by for a few days: space hotels. I've tried to convince my friend Richard Branson to buy the space station when they deorbit it sometime in the next decade, rather than crash it into the ocean in pieces.
01:18:02
Speaker
Why not turn it into, you know, Virgin Orbital? He's got Virgin Galactic; add Virgin Orbital, the space station. I think it's an ideal location. Well, if that happens, we'll know who planted the seed in his head.
01:18:18
Speaker
And for industry, what sort of things do you think it might make sense to produce in space? I guess the kind of... the gravity, sorry. Yeah, any heavy materials. Basically, anything you want to use in space, you should make in space; you shouldn't lift things into space. And you can even get water and oxygen out of asteroids. So there's no reason why you shouldn't be able to make stuff up there.
01:18:44
Speaker
And if you want to have an orbital hotel, make the steel for the orbital hotel, or the aluminum, or whatever it is you want to use, up there. So there's no reason why we shouldn't have most of our industry off planet for making things that are heavy and polluting, like steel. Yeah, right. So most of the industrial usage would be centered around building up infrastructure in space, probably for space tourism
01:19:14
Speaker
or possibly even developing an industrial hub. But we wouldn't be sending stuff back to Earth, as it were? We might be bringing steel down. Bringing steel down is a lot easier than bringing it up.
01:19:31
Speaker
It's not at all implausible, particularly if we're making things that are relatively lightweight and bringing them down. I do imagine that that will happen, but call that the next 50 years, not the next five. Most of what we're going to do is in near-Earth orbit, within a few thousand miles. We're not going to Mars; maybe we'll do some stuff on the Moon, but there really isn't that much interesting on the Moon and there isn't much interesting on Mars. Our solar system, other than the Earth,
01:19:59
Speaker
is not all that interesting, to be honest. So I don't see a lot of human exploration of deep space beyond that until we can build starships. Right. Do you think it's an important idea nonetheless that we
01:20:12
Speaker
aim to get further than that? Yes, I do. I gave a long talk on it; it was called Scenarios for Starships. And one of the points I made is that if we don't develop starships, we're stuck in our solar system.
01:20:30
Speaker
And our solar system is uninteresting. So our horizons will be fundamentally limited. And I think that's a bad thing for the species. So I do think building a starship and getting out of the solar system is ultimately a good idea. Now, we're talking a century or more from now. But in fact, I'm part of a project called the 100 Year Starship Project.
01:20:52
Speaker
which Long Now launched with NASA Ames about 10 years ago. It's a small community beginning to think about what it will take to actually build a variety of different forms of starships: sleeper ships, multi-generational ships, or even figuring out new physics. So conceptually, the idea that we reach beyond the solar system is a very important one. Yeah, yeah.
01:21:19
Speaker
Yeah, it's not an easy problem. Yeah, immensely tricky. And even if we figure out how to do it, it would be a huge project. Do you think we have the right kind of
01:21:42
Speaker
attitudes yet to be able to undertake such a project? Do we need to get beyond, let's say, the hump of the climate crisis? And is that the point where there might be a change in our thinking?
01:21:53
Speaker
Well, you've linked something that is very important and potentially a very interesting scenario. Look, if the world learns how to cooperate to deal with climate, which we do need to, that very same cooperation gene we implant may be what we need to build, for example, starships to explore the near galaxy and so on.
01:22:16
Speaker
And so moving from a nationalistic self-protective view toward a more global view of how people can collaborate to solve massive problems is not an implausible but challenging
Global Unity and Optimism for the Future
01:22:31
Speaker
scenario. Like just yesterday, I was thinking about how you think about the war in Ukraine and people fighting over little bits of dirt
01:22:39
Speaker
on the surface. That nationalism is so important that Ukrainians are willing to fight and die to protect it, and Russians are willing to fight and die to seize it. Somehow we have not yet transcended the nationalism of protecting bits of turf on the planet and embraced the fact that we all belong to this Earth.
01:23:01
Speaker
A good friend of mine is Rusty Schweickart, the astronaut. He was the Apollo 9 lunar module pilot. In fact, we just honored him a few weeks ago as one of the founders of B612, the Asteroid Institute. And when Rusty was in space, he was one of the first astronauts to have this kind of transcendent experience of the Earth as a unified entity. He said, there are no borders that I see from space; all I see is the flow of life all over the planet. And he had a kind of
01:23:28
Speaker
truly cosmic experience of the unity of life on the planet that changed him forever, and it shaped what he chose to do with the rest of his life. And I think until we get that kind of sense of shared fate, that we all share the same fate on this planet, nationalism will continue to get in our way. What message would you like to leave people with? Oh, unequivocally, to be optimistic about the future.
01:23:56
Speaker
There's an enormous amount of pessimism around, whether it's about economics or war and peace or climate change and so on. Almost every movie or book about the future is dark. Whereas I look at the sweep of human history and I see immense progress. Look, I was born in a refugee camp in Germany in 1946, to concentration camp survivors, and I look at what's happened in my life over the last 75 years.
01:24:24
Speaker
And it's transformative when I look at how people lived when I was a child versus how they live today and the opportunities in front of them. Are the challenges of the big issues, climate change and economics and geopolitics, real? Absolutely. But are they worse than the challenges of the 1930s and Nazism and so on?
01:24:42
Speaker
I don't think so. And humanity rose up to deal with those kinds of things. We solved the economic problem. We defeated the fascists. The Cold War ended without nuclear war. And yeah, there might be a new Cold War with China, and we probably will have some tensions over the next few decades. But what I see is almost unrelenting progress. Over the last 30 years, billions of people have climbed out of poverty, literally billions.
01:25:09
Speaker
We have a couple more billion to go, so we haven't solved all the problems on the planet. But when I look at the rate of human progress, the science and technology we have in front of us, and the level of cooperation, despite all the annoyances globally, I'm actually very optimistic. And I think people should be optimistic about the future. I think their kids are going to be better off than they are. Yeah, I hope so. And I think you're right.
01:25:35
Speaker
The evidence, at least from the long arc of time, is pointing that way. We have problems. We've always had problems. Some of our problems now are bigger than ones we've faced in the past. But on the other hand, I think the technology that we have now is clearly superior to anything that we've had in the past as well. Exactly. Exactly. So that's my message.
01:25:59
Speaker
Well, I think that's a very fine message to end on. Peter Schwartz, thank you so much. This has been really informative and I would recommend that everyone check out the Long Now Foundation and your book, The Art of the Long View as well. My pleasure, happy to talk with you.
01:26:34
Speaker
Thank you for listening. This is a quick note to say that I'm going to experiment with moving this to a fortnightly cadence, so one podcast every two weeks, just so I can see how it feels to have a bit more time to polish things and prepare for interviews. So yeah, I hope that sounds good, and catch you next time.