
Balancing the Risks of Future Technologies With Andrew Maynard and Jack Stilgoe

Future of Life Institute Podcast
What does it mean for technology to “get it right,” and why do tech companies ignore long-term risks in their research? How can we balance near-term and long-term AI risks? And as tech companies become increasingly powerful, how can we ensure that the public has a say in determining our collective future? To discuss how we can best prepare for societal risks, Ariel spoke with Andrew Maynard and Jack Stilgoe on this month’s podcast. Andrew directs the Risk Innovation Lab in the Arizona State University School for the Future of Innovation in Society, where his work focuses on exploring how emerging and converging technologies can be developed and used responsibly within an increasingly complex world. Jack is a senior lecturer in science and technology studies at University College London, where he works on science and innovation policy with a particular interest in emerging technologies.
Transcript

Introduction to Existential Risks and FLI's Mission

00:00:07
Speaker
I'm Ariel Conn with the Future of Life Institute. Whenever people ask me what FLI does, I explain that we try to mitigate existential risks. That is, we're basically trying to make sure that society doesn't accidentally kill itself with technology. Almost to a person, the response is, oh, I'm glad someone is working on that. But that seems to be about where the agreement on risk ends. Once we start getting into the details of what efforts should be made, who should do the work, how much money should be spent,
00:00:35
Speaker
suddenly people begin to develop very different opinions about how risky something is and what we should do about it. Some of the most intense debates can even come from people who agree on the ultimate risks, but not on the means of alleviating the threat. To talk about why this happens and what, if anything, we can do to get people in more agreement, I have with me Andrew Maynard and Jack Stilgoe.

Expert Perspectives on Risk: Andrew Maynard & Jack Stilgoe

00:00:55
Speaker
Andrew directs the Risk Innovation Lab in the Arizona State University School for the Future of Innovation in Society.
00:01:02
Speaker
He's a physicist by training and, in his words, spent more years than he cares to remember studying airborne particles. These days, his work focuses on exploring how emerging and converging technologies can be developed and used responsibly within an increasingly complex world. Jack is a senior lecturer in science and technology studies at University College London, where he works on science and innovation policy with a particular interest in emerging technologies. Jack and Andrew, thank you so much for joining us today.
00:01:31
Speaker
Great to be here. Great to be here. So before we get into anything else, I was hoping you both could just first talk about how you define what risk is. I think the word means something very different for a first-time parent versus a scientist who's developing some life-saving medical breakthrough that could have some negative side effects versus a rock climber. So if you could just explain how you think of risk.
00:01:58
Speaker
So yeah, let me dive in first, saying that not only have I studied airborne particles for more years than I care to remember, but I've also taught graduate and undergraduate students about risk for more years than I care to remember. So the official definition of risk is that it looks at the potential of something to cause harm, but it also looks at the probability. So typically, say you're looking at an exposure to a chemical,

Understanding and Broadening the Scope of Risk

00:02:25
Speaker
Risk is all about the hazardous nature of that chemical, its potential to cause some sort of damage to the environment or the human body, but then exposure that translates that potential into some sort of probability.
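A common textbook shorthand for the conventional framing Andrew sketches here (a paraphrase for clarity, not his exact wording) is:

Risk ≈ Hazard (how severe the harm could be) × Exposure (how likely that harm is to actually occur)

The broader definition he moves to next relaxes both terms, treating risk as a threat to anything of value, even when the probabilities can't be pinned down.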
00:02:37
Speaker
And that is typically how we think about risk when we're looking at regulating things. I actually think about risks slightly differently because that concept of risk runs out of steam really fast, especially when you're dealing with uncertainties, existential risk, and perceptions about risk and when people are trying to make hard decisions and they can't work out how to make sense of the information they're getting. So I tend to think of risk as a threat to something that's important or a threat to something of value.
00:03:06
Speaker
And that thing of value might be your health. It might be the environment, but it might be your job. It might be your sense of purpose or your sense of identity or your beliefs or your religion or your politics or your worldview. And as soon as we start thinking about risk in that sense, it becomes much broader, much more complex, but it also allows us to explore that intersection between different communities and their different ideas about what's important and what's worth protecting.
00:03:36
Speaker
Jack, did you want to add anything to that?
00:03:39
Speaker
So I have very little to add to what Andrew just said, which was a beautiful discussion, I guess, of the conventional definition of risk. I would draw attention to all of those things that are incalculable. And when we are dealing with new technologies, they are often things to which we cannot assign probabilities and we don't know very much about what the likely outcomes are going to be. I think
00:04:06
Speaker
there is also a question of what isn't captured when we talk about risk. So it's clear to me that when we talk about what technology does in the world, that not all of the impacts of technology might be considered risk impacts.
00:04:22
Speaker
So as well as the risks that it is impossible for us to be able to calculate and when we have new technologies, we typically know almost nothing about either the probabilities of things happening or the range of possible outcomes. I'd say that we should also pay attention to all the things that I guess are not to do with technology going wrong but are also to do with technology going right.
00:04:50
Speaker
So, technologies don't just create new risks, they also benefit some people more than others, and they can create huge inequalities. I mean, if they're governed well, they can also help close inequalities. But if we just focus on risk, then we lose some of those other concerns as well.
00:05:12
Speaker
So Jack, this obviously really intersects with my work, because to me an inequality is a threat to something that's important to someone.

Technology, Inequality, and Power Dynamics

00:05:21
Speaker
Do you have any specific examples of what you think about when you think about inequalities or equality gaps?
00:05:29
Speaker
Well, I think before we get into examples, the important thing is to bear in mind a trend with technology, which is that technology tends to benefit the powerful. And that's an overall trend before we talk about any specifics, which quite often goes against the rhetoric of technological change because often
00:05:50
Speaker
technologies are sold as being emancipatory and helping the worst off in society, which they do, but typically they also help the better off even more. So there's that general question. I think in the specific we can talk about, well, what sorts of technologies do close inequities and which tend to exacerbate inequities? But it seems to me that just defining that as a social risk isn't quite getting there.
00:06:19
Speaker
So I guess this sort of moves into my next question because I would consider increasing inequality to be a risk. So can you guys talk a bit about why it's so hard to get agreement on what we actually define as a risk?
00:06:35
Speaker
So one of the things that I find is people very quickly slip into defining risk in very convenient ways. So if you have a company or an organization that really wants to do something, and that something may be anything from making a bucketload of money to changing the world in the ways they think are good, there's a tendency for them to define risk in ways that benefit them.
00:07:04
Speaker
So for instance, I'm going to use a hypothetical, but if you are the maker of an incredibly expensive drug and you work out that that drug is going to be beneficial in certain ways with minimal side effects, but it's only going to be available to a very few, very rich people, you will easily define risk in terms of the things that your drug does not do. So you can claim with confidence that this is a risk-free or a low-risk product.
00:07:34
Speaker
But that's an approach where you work out where the big risks are with your product and you bury them, and you focus on the things where you think that there is not a risk with your product. And that sort of extends across many, many different areas: this tendency to bury the big risks associated with a new technology and highlight the low risks to make your tech look much better than it is, so you can reach the aims that you're trying to achieve.
00:08:01
Speaker
I quite agree, Andrew. I think what tends to happen is that the definition of risk, if you like, gets socialised as being that stuff that society is allowed to think about, whereas the benefits are sort of privatised in that the innovators are there to define who benefits and in what ways.
00:08:24
Speaker
I would agree, though it also sort of gets quite complex in terms of the social dialogue around that and who actually is part of those conversations and who has a say in those conversations. And so to get back to your point, Ariel, I think there are a lot of organizations and individuals that want to do what they think is the right thing, but they also want the ability to decide for themselves what the right thing is rather than listening to other people.
00:08:54
Speaker
How do we address that?

Democratizing Risk Discussions

00:08:58
Speaker
It's a knotty problem, and it has its roots in how we are as people and as a society, how we've evolved. But I think there are a number of ways forwards towards beginning to pick apart the problem. And a lot of those are associated with work that is carried out in the social sciences and even the humanities
00:09:20
Speaker
around how do you make these processes more inclusive? How do you bring more people to the table? How do you begin listening to different perspectives, different sets of values and incorporating them into decisions rather than marginalizing groups that are inconvenient?
00:09:35
Speaker
I think that's right. I mean, ultimately, if you regard these things as legitimately political discussions rather than just technical discussions, then the solution is to democratize them, and to try to wrest control over the direction of technology away from just the innovators and to see that as the subject of proper democratic conversation.
00:09:59
Speaker
And there are some very practical things here. And this is where Jack and I might actually diverge in our perspectives. But from a purely business sense, if you're trying to develop a new product or a new technology and get it to market, the last thing you can afford to do is ignore the nature of the population, the society that you're trying to put that technology into. Because if you do, you're going to run up against roadblocks where people decide they either don't like the tech,
00:10:27
Speaker
or they don't like the way that you've made decisions around the technology, or they don't like the way that you've implemented it. So from a business perspective, taking a long-term strategy, it makes far more sense to engage with these different communities and develop a dialogue around them so you understand the nature of the landscape that you're developing a technology into. And you can see ways of partnering with communities to make sure that that technology really does have a broad, beneficial impact.
00:10:55
Speaker
Why do you think companies resist doing that? Is it just effort, or are there other reasons that they would resist?
00:11:04
Speaker
I think we've had decades, centuries of training that says you don't ask awkward questions, because they potentially lead you to not be able to do what you want to do. So it's partly the mindset or the mentality around innovation, but it's also hard work. It takes a lot of effort and it actually takes quite a lot of humility as well.
00:11:28
Speaker
There's also a dynamic, which is that there's a sort of well-defined law in technological change, which is that we overestimate the effect of technology in the short term and underestimate the effect of technology in the long term. Given that companies and innovators have to make short time horizon decisions, often they don't have the capacity to take on board these big sort of world changing implications of technology.
00:11:56
Speaker
So if you look at something like the motor car, it would have been inconceivable for Henry Ford to have imagined the world in which his technology would exist in 50 years' time.

Predicting Long-term Impacts and Public Engagement

00:12:11
Speaker
Even though we know that the motor car has led to the reshaping of large parts of America, it's led to an absolutely catastrophic level of public health risk.
00:12:24
Speaker
while also bringing about clear benefits of mobility. But those are big, long-term changes that evolve very slowly, far slower than any company could appreciate.
00:12:36
Speaker
So can I play devil's advocate here, Jack, and ask you a question which I'm sure you must have been asked before. With hindsight, should Henry Ford have developed his production line process differently to avoid some of the risks we now see or some of the impacts we now see of motor vehicles?
00:12:57
Speaker
Well, I think you're right to say that with hindsight it's really hard to see what he might have done differently, because the point is the changes that I was talking about are systemic ones, with responsibilities shared across large parts of the system. Now, could we have done better at anticipating some of those things? Yes, I think we could have done, and I think had
00:13:20
Speaker
motor car manufacturers talked to regulators and civil society at the time, they could have anticipated some of those things. Because there are also barriers that stop innovators from anticipating, right? There are actually things that force innovators' time horizons to narrow.
00:13:41
Speaker
Yeah. So actually, that's one of the points that really interests me. It's not a case of do we, don't we with a certain technology, but could we do things better, so we see more of the longer-term benefits and fewer of the hurdles that maybe we could have avoided if we'd been a little smarter from the get-go.
00:14:04
Speaker
How well do you think we can really anticipate that, though? When you say being a little smarter from the get-go, I'm sure there are definitely things that we can always do that are smarter, but how much do you think we can actually anticipate? Well, the basic answer is very, very little indeed. The one thing that we know about anticipating the future is that we're always going to get it wrong.
00:14:26
Speaker
But I think that we can put plausible bounds around likely things that are going to happen. So simply from what we know about how people make decisions and the evidence around that, we know that if you ignore certain pieces of information, certain evidence, you're going to make worse decisions in terms of projecting or predicting future pathways than if you're actually open to evaluating different types of evidence. And by evidence, I'm not just meaning the scientific evidence,
00:14:56
Speaker
But I'm also thinking about what people believe or hold as valuable within society and what motivates them to do certain things and react in certain ways. All of that is important evidence in terms of getting a sense of what the boundaries are of a future trajectory.
00:15:15
Speaker
And we should remember, Andrew, that the job of anticipation is not to try to get things right or wrong. So yes, we will always get our predictions wrong, but if anticipation is about preparing us for the future rather than predicting the future, then rightness or wrongness isn't really the target.
00:15:34
Speaker
And instead, I would draw attention to the history of cases in which there has been willful ignorance of particular perspectives or particular evidence that has only been realized later, which, as you know better than anybody, includes evidence of public health risks that has been swept under the carpet. And we have to look first at the sort of incentives that prompt innovators to overlook that evidence.

Learning from Technological Blunders

00:16:03
Speaker
Yeah, I think that's so important. So it's worthwhile bringing up the Late Lessons from Early Warnings reports that came out of Europe a few years ago, which are a series of case studies of previous technological innovations over the last 100 years or so, looking at where innovators and companies and even regulators either missed important early warnings or, as you said, willfully ignored them.
00:16:31
Speaker
And that led to far greater adverse impacts than there really should have been. I think there are a lot of lessons to be learned from those in terms of how we avoid those earlier mistakes.
00:16:41
Speaker
So I'd like to take that and move into some more specific examples now.

Beyond Safety: The Impact of Self-Driving Cars

00:16:48
Speaker
Jack, I know you're interested in self-driving vehicles. That was a topic that came up on the last podcast. We had a couple of psychologists talking about things like the trolley problem. And I know that's a touchy subject in the auto industry. So I was curious, how do we start applying that to these new technologies that will probably be literally on the road soon?
00:17:13
Speaker
Well, my own sense is that when it comes to self-driving cars, it is, as Andrew was saying earlier, it's extremely convenient for innovators to define risks in particular ways that suit their own ambitions. And I think you see this in the way that the self-driving cars debate is playing out.
00:17:31
Speaker
And in part that's because the debate is a largely American one and it emanates from an American car culture. Here in Europe we see a very different approach to transport, with a very different emerging debate.
00:17:49
Speaker
So take the trolley problem, a classic example of a risk issue where engineers very conveniently are able to treat it as an algorithmic challenge: how do we maximize public benefit and reduce public risk? Here in Europe, where our transport systems are complicated, multimodal, where our cities are complicated, messy things,
00:18:18
Speaker
The self-driving car risks start to expand pretty substantially in all sorts of dimensions. So the sorts of concerns that I would see for the future of self-driving cars relate more to what are sometimes called second order consequences. What sorts of worlds are these technologies likely to enable?
00:18:39
Speaker
What opportunities are they likely to constrain? And I think that's a far more important debate than the debate about how many lives a self-driving car will either save or take in its algorithmic decision making.
00:18:53
Speaker
So I think, Jack, you have referred to the trolley problem as trolleys and follies. And one of the things I really grapple with, and I think it's very similar to what you were saying, is that the trolley problem seems to be a false or a misleading articulation of risk. It's something which is philosophical and hypothetical, but actually doesn't seem to bear much relation to the very real challenges and opportunities that we're grappling with with these technologies.
00:19:23
Speaker
Yeah, I think that's absolutely right. It's an extremely convenient issue for engineers and philosophers to talk amongst themselves with. But what it doesn't get is any form of democratization of a self-driving future, which I guess is my interest.
00:19:39
Speaker
Yes. Now, of course, the really interesting thing here is, and we've talked about this, I get really excited about the self-driving vehicle technologies, partly living here in Tempe, where Google and Uber and various other companies are testing them on the road now.
00:19:55
Speaker
But you have quite a different perspective in terms of how fast we're going with the technology and how little thought there is into the longer term sort of social dynamic and consequences. But to put my full cards on the table, I can't wait for better technologies in this area.
00:20:13
Speaker
Well, without wishing to be too congenial, I am also excited about the potential for the technology. But what I know about past technology suggests that it may well end up
00:20:28
Speaker
gloriously suboptimal, right? You see, I'm interested in a future involving self-driving cars that might actually realize some of the enormous benefits here, the enormous benefits to, for example, bring accessibility to people who currently can't drive.
00:20:44
Speaker
The enormous benefits to public safety, to congestion, but making that work will not just involve a repetition of current dynamics of technological change. I think current ownership models in the US, current modes of transport in the US, just are not conducive to making that happen.
00:21:04
Speaker
So I would love to see governments taking control of this and actually making it work, in the same way as, in the past, governments have taken control of transport and built public-value transport systems out of it. Yeah, if governments are taking control of this and having it done right, what does it mean to have this developed the right way, in a way that we're not seeing right now with the manufacturers?
00:21:31
Speaker
I think the first thing that I don't see any of within the self-driving car debate, because I just think we're at too early a stage, is an articulation of what we want from self-driving cars. We have the Google vision, the Waymo vision of the benefits of self-driving cars, which is largely about public safety. Fine, but no consideration of what it would take to get that right.
00:21:57
Speaker
And I think that's going to look very different. I think to an extent Tempe is an easy case because the roads in Arizona are extremely well organized. It's sunny. Pedestrians behave themselves. But what you're not going to be able to do is take that technology and transport it to central London and expect it to do the same job.
00:22:19
Speaker
So some understanding of desirable systems across different places is really important. And that, I'm afraid, does mean sharing control between the innovators and the people who have responsibility for public safety and for public transport and for public space.
00:22:38
Speaker
So to me, this is really important because even though most people in this field and other similar fields are doing it for what they claim is to be for future benefits and the public good, there's a huge gap between good intentions of doing the right thing and actually being able to achieve something positive for society.
00:22:58
Speaker
And I think the danger is that good intentions go bad very fast if you don't have the right processes and structures in place to translate them into something that benefits society. And to do that, you've got to have partnerships and engagement with agencies and authorities that have oversight over these technologies, but also the communities and the people that are either going to be impacted by them or benefit by them.
00:23:24
Speaker
I think that's right. I think just letting the benefits as stated by the innovators speak for themselves hasn't worked in the past and it won't work here, right? We have to allow some sort of democratic discussion about that. All right. So we've been talking about some technology that I think most people think is probably coming pretty soon. Certainly we're already starting to see
00:23:51
Speaker
testing of autonomous vehicles on the roads and whatnot. I want to move forward in the future to more advanced technology, looking at more advanced artificial intelligence, maybe even super intelligence.

Balancing Short-term and Long-term AI Risks

00:24:05
Speaker
How do we address risks that are associated with that, when a large number of researchers don't even think this technology can be developed, or think that if it is developed, it's still hundreds of years away? How do you address these really, really big unknowns and uncertainties?
00:24:21
Speaker
That's a huge question. And so I'm speaking here as something of a cynic of some of the projections of superintelligence. But I think you've got to develop a balance between near and mid-term risks, but at the same time work out how you take early action on trajectories so you're less likely to see the emergence of those longer term existential risks.
00:24:46
Speaker
One of the things that actually really concerns me here is if you become too focused on some of the highly speculative existential risks, you end up missing things which could be catastrophic in a smaller sense in the near to mid-term. So for instance, pouring millions upon millions of dollars into solving a hypothetical problem around superintelligence and the threat to humanity sometime in the future.
00:25:13
Speaker
at the expense of looking at nearer term things such as algorithmic bias, such as autonomous decision making that cuts people out of the loop and a whole number of other things, is a risk balance that doesn't make sense to me. Somehow you've got to deal with these emerging issues, but in a way which is sophisticated enough that you're not setting yourself up for problems in the future.
00:25:37
Speaker
The thing I would add, I completely agree Andrew, I think getting that balance right is crucial and I agree with your assessment that that balance is far too much at the moment in the direction of the speculative and long term. And one of the reasons why it is, is because that's an extremely interesting set of engineering challenges. So I think the question would be,
00:25:59
Speaker
on whose shoulders does the responsibility lie for acting once you recognize threats or risks like that. And typically what you find when a community of scientists gathers to assess risks is that they frame the issue in ways that lead to scientific or technical solutions. And it's telling, I think, that in the discussion about superintelligence the
00:26:29
Speaker
answer either in the foreground or in the background is normally more AI, not less AI, and the answer is normally to be delivered by engineers rather than to be governed by politicians. That said, I think there's sort of cause for optimism if you look at the recent campaign around autonomous weapons, in that that would seem to be a clear recognition of
00:26:53
Speaker
a technologically mediated issue where the necessary action is not on the part of the innovators themselves but on all the people who are in control of our armed forces.
00:27:07
Speaker
So one of the challenges here, I think, is one of control. And I think you're exactly right, Jack. And I should clarify that even though there is a lot of discussion around speculative existential risks, there is also a lot of action on nearer term issues such as the lethal autonomous weapons.
00:27:25
Speaker
But one of the things that I've been particularly struck with in conversations is the fear amongst technologists in particular of losing control over the technology and the narrative. So I've had conversations where people have said that they're really worried about the potential downsides, the potential risks of where artificial intelligence is going.
00:27:49
Speaker
But they're convinced that they can solve those problems without telling anybody else about them. And they're scared that if they tell a broader public about those risks, that they'll be inhibited in doing the research and the development that they really want to do. And I think that really comes down to control, not wanting to relinquish control over what you want to do with

Effective Risk Communication and Media Influence

00:28:08
Speaker
the technology. But I think that there has got to be some relinquishment there if we're going to have responsible development of these technologies.
00:28:15
Speaker
that really focuses on how they could impact people both in the short as well as the long term, and how as a society we find pathways forwards. Andrew, I'm really glad you brought that up, because I'm not convinced by this idea that if we tell the public what the risks are, then suddenly the researchers won't be able to do the research they want. Do you see that as a real risk for researchers, or do you think that's a little
00:28:41
Speaker
So I think there is a risk there, but it's rather complex. So most of the time, the public actually don't care about these things. There are one or two examples. So genetically modified organisms is the one that always comes up, but that is a very unique and very distinct example.
00:29:00
Speaker
Most of the time if you talk broadly about what's happening with the new technology, people will say that's interesting and get on with their lives. So there's much less risk there about talking about it than I think people realize. The other thing though is even if there is a risk of people saying, hold on a minute, we don't like what's happening here.
00:29:22
Speaker
better to have that feedback sooner rather than later because the reality is people are going to find out what's happening and if they discover as a company or a research agency or a scientific group that you've been doing things that are dangerous and you haven't been telling them about it, when they find out after the fact people get mad and that's where things get really sort of messy.
00:29:43
Speaker
So it's far better to engage earlier and often, and sometimes that does mean you're going to have to take advice and maybe change the direction that you go in, but far better to do that earlier in the process. Jack, did you have anything to add there? No, I fear Andrew and I are agreeing too much. Sorry. Let me try and find something really controversial to say that you're going to sort of scream at me about. I think you're probably the wrong person to do that, Andrew. I think maybe we could get Elon Musk on the phone and then
00:30:14
Speaker
Yeah, that's interesting. So I'm not just thinking about Elon, but you've got a whole group of people in the technology sphere here who are very clearly trying to do what they think is the right thing. They're not in it primarily
00:30:27
Speaker
for fame and money, but they're in it because they believe that something has to change to build a beneficial future. The challenge is, these technologists, if they don't realize the messiness of working with people in society, and they think just in terms of technological solutions, they're going to hit roadblocks that they can't get over.
00:30:49
Speaker
So this to me is why it's really important that you've got to have the conversations. You've got to take the risk to talk about where things are going with a broader population. And by risk, I mean you've got to risk your vision having to be pulled back a little bit so it's more successful in the long term.
00:31:06
Speaker
So actually, I mean, you mentioned Elon Musk, and he says a lot of things that get picked up by the media and perceived as fear-mongering. But I've found a lot of times, and full disclosure, he supports us, that when I go back and look at what he actually said in its complete, unedited form, taken within context, it's not usually as extreme, and it seems a lot more reasonable. So I was hoping you could both touch on the impact of media as well,
00:31:35
Speaker
and how that's driving the discussion. Well, I think it's actually less about media because I think
00:31:42
Speaker
I mean, blaming the media is always the convenient thing to do; they're the convenient target. I think the question is really about the culture in which Elon Musk sits and in which his views are received, which is extremely technologically utopian and which wants to believe that there are
00:32:09
Speaker
simple technological solutions to some of our most pressing problems. And in that culture it is understandable that seemingly seductive ideas, whether they're about artificial intelligence or about new transport systems, take hold. I would love there to be a more sceptical attitude, so that
00:32:30
Speaker
when those sorts of claims are made, just as when any sort of political claim is made, they are scrutinised and become the starting point for a vigorous debate about the world in which we want to live. Because I think that is exactly what is missing from our current technological discourse.
00:32:50
Speaker
I would also say with the media, the media is obviously a product of society. We are titillated by sort of extreme scary scenarios and the media, it's a medium through which that actually happens. So I mean, I work a lot with journalists and I would say I've had very few experiences with being misrepresented and misquoted where it wasn't my fault in the first place.
00:33:18
Speaker
So I think we've got to think of two things when we think of media coverage. First of all, we've got to get smarter in how we actually communicate, and by we I mean the people that feel we've got something to say here. We've got to work out how to communicate in a way that makes sense with the journalists and the media that we're communicating through.
00:33:37
Speaker
But we've also got to realize that even though we might be outraged by something we see where we think it's a misrepresentation, that usually doesn't get as much traction in society as we think it does. So we've got to be a little bit more laid back about how uptight we get about how we see things reported. Is there anything else that you think is important to add that we haven't had a chance to discuss as much? I don't think so. No, I think that was quite nice coverage of

Challenging Questions for Future Progress

00:34:07
Speaker
Yeah, I'm just sorry that Andrew and I agree on so much. So I, yeah, I would actually just sort of wrap things up. So yes, there has been a lot of agreement. But actually, and this is an important thing, it's because most people, including people that are often portrayed as just being naysayers,
00:34:28
Speaker
try to ask difficult questions so we can actually build a better future through technology and through innovation in all its forms. And I think it's really important to realize that just because somebody asks difficult questions doesn't mean they're trying to stop progress; it means they're trying to make sure that that progress is better for everybody. Well, I think that sounds like a nice note to end on. Thank you both so much for joining us today. Thanks very much. Thanks, everyone.
00:34:57
Speaker
To learn more, visit futureoflife.org.