Introduction to the Podcast and Future of Life Institute
00:00:04
Speaker
Welcome to the Future of Life Institute podcast. I'm Lucas Perry. Today we have a conversation with Robert de Neufville about superforecasting.
Purpose and Impact of the Future of Life Award
00:00:14
Speaker
But before I get more into the episode, I have two items I'd like to discuss. The first is that the Future of Life Institute is looking for the 2020 recipient of the Future of Life Award.
00:00:29
Speaker
For those not familiar, the Future of Life Award is a $50,000 prize that we give out to an individual who, without having received much recognition at the time of their actions, has helped to make the world dramatically better than it may have been otherwise.
00:00:49
Speaker
The first two recipients were Vasili Arkhipov and Stanislav Petrov, two heroes of the nuclear age. Both took actions at great personal risk to possibly prevent an all-out nuclear war. And the third recipient was Dr. Matthew Meselson, who spearheaded the international ban on bioweapons.
00:01:12
Speaker
Right now, we're not sure who to give the 2020 Future of Life Award to.
Call to Action: Nominate Unsung Heroes
00:01:18
Speaker
That's where you come in. If you know of an unsung hero who has helped to avoid global catastrophic disaster, or who has done incredible work to ensure a beneficial future of life, please head over to the Future of Life Award page and submit a candidate for consideration.
00:01:39
Speaker
The link for that page is on the page for this podcast, or in the description wherever you might be listening. You can also just search for it directly. If your candidate is chosen, you will receive $3,000 as a token of our appreciation.
00:01:55
Speaker
We're also incentivizing the search via MIT's successful red balloon strategy: the first to nominate the winner gets $3,000 as mentioned, but there will also be tiered payouts to the person who invited the winning nominator, and so on. You can find details about that on the page.
Podcast Feedback Survey
00:02:17
Speaker
The second item is that there is a new survey that I wrote about the Future of Life Institute and AI alignment podcasts. It's been about a year since our last survey, and that one was super helpful for me understanding what's going well, what's not, and how to improve.
00:02:36
Speaker
I have some new questions this time around and would love to hear from everyone about possible changes to the introductions, editing, content, and topics covered. So if you have any feedback, good or bad, you can head over to the SurveyMonkey poll in the description of wherever you might find this podcast, or on the page for this podcast. You can answer as many or as few of the questions as you'd like, and it goes a long way toward helping me gain perspective about the podcast, which is often hard to do from my end because I'm so close to it.
Encouraging Listener Engagement
00:03:06
Speaker
And if you find the content and subject matter of this podcast to be important and beneficial, consider sharing it with friends, subscribing on Apple podcasts, Spotify, or whatever your preferred listening platform is, and leaving us a review. It's really helpful for getting information on technological risk and the future of life out to more people.
00:03:32
Speaker
Regarding today's episode, I just want to provide a little bit of context.
Probability and Risk Analysis
00:03:36
Speaker
The foundation of risk analysis has to do with probabilities. We use these probabilities and the predicted value lost if certain risks occur to calculate or estimate expected value.
00:03:50
Speaker
This in turn helps us to prioritize risk mitigation efforts where they're truly needed. So it's important that we're able to make accurate predictions about the likelihood of future events and risks so that we can take the appropriate action to mitigate them.
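As a toy illustration of that expected value framing (the probabilities and losses below are invented): multiply each risk's probability by the value that would be lost if it occurred, and compare.

```python
# Toy expected-loss comparison for prioritizing mitigation effort.
# Probabilities and loss values are invented for illustration.

risks = {
    "risk A": {"probability": 0.02, "loss": 1_000},  # rare but very costly
    "risk B": {"probability": 0.40, "loss": 10},     # common but cheap
}

for name, r in risks.items():
    expected_loss = r["probability"] * r["loss"]
    print(f"{name}: expected loss = {expected_loss:g}")

# risk A: 20, risk B: 4 -- the rarer risk dominates because the stakes are larger.
```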
Robert de Neufville's Background and Super Forecasting
00:04:08
Speaker
This is where super forecasting comes in.
00:04:11
Speaker
Robert de Neufville is a researcher, forecaster, and futurist with degrees in government and political science from Harvard and Berkeley. He works particularly on the risks of catastrophes that might threaten human civilization. He is also a superforecaster,
00:04:28
Speaker
since he was among the top 2% of participants in IARPA's Good Judgment forecasting tournament. He has taught international relations, comparative politics, and political theory at Berkeley and San Francisco State. He has written about politics for The Economist, The New Republic, the Washington Monthly, and Big Think. And with that, here's my conversation with Robert de Neufville on superforecasting.
00:04:57
Speaker
All right, Robert, thanks so much for coming on the podcast. It's great to be here. Let's just start off real simply here.
Origins of Super Forecasting
00:05:03
Speaker
What is superforecasting? Say you meet someone, or a friend or family member of yours asks you what you do for work. How do you explain what superforecasting is? I just say that I do some forecasting. People understand what forecasting is. They may not understand specifically the way I do it.
00:05:18
Speaker
I don't love using superforecasting as a noun. You know, there's the book, Superforecasting; it's a good book. And it's kind of great branding for Good Judgment, the company. But it's just forecasting, right? And hopefully, I'm good at it. And there are other people who are good at it. We have used different techniques. But it's a little bit like an NBA player saying that they play super basketball; it's still basketball. But what I tell people for background is that the US intelligence community had this forecasting competition, basically just to see if anyone could meaningfully forecast the future.
00:05:48
Speaker
Because it turns out one of the things that we've seen in the past is that people who supposedly have expertise in subjects don't tend to be very good at estimating probabilities that things will happen. So the question was, can anyone do that? And it turns out that for the most part, people can't, but a small subset of people in the tournament were consistently more accurate than the rest of the people.
00:06:08
Speaker
And just using open source information, we were able to decisively beat subject matter experts, which actually is not a high bar; they don't do very well. And we were also able to beat intelligence community analysts. We didn't originally know we were going up against them, but we're talking about forecasters in the intelligence community who had access to classified information we didn't have access to. We were basically just using Google.
00:06:30
Speaker
And one of the stats that we got later was that as a group, we were more accurate 300 days ahead of a question being resolved than others were just 100 days ahead. As far as what makes the technique of super forecasting sort of fundamentally distinct, I think one of the things is that we have a system for scoring our accuracy.
00:06:50
Speaker
A lot of times when people think about forecasting, people just make pronouncements. This thing will happen or it won't happen. And then there's no real great way of checking whether they were right. And they can also often after the fact, explain away their forecast. But we make probabilistic predictions and then we use a mathematical formula that weather forecasters have used to score them. And then we can see whether we're doing well or not well. We can evaluate and say, hey, look.
00:07:12
Speaker
we actually outperform these other people in this way. And we can also then try to improve our forecasting when we don't do well, ask ourselves why and try to improve it.
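The scoring formula he alludes to, borrowed from weather forecasting, is the Brier score. A minimal sketch of one common form of it, with made-up forecasts and outcomes (Good Judgment's exact scoring convention may differ):

```python
# Brier score: mean squared difference between forecast probabilities and
# what actually happened (1 = happened, 0 = didn't). Lower is better.
# Forecasts and outcomes below are made up for illustration.

def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.90, 0.20, 0.60, 0.15]  # probabilities assigned to "yes"
outcomes  = [1,    0,    1,    0]     # what actually happened

print(brier_score(forecasts, outcomes))  # ~0.058; always saying 50% would score 0.25
```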
Traits of Successful Super Forecasters
00:07:21
Speaker
So that's basically how I explain it.
00:07:23
Speaker
All right, so can you give me a better understanding here about who we is? You're saying that the key point where this started was this military competition basically attempting to make predictions about the future or the outcome of certain events. What are the academic and intellectual foundations of super forecasting? What subject areas would one study, or what fields did super forecasters come from? How was this all germinated and seeded prior to this competition?
00:07:52
Speaker
It actually was the intelligence community, although I think military intelligence participated in this. But I mean, I didn't study to be a forecaster, and I think most of us didn't. I don't know if there really has been a formal course of study that would lead you to be a forecaster. People just learned subject matter and then applied that in some way. There must be some training that people had gotten in the past, but I don't know about it.
00:08:14
Speaker
There was a famous study by Phil Tetlock, I think in the 90s it came out, a book called Expert Political Judgment, and he found essentially that experts were not good at this. But what he did find, he made a distinction between foxes and hedgehogs, you might have heard. Hedgehogs are people that have one way of thinking about things, one system, one ideology, and they apply it to every question, just like the hedgehog has one trick and it's its spines.
00:08:40
Speaker
Hedgehogs didn't do well. If you were a Marxist, or an equally dyed-in-the-wool Milton Friedman capitalist, and you applied that way of thinking to every problem, you tended not to do as well at forecasting. But there's this other group of people that he found did a little bit better, and he called them foxes, and foxes are tricky. They have all sorts of different approaches. They don't just come in with some dogmatic ideology. They look at things from a lot of different angles.
00:09:05
Speaker
So that was sort of the initial research that inspired him. Now, there were other people that were talking about this, but it was ultimately Phil Tetlock and Barbara Mellers' group that outperformed everyone else and looked for people that were good at forecasting, and they put them together in teams and aggregated their forecasts with algorithmic magic.
00:09:23
Speaker
We had a variety of different backgrounds. If you saw any of the press initially, the big story that came out in the press was that we were just regular people. There was a lot of talk about so-and-so was a housewife. And that's true. We weren't people that had a reputation for being great pundits or anything. That's totally true. I think that was a little bit overblown, though, because it made it sound like so-and-so was a housewife and no one knew that she had this skill, otherwise she was completely unremarkable.
00:09:50
Speaker
In fact, superforecasters as a group tended to be highly educated with advanced degrees. They tended to have varied backgrounds. They lived in a bunch of different countries. The thing that correlates most with forecasting ability seems to be basically intelligence, performing well on intelligence tests. And also I should say that a lot of very smart people aren't good forecasters. Just being smart isn't enough.
00:10:12
Speaker
But that's one of the strongest predictors of forecasting ability.
Forecasting Techniques and Probabilistic Thinking
00:10:15
Speaker
That's not as good a story for journalists. So it wasn't crystals. If you do surveys of the way superforecasters think about the world, they tend not to do what you would call magical thinking. Some of us are religious. I'm not. But for the most part, the divine isn't an explanation in their forecasts. They don't use God to explain things. They don't use things that you might consider superstition.
00:10:39
Speaker
Maybe that seems obvious, but it's a very rational group. How is super forecasting done and what kinds of models are generated and brought to bear? As a group, we tend to be very numerate. That's one thing that correlates pretty well with forecasting ability.
00:10:56
Speaker
When I say they come from a lot of backgrounds, I mean there are doctors, pharmacists, engineers. I'm a political scientist. There are actually a fair number of political scientists, some people who are in finance or economics. But they all tend to be people who could make at least a simple spreadsheet model. We're not all statisticians, but we have at least an intuitive familiarity with statistical thinking, an intuitive concept of Bayesian updating.
00:11:22
Speaker
As far as what the approach is, I mean, we make a lot of simple models, often not very complicated models, I think, because often when you make a complicated model, you end up overfitting the data and drawing falsely precise conclusions, at least when we're talking about complex real world political science-y kind of situations. But I would say the best guide for predicting the future, and this probably sounds obvious, the best guide for what's going to happen, is what's happened in similar situations in the past.
00:11:50
Speaker
One of the key things you do if somebody asks you, will so-and-so win an election, is you would look back and say, well, what's happened in similar elections in the past? What's the base rate of the incumbent, for example, maybe from this party or that party, winning an election given this economy and so on? Now, it is often very hard to beat simple algorithms that try to do the same thing, but that's not a thing that you can just do by rote. It requires an element of judgment about
00:12:18
Speaker
what situations in the past count as similar to the situation you're trying to ask a question about. In some ways, a big part of the trick is to figure out what's relevant to the situation, trying to understand what past events are relevant. And that's something that's hard to teach, I think, because you could make a case for all sorts of things being relevant, and there's an intuitive feel that's hard to explain to someone else.
00:12:41
Speaker
The things that seem to be brought to bear here would be like these formal mathematical models. And then the other thing would be what I think comes from Daniel Kahneman and is borrowed by the rationalist community, this idea of system one and system two thinking. So system one is the intuitive, the emotional. We catch balls using system one. System one says that the sun will come out tomorrow. Well, hopefully system two does too.
00:13:06
Speaker
Yeah, system two does too. So I imagine some questions are just limited to sort of pen and paper, system one, system two thinking, and some are questions that are more suitable for mathematical modeling.
00:13:19
Speaker
Yeah. I mean, some questions are more suitable for mathematical modeling, for sure. I would say, though, the main system we use is system two. And this is, as you say, we catch balls with some kind of intuitive reflex that's sort of maybe not in our prefrontal cortex. If I were trying to calculate the trajectory of a ball and try to catch it, that wouldn't work very well.
00:13:38
Speaker
But I think most of what we're doing when we forecast is trying to calculate something. Now, often the models are really simple. It might be as simple as saying, this thing has happened seven times in the last 50 years. So let's start from the idea there's a 14% chance of that thing happening again. It's analytical. We don't necessarily just go with a gut and say this feels like a one in three chance.
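A sketch of the simple base-rate arithmetic he describes, with the counts treated as invented placeholders:

```python
# Starting a forecast from a base rate: count how often the event happened
# in past situations judged similar, and use that as the starting estimate.
# The counts here are invented placeholders.

similar_past_cases = 50  # e.g., past years or past elections deemed comparable
times_it_happened = 7

base_rate = times_it_happened / similar_past_cases
print(f"Starting estimate: {base_rate:.0%}")  # 14%; then adjust for what is different this time
```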
00:14:00
Speaker
Now that said, I think that it helps a lot, and this is a problem with applying the results of our work. It helps a lot to have a good intuitive feel for probability, like what one in three feels like, just a sense of how often that is. And superforecasters tend to be people who are able to distinguish between smaller gradations of probability. I think in general, people that don't think about this stuff very much have kind of three probabilities: definitely going to happen, might happen, and will never happen.
00:14:30
Speaker
And there's no finer grained distinction there, whereas I think superforecasters often feel like they can distinguish between one or two percent differences in probability, the difference between 50 percent and 52 percent. The sense of what that means, I think, is a big thing.
00:14:45
Speaker
If we're going to tell a policymaker there's a 52% chance of something happening, a big part of the problem is that policymakers have no idea what that means. They're like, well, will it happen or won't it? What do I do with that number? How is that different from 50%? It's a fair point. I think one of the things that I would most like to see in terms of having people make better predictions and having people use them better is just to understand how probabilities work.
00:15:13
Speaker
All right, so a few things I'm interested in here. The first is I'm interested in what you have to say about what it means to learn how probabilities work. If you were to explain to policymakers or other interested people who are not familiar with working with probabilities a ton, how can one get a better understanding of them, and what does that look like? I feel like that would be interesting and helpful.
00:15:37
Speaker
And then the other thing that I'm sort of interested in getting a better understanding of is most of what is going on here seems like a lot of system two thinking, but I also would suspect and guess that many of the top super forecasters have very excellent finely tuned system ones. Yeah. Curious if you have any thoughts about these two things. I think that's true. I mean, I don't know exactly what counts as system one in the cognitive psych sense, but I do think that there is a feel that you get. It's like practicing a jump shot or something.
00:16:07
Speaker
I'm sure Steph Curry, not that I'm Steph Curry in forecasting, but I'm sure Steph Curry, when he takes a shot, isn't thinking about it at the time. He's just practiced a lot. And by
Critique of Subject Matter Experts in Forecasting
00:16:15
Speaker
the same token, if you've done a lot of forecasting and thought about it and have a good feel for it, you may be able to look at something and think, oh, here's a reasonable forecast. Here's not a reasonable forecast. I had that sense recently when looking at 538 tracking COVID predictions from a bunch of subject matter experts, and they're obviously kind of doing terribly.
00:16:34
Speaker
And part of it is that some of the probabilities are just not plausible. And that's immediately obvious to me and I think to other forecasters who spend a lot of time thinking about it. So I do think that without even having to do a lot of calculations or a lot of analysis, often I have a sense of what's plausible, what's in the right range, just because of practice.
00:16:54
Speaker
When I'm watching a sporting event and I'm stressed about my team winning, for years before I started doing this, I would habitually calculate the probability of winning. It's a neurotic thing. It's like imposing some kind of control. I think I'm doing the same thing with COVID, right? I'm calculating probabilities all the time, making myself feel more in control. But that actually was pretty good practice for getting a sense of it. I don't really have the answer to how to teach that to other people
00:17:22
Speaker
except potentially the practice of trying to forecast and seeing what happens, and when you're right and when you're wrong. Good Judgment does have some training materials that improve forecasting for people, validated by research. They involve things like thinking about the base rate of things happening in the past and essentially going through sort of system two approaches. And I think that kind of thing can also really help people get a sense for it.
00:17:47
Speaker
But like anything else, it comes down to practice. You can get better or worse at it. Well, hopefully you get better. So a risk that is 2% likely is two times more likely than a 1% risk. How do those feel different to you than to me or a policymaker who doesn't work with probabilities a ton? Well, I don't entirely know. I don't entirely know what they feel like to someone else. I think I do a lot of one time in 50, that's what 2% is, and one time in 100, that's what 1% is.
00:18:16
Speaker
On the forecasting platform we use, we only work in integer probabilities. So if it goes below a half a percent chance, I'd round down to zero. And honestly, I think it's tricky to get accurate forecasting with low probability events for a bunch of reasons, or even to know if you're doing a good job, because you have to do so many of them. I think about fractions often and have a sense of what something happening two times in seven might feel like in a way.
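The "one time in N" framing he uses can be made mechanical; a trivial sketch:

```python
# Translating probabilities into rough "one time in N" frequencies,
# the framing described above (2% is one time in 50, 1% is one time in 100).

for p in [0.01, 0.02, 0.15, 0.52]:
    print(f"{p:.0%} is roughly one time in {round(1 / p)}")
```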
00:18:43
Speaker
So you've made this point here that superforecasters are often better at making predictions than subject matter experts. Can you unpack this a little bit more and explain how big the difference is? You recently just mentioned the COVID-19 virologists.
00:18:59
Speaker
virologists, infectious disease experts, I don't know all of them, but people whose expertise I really admire, who know the most about what's going on and to whom I would turn in trying to make a forecast about some of these questions. And it's not really fair because these are people often who have talked to 538 for 10 minutes and produced a forecast. They're very busy doing other things, although some of them are doing modeling and you would think that they would have thought about some of these probabilities in advance.
00:19:24
Speaker
But one thing that really stands out when you look at those is they'll give a five or 10% chance to something happening which to me is virtually impossible. And I don't think it's their better knowledge of virology that makes them think it's more likely. I think it's my having thought a lot about what five or 10% means. They think it's not very likely, so they assign it five or 10%, which sounds like a low number. That's my
Accuracy of Super Forecasting
00:19:44
Speaker
guess. I don't really know what they're doing. What's an example of that?
00:19:47
Speaker
Recently, there were questions about how many tests would be positive by a certain date. And they assigned a real chance, like a 5% or 10%, I don't remember exactly the numbers, but way higher than I thought it would be, for there being below a certain number of tests. And the problem with that was, it would have meant essentially that all of a sudden, the number of tests that were coming back positive every day would drop off a cliff, go from, I don't know how many positive tests there are a day, 20-some thousand in the US,
00:20:17
Speaker
all of a sudden that would drop to like two or three thousand. And this we're talking about forecasting like a week ahead. So really a short timeline. It just was never plausible to me that all of a sudden tests would stop turning positive. There's no indication that that's about to happen. There's no reason why that would suddenly shift. I mean, maybe I can always say maybe there's something that a virologist knows that I don't.
00:20:38
Speaker
But I have been reading what they're saying. So how could they think that it would go from 25,000 a day to 2,000 a day over the next six days? I'm going to assign that basically a 0% chance. Another thing that's really striking, and I think this is generally true, and it's true to some extent of superforecasters, we've had a little bit of an argument about it on our superforecasting platform: people are terrible at thinking about exponential growth. They really are.
00:21:03
Speaker
But they really under-predicted the number of cases and deaths, even again, like a week or two in advance, because it was orders of magnitude higher than the number at the beginning of the week. But a computer that had an algorithm to fit an exponential curve would have had no problem doing it. And basically, I think that's what the good forecasters did: we fit an exponential curve and said, I don't even need to know many of the details over the course of a week. My outside knowledge of the progression of disease and vaccines or whatever isn't going to make much difference.
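A minimal sketch of the simple curve fitting he describes: fit a straight line to the log of daily counts and project it forward. The case numbers below are invented, not real data.

```python
# Fit an exponential trend by fitting a line to log(counts), then project
# a week ahead -- the simple algorithm described above. Counts are invented.
import numpy as np

cases = np.array([800, 1100, 1450, 2000, 2600, 3500, 4700])  # last 7 days (made up)
days = np.arange(len(cases))

slope, intercept = np.polyfit(days, np.log(cases), 1)  # log-linear fit

day_next_week = len(cases) + 6
projected = np.exp(intercept + slope * day_next_week)
print(f"Implied daily growth: {np.exp(slope) - 1:.0%}")
print(f"Projected daily cases a week out: {projected:,.0f}")
```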
00:21:33
Speaker
And as I said, it's often hard to beat a simple algorithm, but the virologists and infectious disease experts weren't applying that simple algorithm. And it's fair to say, well, maybe some public health intervention will change the curve or something like that, but I think they were assigning way too high a probability to the exponential trends stopping. I just think it's a failure to imagine
00:21:55
Speaker
You know, maybe the Trump administration has motivated reasoning on this score. They kept saying it's fine, there aren't very many deaths yet. But it's easy to project the trajectory a little bit further into the future and say, wow, there are going to be. So I think that's actually been a major policy issue too: people can't believe the exponential growth. There's this tension between not trying to panic everyone in the country, or you're unsure if this is the kind of thing that's an exponential, or you just don't really intuit how exponentials work.
00:22:23
Speaker
For the longest time, our federal government was like, oh, it's just a person. There's just like one or two people. They're just going to get better and that will like go away or something. What's your perspective on that? Is that just trying to assuage the populace while they try to figure out what to do? Or do you think that they actually just don't understand how exponentials work?
Need for Reliable Information in Forecasting
00:22:40
Speaker
I'm not confident about my theory of mind of people in power.
00:22:44
Speaker
I think one element is this idea that we need to avoid panic, and I think that's probably, they believe in good faith, that's a thing that we need to do. I am not necessarily an expert on the role of panic in crises, but I think that that's overblown, personally.
00:23:01
Speaker
We have this image of how, you know, in the movies, if there's a disaster, all of a sudden, everyone's looting and killing each other and stuff. And we think that's what's going to happen. But actually, often in disasters, people really pull together. And if anything, have a stronger sense of community and help their neighbors rather than immediately go and try to steal their supplies.
00:23:17
Speaker
We did see some people fighting over toilet paper in news reports, and there are always people like that. But even this idea that people were hoarding toilet paper, I don't even think that's the explanation for why it was out of the stores. If you tell everyone in the country they need two to three weeks of toilet paper right now today, yeah, of course they're going to buy it off the shelves. That's actually just what they need to buy.
00:23:37
Speaker
I haven't seen a lot of panic. If I had been an advisor to the administration, I would have said something along the lines of, it's better to give people accurate information so we can face it squarely than to try to sugarcoat it.
00:23:54
Speaker
But I also think that there was a hope that if we pretended things weren't about to happen, maybe they would just go away. I think that was misguided. There seems to be some idea that you could reopen the economy and people would just die, but the economy would end up being fine. I don't think that would be worth it anyway. Even if you don't shut down, the economy is going to be disrupted by what's happening. So I think there are a bunch of different motivations for why governments weren't
00:24:22
Speaker
honest or weren't dealing squarely with this. It's hard to know what's dishonesty and what is just genuine confusion. So what organizations exist that are focused on super forecasting?
Platforms for Super Forecasting
00:24:33
Speaker
Where or what are the community hubs and prediction aggregation mechanisms for super forecasters?
00:24:40
Speaker
So originally in the IARPA forecasting tournament, there were a bunch of different competing teams and one of them was run by a group called Good Judgment. And that team ended up doing so well, they ended up basically taking over the later years of the tournament and it became the Good Judgment project. There was then a spinoff, Phil Tetlock and others who were involved with that spun off into something called Good Judgment Incorporated.
00:25:03
Speaker
That is the group that I work with and a lot of the super forecasters that were identified in that original tournament continue to work with Good Judgment. We do some public forecasting. They try to find private clients interested in our forecasts.
00:25:19
Speaker
It's really a side gig for me, and part of the reason I do it is that it's really interesting. It gives me an opportunity to think about things in a way, and I feel like I'm much better up on certain issues because I've thought about them as forecasting questions. So there's Good Judgment, Inc., and they also have something called the Good Judgment Open.
00:25:38
Speaker
They have an open platform where you can forecast the kind of questions we do. I should say that we have a forecasting platform. They come up with forecastable questions. But forecastable means that they have relatively clear resolution criteria, but also you would be interested in knowing the answer. It wouldn't be just some picky, trivial answer. And they'll have a set resolution date so you know that if you're forecasting a chance of something happening, it has to happen by a certain date. So it's all very well defined. And coming up with those questions is a little bit of its own skill.
00:26:07
Speaker
It's pretty hard to do. So Good Judgment will do that. And they put it on a platform where then as a group, we discuss the questions and give our probability estimates. We operate to some extent in teams. They found there's some evidence that teams of forecasters, at least good forecasters, can do a little bit better than people on their own. I find it very valuable because other forecasters do a lot of research and they critique my own ideas.
00:26:32
Speaker
There are concerns about groupthink, but I think that we're able to avoid those, and I can talk about why if you want. Then there's also this public platform called Good Judgment Open, where they use the same kind of questions and anyone can participate. And they've actually identified some new superforecasters who participated on this public platform, people who did exceptionally well, and then they've invited them to work with the company as well.
00:26:54
Speaker
There are others. I know a couple of super forecasters who are spinning off their own group. They made an app, I think it's called Maybe, where you can do your own forecasting and maybe come up with your own questions. And that's a neat app. There is Metaculus, which certainly tries to apply the same principles. And I know some super forecasters who forecast on Metaculus. I've looked at it a little bit, but I just haven't had time because forecasting takes a fair amount of time.
00:27:20
Speaker
And then there are always prediction markets and things like that. There are a number of other things I think that try to apply the same principles. I don't know enough about the space to know all the other platforms and markets that exist.
Forecasting in Policy and Decision Making
00:27:32
Speaker
So for some more information on the actual act of forecasting that will be put onto these websites, can you take us through something which you have forecasted recently that ended up being true and tell us how much time it took you to think about it and what your actual thinking was on it and how many variables and things you considered? Yeah, I mean, it varies widely and to some extent it varies widely on the basis of how many times I forecasted something similar.
00:28:01
Speaker
So sometimes we'll forecast the change in interest rates, the Fed moves. That's something that's obviously a lot of interest to people in finance. And at this point, I've looked at that kind of thing enough times that I have set ideas about what would make that likely or not likely to happen.
00:28:18
Speaker
But some questions are much harder. We've had questions about mortality in certain age groups in different districts in England. And I didn't know anything about that. And all sorts of things come into play. Is the flu season likely to be bad? What's the chance of flu season to be bad?
00:28:35
Speaker
Is there a general trend among people who are dying of complications from diabetes? Does poverty matter? How much would Brexit affect mortality chances? A lot of what I did was just look at past data and project trends; just by projecting trends, you can get a long way towards an accurate forecast in a lot of circumstances.
00:28:56
Speaker
When such a forecast is made and added to these websites, and the question for the thing which is being predicted resolves, what are the ways in which the websites aggregate these predictions? Are we at the stage of them often being put to use, or is the utility of these websites currently primarily honing the epistemic acuity of the forecasters?
00:29:20
Speaker
There are a couple of things. I hope that my own personal forecasts are potentially pretty accurate. But when we work together on a platform, we will essentially produce an aggregate, which is roughly speaking the median prediction. There are some proprietary elements to it. They extremize it a little bit, I think, because once you aggregate it, it kind of blurs things towards the middle.
00:29:43
Speaker
They maybe weight certain forecasts and more recent forecasts differently. I don't know the details of it, but you can improve accuracy not just by taking the median of our forecasts or the prediction market; by doing a little algorithmic tweaking, they found they can improve accuracy a little bit.
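Good Judgment's aggregation method is proprietary, but the general idea he sketches, take a central tendency of the individual forecasts and push it away from 50% to undo the blurring toward the middle, can be illustrated roughly like this (the forecasts and the extremizing exponent are invented):

```python
# Illustrative aggregation: median of individual forecasts, then "extremized"
# away from 50%. The exponent and forecasts are invented; this is not
# Good Judgment's actual (proprietary) formula.
from statistics import median

def extremize(p, a=2.0):
    # a > 1 pushes p away from 0.5; a = 1 leaves it unchanged
    return p**a / (p**a + (1 - p)**a)

individual_forecasts = [0.70, 0.65, 0.80, 0.72, 0.60]
m = median(individual_forecasts)
print(f"median = {m:.2f}, extremized = {extremize(m):.2f}")  # 0.70 -> roughly 0.84
```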
00:29:58
Speaker
That's sort of what happens with our output. And then as far as how people use it, I'm afraid not very well. There are people who are interested in Good Judgment's forecasts and who pay them to produce forecasts. But it's not clear to me what decision makers do with it or if they know what to do. And I think a big problem selling forecasting is that people don't know what to do with a 78% chance of this, or let's say a 2% chance of a pandemic in a given year. I'm just making that up. But somewhere in that ballpark,
00:30:28
Speaker
What does that mean about how you should prepare? I think that people don't know how to work with that. So it's not clear to me that our forecasts are necessarily affecting policy, although it's the kind of thing that gets written up in the news and who knows how much that affects people's opinions or they talk about it at Davos and maybe those people go back and they change what they're doing. Certain areas, I think people in finance know how to work with probabilities a little bit better, but they also have models that are fairly good at projecting certain types of things. So they're already doing a reasonable job, I think.
00:30:57
Speaker
I wish it were used better. If I were the advisor to a president, I would say you should create a predictive intelligence unit using superforecasters, maybe give them access to some classified information, but even using open source information, have them predict probabilities of certain kinds of things, and then develop a system for using that in your decision making.
Challenges in Long-term Super Forecasting
00:31:19
Speaker
But I think we're a fair ways away from that. I don't know of any interest in that in the current administration.
00:31:25
Speaker
One obvious leverage point for that would be if you really trusted this group of super forecasters, and the key point for that is just simply how accurate they are. So just generally, how accurate is super forecasting currently? If we took the top 100 super forecasters in the world, how accurate are they over history? We do keep score, right? But it depends a lot on the difficulty of the question that you're asking. If you ask me whether the sun will come up tomorrow, yeah, I'm very accurate.
00:31:55
Speaker
If you ask me to predict a random number generator between 1 and 100, I'm not very accurate. And it's often hard to know with a given question how hard it is to forecast. I have what's called a Brier score, essentially a mathematical way of correlating the forecast probabilities you give with the outcomes. A lower Brier score essentially is a better fit. So I can tell you what my Brier score was on the questions I forecasted in the last year.
00:32:22
Speaker
And I can tell you that it's better than a lot of other people's Brier scores, and that's the way you know I'm doing a good job. But it's hard to say how accurate that is in some absolute sense. It's like saying how good are NBA players at taking jump shots. It depends where they're shooting from.
00:32:37
Speaker
That said, I think, broadly speaking, we are the most accurate. So far, superforecasters have had a number of challengers, and, I mean, I'm proud of this, they've pretty much crushed all comers. They've tried to bring artificial intelligence into it. We're still, I think, as far as I know, the gold standard of forecasting, but we're not prophets by any means. Accuracy for us is saying there's a 15% chance of this thing in politics happening, and then when we do that over a bunch of things,
00:33:06
Speaker
Yeah, 15% of them end up happening. It is not saying this specific scenario will definitely come to pass. We're not prophets. Getting well-calibrated probabilities over a large number of forecasts is the best that we can do right now, and probably in the near future, for these complex political and social questions.
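Calibration in this sense, that things given a 15% chance happen roughly 15% of the time, can be checked over a body of forecasts along these lines (the forecast records below are invented):

```python
# Rough calibration check: bucket forecasts by the probability assigned and
# compare with how often those events actually happened. Records are invented.
from collections import defaultdict

records = [(0.15, 0), (0.15, 0), (0.15, 1), (0.15, 0), (0.15, 0),
           (0.60, 1), (0.60, 1), (0.60, 0),
           (0.90, 1), (0.90, 1), (0.90, 1), (0.90, 0)]  # (forecast, happened?)

buckets = defaultdict(list)
for prob, happened in records:
    buckets[prob].append(happened)

for prob, outcomes in sorted(buckets.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"forecast {prob:.0%}: happened {rate:.0%} of the time (n={len(outcomes)})")
```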
Evaluating Forecasting Difficulty and Forecaster Skill
00:33:27
Speaker
Would it be skillful to have some sort of standardized group of expert forecasters rank the difficulty of questions, which would then let you better evaluate and construct a Brier score for persons? It's an interesting question. I think I could probably tell you, and I'm sure other forecasters could tell you, which questions are relatively easier or harder to predict. Things where there's a clear trend and there's no good reason for it changing are relatively easy to predict.
00:33:54
Speaker
Things where small differences could make it tip into a lot of different end states are hard to predict, and I could sort of have a sense initially of what those would be. I don't know what the advantage would be of ranking questions like that and then trying to do some weighted adjustment. I mean, maybe you could, but the best way that I know of to really evaluate forecasting skill is to compare it with other forecasters. That sets kind of a baseline. What do other good forecasters come up with and what do average forecasters come up with, and can you beat prediction markets?
00:34:23
Speaker
I think that's the best way of evaluating relative forecasting ability, but I'm not sure; it's possible that some kind of weighting would be useful in some context. I hadn't really thought about it. All right. So you work both as a super forecaster, as we've been talking about, but you also have a position at the Global Catastrophic Risk Institute.
Global Catastrophic Risk Analysis and Super Forecasting
00:34:44
Speaker
Can you provide a little bit of explanation for how super forecasting and existential and global catastrophic risk analysis are complementary?
00:34:54
Speaker
What we produce at GCRI, a big part of our product is academic research, and there are a lot of differences. If I say there's a 10% chance of something happening on a forecasting platform, I have an argument for that. I can try to convince you that my rationale is good, but it's not the kind of argument that you would make in an academic paper. It wouldn't convince people it was 100% right.
00:35:19
Speaker
My warrant for saying that on the forecasting platform is I have a track record. I'm good at figuring out what the correct argument is or have been in the past, but producing an academic paper is a whole different thing. There's some of the same skills, but we're trying to produce a somewhat different output. What superforecasters say is an input in writing papers about catastrophic risk or existential risk. We'll use what superforecasters think as a piece of data.
00:35:45
Speaker
That said, superforecasters are validated at doing well at certain category of political, social, economic questions. And over a certain timeline, we know that we outperform others up to like maybe two years. We don't really know if we can do meaningful forecasting 10 years out. That hasn't been validated, and you can see why that would be difficult to do. You'd have to have a long experiment to even figure that out.
00:36:11
Speaker
And it's often hard to figure out what the right questions to ask about 2030 would be. I generally think that the same techniques we use would be useful for forecasting 10 years out, but we don't even know that. And so a lot of the things that I would look at in terms of global catastrophic risk would be things that might happen at some distant point in the future. What's the risk that there'll be a nuclear war in 2020, but also over the next 50 years? It's a somewhat different thing to do.
00:36:39
Speaker
They're complementary. They both involve some estimation of risk and they use some of the same techniques. But the longer term aspect, the fact that, as I said, one of the best ways superforecasters do well is that they use the past as a guide to the future. A good rule of thumb is the status quo is likely to be the same. There's a certain inertia. Things are likely to be similar in a lot of ways to the past. I don't know if that's necessarily very useful for predicting rare and unprecedented events.
00:37:09
Speaker
There's no precedent for an artificial intelligence catastrophe. So what's the base rate of that happening? It's never happened. I can use some of the same techniques, but it's a little bit of a different kind of thing. Two people are coming to my mind of late. One is Ray Kurzweil, who has made a lot of long-term technological predictions about things that have not happened in the past. And then I'm also curious to know if you've read The Precipice: Existential Risk and the Future of Humanity by Toby Ord.
00:37:38
Speaker
Toby makes specific predictions about the likelihood of existential and global catastrophic risks in that book. I'm curious if you have any perspective or opinion or anything to add on either of these two predictors or their predictions.
Predictions from Ray Kurzweil and Toby Ord
00:37:52
Speaker
Yeah, I've read some good papers by Toby Ord. I haven't had a chance to read the book yet, so I can't really comment on that. I really appreciate Ray Kurzweil, and one of the things he does that I like is that he holds himself accountable. He's looked back and said, how accurate are our predictions? Did this come true or did that not come true? I think that is a basic hygiene point of forecasting. You have to hold yourself accountable. You can't just go back and say, look, I was right and not rationalize whatever somewhat off forecast you've made.
00:38:18
Speaker
That said, when I read Kurzweil, I'm skeptical. Maybe that's my own inability to handle exponential change. When I look at his predictions for certain years, I think he does a different set of predictions for seven-year periods, I thought, well, he's actually seven years ahead. That's pretty good, actually, if you're predicting what things are going to be like in 2020, but you just think it's going to be 2013; maybe he gets some credit for that. But I think that he is too aggressive and optimistic about the pace of change.
00:38:44
Speaker
Obviously, exponential change can happen quickly. But I also think another rule of thumb is that things take a long time to go through beta. There's the planning fallacy. People always think that projects are going to take less time than they actually do. And even when you try to compensate for the planning fallacy and double the amount of time, it still takes twice as much time as you come up with. So I tend to think Kurzweil sees things happening sooner than they will. He's a little bit of a techno optimist, obviously. But I haven't gone back and looked at all of his self-evaluation. He scores himself pretty well.
00:39:14
Speaker
So we've spoken a bit about the different websites. What are they technically called? What is the difference between a prediction market and, I think Metaculus calls itself, a massive online prediction solicitation and aggregation engine, which is not a prediction market? What are the differences here, and how's the language around these platforms used?
00:39:35
Speaker
Yes, I don't necessarily know all the different distinctions and categories someone would make. I think a prediction market particularly is where you have some set of funds, some kind of real or fantasy money.
Prediction Markets: Function and Limitations
00:39:47
Speaker
We use one market in the Good Judgment Project.
00:39:50
Speaker
Our money was called Inkles and we could spend that money and essentially they traded probabilities like you would trade a share. So if there was a 30% chance of something happening on the market, that's like a price of 30 cents and you would buy that for 30 cents. And then if people's opinions about how likely that was changed and a lot of people bought it.
00:40:09
Speaker
then it would get bid up to a 50% chance of happening, and that would be worth 50 cents. So if I correctly realized that something that the market says has a 30% chance of happening is actually more likely, I would buy shares of that. And then eventually either other people would realize it too, or it would happen. I should say that when something happened, then you'd get a dollar; suddenly it's a hundred percent chance of happening.
00:40:33
Speaker
So if you recognized that something had a higher percent chance of happening than the market was valuing it at, you could buy a share of that and then you would make money. That basically functions like a stock market, except literally what you're trading is directly the probability that the question will resolve yes or no.
00:40:49
Speaker
The stock market's supposed to be really efficient, and I think in some ways it is. I think prediction markets are somewhat useful. A big problem with prediction markets is that they're not liquid enough, which is to say that in a stock market, there's so much money going around, and people are really just on it to make money. It's hard to manipulate the prices. There's plenty of liquidity.
00:41:10
Speaker
On the prediction markets that I've been a part of, like the one in the Good Judgment Project, for example, sometimes there'd be something that had like a 95% chance of happening on the prediction market when in fact there'd be like a 99.9% chance of it happening. But I wouldn't buy that share even though I knew it was undervalued, because the return on investment wasn't as high as it was on some other questions. So it would languish at this inaccurate probability because there just wasn't enough money to chase all the good investments.
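To make the liquidity point concrete, here is a sketch with invented prices and beliefs of why an underpriced near-certainty can be left alone: the expected return on the money tied up there is lower than on other questions.

```python
# Expected return on a contract that pays $1 if the event happens.
# Compare a 95-cent contract you believe is 99.9% likely with a 30-cent
# contract you believe is 45% likely. Prices and beliefs are invented.

def expected_return(price, believed_prob):
    return (believed_prob * 1.0 - price) / price  # expected profit per dollar spent

print(f"95c contract, believed 99.9% likely: {expected_return(0.95, 0.999):+.0%}")
print(f"30c contract, believed 45% likely:   {expected_return(0.30, 0.45):+.0%}")
# ~+5% versus +50%: limited funds chase the second, so the first stays mispriced.
```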
00:41:38
Speaker
So that's one problem you can have in a prediction market. Another problem you can have, and I've seen it happen with PredictIt, I think, and it used to happen on the Iowa Electronic Markets: people would try to manipulate the market for some advertising reason. Say you were working on a candidate's campaign and you wanted to make it look like they were a serious contender. It was a cheap investment: you put a lot of money into the prediction market and you boost their apparent chances, but that's not really boosting their chances. That's just market manipulation. You can't really do that with the whole stock market, but prediction markets aren't well capitalized, so you can do that.
00:42:09
Speaker
And then I really enjoy PredictIt. PredictIt is one of the prediction markets that exist for political questions. They have some dispensation so that it doesn't count as gambling in the U.S.
00:42:17
Speaker
I think it's for research purposes; there's some research involved with PredictIt. But they have a lot of fees, and they use their fees to pay the people who run the market. It's expensive, and the fees mean that the prices are very sticky and it's actually pretty hard to make money. Probabilities have to be really out of whack before you can make enough money to cover your fees. So things like that make these markets not as accurate.
00:42:41
Speaker
I also think that although we've all heard about the wisdom of the crowds, and broadly speaking, crowds might do better than just a random person, they can also do a lot of herding behavior that good forecasters wouldn't do, and sometimes the crowds overreact to things, and I don't always think the probabilities that prediction markets come up with are very good.
00:43:01
Speaker
All right, moving along here a bit, continuing the relationship of super forecasting with global catastrophic and existential risk.
Understanding Low-Probability Risks
00:43:09
Speaker
How narrowly do you think that we can reduce the error range for super forecasts on low probability events like global catastrophic risks and existential risks?
00:43:19
Speaker
If a group of forecasters settled on a point estimate of 2% chance for some kind of global catastrophic or existential risk, but with an error range of like 1%, that dramatically changes how useful the prediction is because of its major effects on risk. How accurate do you think we can get and how much do you think we can squish the probability range? That's a really hard question.
00:43:44
Speaker
When we produce forecasts, I don't think there's necessarily clear error bars built in. One thing that good judgment will do is it will show where forecasts all agree the probability is 2%. And then it'll show if there's actually a wide variation, something at 0%, something at 4%, or something like that. And that maybe tells you something. And if we had a lot of very similar forecasts, maybe you could look back and say, we tend to have an error of this much.
00:44:10
Speaker
But for the kinds of questions we look at with catastrophic risk, it might really be hard to have a large enough N. Hopefully it's hard to have a large N where you could really compute an error range. If our aggregate spits out a probability of 2%, it's difficult to know in advance, for a somewhat unique question, how far off we could be.
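One crude indicator of how far off an aggregate might be is the spread of the individual forecasts behind it, the kind of variation he says Good Judgment displays; a sketch with invented numbers:

```python
# The aggregate alone hides disagreement; the spread of individual forecasts
# gives a crude sense of it. All numbers are invented.
import statistics

individual = [0.00, 0.01, 0.02, 0.02, 0.02, 0.03, 0.04]  # everyone near 2%
print(f"aggregate ~{statistics.median(individual):.0%}, "
      f"range {min(individual):.0%} to {max(individual):.0%}, "
      f"stdev {statistics.stdev(individual):.1%}")
```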
00:44:32
Speaker
I don't spend a lot of time thinking about frequentist or Bayesian interpretations of probability, or counterfactuals, or whatever. But at some point, if I say there's a 2% probability of something and then it happens, it's hard to know what my probability meant. Maybe we live in a deterministic universe and that was 100% going to happen, and I simply failed to see the signs of it. I think that to some extent, what kind of probabilities you assign things depends on the amount of information you get.
00:44:57
Speaker
Often we might say that was a reasonable probability to assign to something because we couldn't get much better information. Given the information we had, that was our best estimate of the probability. But it might always be possible to know with more confidence if we got better information. So I guess one thing I would say is if you want to reduce the error on our forecasts, it would help to have better information about the world. And that's some extent where what I do with GCRI comes in.
00:45:25
Speaker
We're trying to figure out how to produce better estimates, and that requires research. It requires thinking about these problems in a systematic way to try to decompose them into different parts and figure out what we can look at the past and use to inform our probabilities. You can always get better information and produce more accurate probabilities, I think. The best thing to do would be to think about these issues more carefully.
00:45:53
Speaker
Obviously it's a field, catastrophic risk is something that people study, but it's not the most mainstream field. There's a lot of research that needs to be done. There's a lot of low hanging fruit, work that could easily be done, applying research done in other fields to catastrophic risk issues, but there just aren't enough researchers and there isn't enough funding to do all the work that we should do. So my answer would be we need to do better research. We need to study these questions more closely. That's how we get to better probability estimates.
00:46:21
Speaker
So if we have something like a global catastrophic or existential risk, and say a forecaster says that there's a less than 1% chance that that thing is likely to occur, and if this less than 1% likely thing happens in the world, how does that update our thinking about what the actual likelihood of that risk was?
Impact of Rare Events on Risk Assessment
00:46:40
Speaker
Given this more meta point that you glossed over about how if the universe is deterministic, then the probability of that thing was actually more like 100%. And the information existed somewhere. We just didn't have access to that information or something. Can you add a little bit of commentary here about what these risks mean? I guess I don't think it's that important when forecasting
00:47:06
Speaker
if I have a strong opinion about whether or not we lived in a single deterministic universe where outcomes are in some sense, in the future, all sort of baked in. And if only we could know everything, then we would know with 100% chance everything was going to happen.
00:47:21
Speaker
or whether there's some fundamental randomness, or maybe we live in a multiverse where all these different outcomes are happening. You could say that in 30% of the universes in this multiverse, this outcome comes true. I don't think that really matters for the most part. I do think, as a practical question, we make forecasts on the basis of the best information we have. That's all you can do. But there are sometimes you look back and say, well, I missed this. I should have seen this thing.
00:47:50
Speaker
I didn't think that Donald Trump would win the 2016 election. That's literally my worst Brier score ever. I'm not alone in that, and I comfort myself by saying that genuinely small differences made a huge impact. But there are other forecasters who saw it better than I did. Nate Silver didn't think that Trump was a lock, but he thought it was more likely, and he thought it was more likely for the right reasons: that you could get this correlated polling error in a certain set of states that would hand Trump the Electoral College.
00:48:18
Speaker
So in retrospect, I think in that case, I should have seen something like what Nate Silver did. Now, I don't think in practice it's possible to know enough about an election to know in advance who's going to win. I think we still have to use the tools that we have, which are things like polling. In complex situations, there's always stuff that I missed when I make a mistake, and I can look back and say, I should have done a better job figuring that stuff out.
00:48:45
Speaker
I do think, though, the kinds of questions we forecast, there's a certain irreducible, I don't want to say randomness, because I'm not taking a position on whether the universe is deterministic, but irreducible uncertainty about what we're realistically able to know. And we have to base our forecasts on the information that's possible to get. I don't think that metaphysical interpretation is that important to figuring out these questions.
00:49:09
Speaker
Maybe it comes up a little bit more with unprecedented one-off events. Even then, I think you're still trying to use the same information to estimate probabilities. Yeah, that makes sense. There's only the set of information that you have access to. Yeah, something actually occurs to me. One of the things that superforecasters are proud of is that we beat these intelligence analysts that had access to classified information.
00:49:33
Speaker
And I think that if we had access to more information, I mean, we're doing our research on Google, right? Or maybe occasionally we'll write a government official or file a FOIA request or something. But we're using open source intelligence, and I think it would probably help if we had access to more information. That would inform our forecasts. But sometimes more information actually hurts you. People have talked about a classified information bias: if you have secret information that other people don't have, you're likely to think that it is more valuable and useful
00:50:02
Speaker
than it actually is, and you overweight the classified information. But if you have that secret information, I don't know if it's an ego thing. You want to have a different forecast that other people don't have access to. It makes you special. You have to be a little bit careful. More information isn't always better. Sometimes the easy-to-find information is actually really dispositive and is enough and
00:50:27
Speaker
If you search for more information, you can find stuff that is irrelevant to your forecast but think that it is relevant. So if there's some sort of risk and the risk occurs, how does one update, after the fact, on what the probability actually was?
Interpreting Unique Event Probabilities
00:50:42
Speaker
Depends a little bit on the context. If you want to evaluate my prediction, if I say that I thought there was a 30% chance the original Brexit vote would be to leave the EU.
00:50:52
Speaker
That actually was more accurate than some other people's predictions, but I didn't think it was likely. Now, in hindsight, should I have said 100%? Somebody might argue that I should have, that if you'd really been paying attention, you would have known with 100% certainty. But how do we know it wasn't 5% and we live in a rare world? We don't. We don't. You basically can infer almost nothing from an N of 1.
00:51:13
Speaker
Like if I say there's a 1% chance of something happening and it happens, you can be suspicious that I don't know what I'm talking about, even from that N of 1. But there's also a chance that there was a 1% chance that it happened and that was the one time in 100. To some extent, that could be my defense of my prediction that Hillary was going to win. I should talk about my failures. The night before, I thought there was a 97% chance that Hillary would win the election.
00:51:38
Speaker
And that's terrible. And I think that was a bad forecast in hindsight. But I will say that typically when I've said there's a 97% chance of something happening, it has happened. I've made some 30-odd predictions that things were 90% or more likely, and that's the only one that's been wrong. So maybe I'm actually well calibrated. Maybe that was the 3% thing that happened. You can only really judge over a body of predictions.
00:52:06
Speaker
And if somebody is always saying there's a 1% chance of things happening and they always happen, then that's not a good forecaster. But that's a little bit of a problem when you're looking at really rare, unprecedented events. It's hard to know how well someone does at that, because you don't have an N of, hopefully, more than one. It is difficult to assess those things.
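To make the idea of judging over a body of predictions concrete, here is a minimal sketch in Python (the forecasts and outcomes are invented for illustration, not anyone's actual track record) of how you might compute a Brier score and check calibration by bucketing forecasts and comparing the stated probability with the observed frequency.

```python
# Minimal calibration check: do the things I call "90% likely" happen about 90% of the time?
# The forecasts and outcomes below are made up purely for illustration.
from collections import defaultdict

forecasts = [0.97, 0.90, 0.95, 0.30, 0.70, 0.10, 0.85, 0.60]  # stated probabilities
outcomes  = [0,    1,    1,    0,    1,    0,    1,    1]      # 1 = happened, 0 = didn't

# Brier score: mean squared error between probability and outcome (lower is better).
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration: bin forecasts to the nearest 10% and compare claimed vs. observed frequency.
bins = defaultdict(list)
for p, o in zip(forecasts, outcomes):
    bins[round(p, 1)].append(o)

for p in sorted(bins):
    hits = bins[p]
    print(f"said ~{p:.0%}: happened {sum(hits)}/{len(hits)} times")
```

With only a handful of forecasts in each bin the comparison is extremely noisy, which is exactly the problem described above for rare, unprecedented events.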
00:52:24
Speaker
We're now in the middle of a pandemic, and I think that the fact that this pandemic happened maybe should update our beliefs about how likely pandemics will be in the future. There was the Spanish flu and the Asian flu and this. So now we have a little bit more information about the base rate at which these things happen.
00:52:44
Speaker
It's a little bit difficult, because 1918 is very different from 2020. The background rate of risk may be very different from what it was in 1918. So you want to try to take those factors into account, but each event does give us some information that we can use for estimating the risk in the future. You can do other things, too. A lot of what we do as good forecasters is inductive,
Expected Value in Decision Making
00:53:03
Speaker
right? But you can use deductive reasoning. You can, for example, with rare risks, decompose them into the steps that would have to happen for them to happen.
00:53:11
Speaker
what systems have to fail for a nuclear war to start, or what are the steps along the way to a potential artificial intelligence catastrophe. And I might be able to estimate the probability of some of those steps more accurately than I can estimate the whole thing. So that gives us some analytic methods to estimate probabilities, even without a real base rate for the thing itself happening.
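As a rough sketch of that decomposition idea (the steps and numbers below are invented placeholders, not estimates from GCRI or anyone else), you can multiply your estimates of each conditional step together to get an overall probability, and then see how sensitive the result is to any single step.

```python
# Hypothetical decomposition of a rare risk into conditional steps.
# Step names and probabilities are purely illustrative.
import math

steps = {
    "initiating failure occurs in a given year": 0.02,
    "safeguards fail to catch it":               0.10,
    "failure escalates into a catastrophe":      0.05,
}

p_total = math.prod(steps.values())
print(f"Overall probability per year: {p_total:.2e}")

# Sensitivity: how the total changes if one step's estimate doubles.
for name, p in steps.items():
    print(f"If '{name}' doubles: {p_total / p * min(2 * p, 1.0):.2e}")
```

The decomposition only helps if the steps really are the ones that matter and the conditional estimates are defensible, but it lets you apply base rates to pieces of a problem that has no base rate as a whole.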
00:53:33
Speaker
So, related to actual policy work and doing things in the world, the skillful thing here seems to be to use these probabilities to do expected value calculations, to try to estimate how many resources should be put into mitigating certain kinds of risks. The probability of the thing happening requires one kind of forecasting, and then the value that would be lost requires another kind of forecasting.
00:54:02
Speaker
What are your perspectives or opinions on super forecasting and expected value calculations and their use in decision making and hopefully someday more substantially in government decision making around risk? We were talking earlier about the inability of policymakers to understand probabilities. I think one issue is that a lot of times when people make decisions, they want to just say, what's going to happen? I'm going to plan for the single thing that's going to happen. But as a forecaster, I don't know what's going to happen.
00:54:30
Speaker
I might, if I'm doing a good job, know there's a certain percent chance that this will happen and a certain percent chance that that will happen. And in general, I think that policymakers need to make decisions over sort of the space of possible outcomes with planning for contingencies. And I think that is a more complicated exercise than a lot of policymakers want to do. I think it does happen, but it requires being able to hold in your mind all these contingencies and plan for them simultaneously.
00:54:58
Speaker
And I think that with expected value calculations, to some extent, that's what you have to do. That gets very complicated very quickly. When we forecast questions, we might forecast some discrete fact about the world, like how many COVID deaths there will be by a certain date. And it's neat that I'm good at that. But there's a lot that doesn't tell you about the state of the world at that time. There's a lot of information that would be valuable in making decisions.
00:55:23
Speaker
I don't want to say infinite, because that may be technically wrong, but there is an essentially uncountable number of things you might want to know, and you might not even know what the relevant questions are to ask about a certain space. So it's always going to be somewhat difficult to do an expected value calculation, because you can't possibly forecast all the things that might determine the value of something.
00:55:45
Speaker
I mean, this is a little bit of a philosophical critique of consequentialist kind of analyses of things too. Like if you ask if something is good or bad, it may have an endless chain of consequences rippling throughout future history. And maybe it's really a disaster now, but maybe it means that future Hitler isn't born. How do you evaluate that? That might seem like a silly trivial point.
00:56:07
Speaker
But the fact is, it may be really difficult to know enough about the consequences of your action to do an expected value calculation. So your expected value calculation may have to be an approximation in a certain sense: given the broad things we know, these are the things that are likely to happen. I still think expected value calculations are good. I just think there's a lot of uncertainty in them, and to some extent it's probably irreducible.
00:56:31
Speaker
I think it's always better to think about things clearly if you can. It's not the only approach. You have to get buy-in from people and that makes a difference. But the more you can do accurate analysis about things, I think the better your decisions are likely to be.
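As a minimal illustration of the expected-value reasoning being described (all the scenarios, probabilities, and losses below are made up), the calculation just weights the loss under each contingency by its probability, and the answer inherits every bit of uncertainty in those inputs.

```python
# Toy expected-loss comparison of preparing vs. not preparing, across contingencies.
# All scenarios, probabilities, and losses are invented for illustration.

scenarios = {
    # name: (probability, loss without preparation, loss with preparation)
    "no outbreak":       (0.90,    0,   10),  # preparation spending looks "wasted"
    "moderate outbreak": (0.08,  200,   50),
    "severe outbreak":   (0.02, 5000,  800),
}

def expected_loss(prepared: bool) -> float:
    return sum(p * (with_prep if prepared else without)
               for p, without, with_prep in scenarios.values())

print(f"Expected loss without preparation: {expected_loss(False):.1f}")
print(f"Expected loss with preparation:    {expected_loss(True):.1f}")
```

In 90% of these toy worlds the preparation looks like wasted money, yet it has the lower expected loss; that gap between the most likely outcome and the expected value is part of why such calculations can be a hard sell.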
00:56:46
Speaker
How much faith or confidence do you have that the benefits of super forecasting and this kind of thought will increasingly be applied to critical government or non-governmental decision making processes around risk?
Forecasting in Government Decision Making: A Pessimistic View
00:57:02
Speaker
Not as much as I'd like. I think now that we know that people can do a better or worse job of predicting the future, we can use that information and it will eventually begin to be integrated into our governance. I think that that will help.
00:57:16
Speaker
But in general, you know, my background's in political science, and political science is, I want to say, kind of discouraging. You learn that even under the best circumstances, outcomes of political struggles over decisions are not optimal. And you could imagine some kind of technocratic decision-making system, but even that ends up having its problems, where the technocrats end up just lining their own pockets without even realizing they're doing it or something.
00:57:41
Speaker
So I'm a little bit skeptical about it. And right now, with what we're seeing with the pandemic, I think we systematically under-prepare for certain kinds of things. There are reasons why it doesn't help leaders very much to prepare for things that may never happen. With something like a public health crisis, the deliverable is for nothing to happen. And if you succeed, it looks like all your money was wasted, when in fact you've actually prevented anything from happening, and that's great. The problem is that that creates an under-incentive for leaders.
00:58:11
Speaker
They don't get credit for preventing the pandemic that no one even knew could have happened. They don't necessarily win the next election, and business leaders may not improve their quarterly profits much by preparing for rare risks. And for other reasons, too, I think we probably have a hard time believing, cognitively, that certain kinds of things that seem crazy, like this, could happen.
00:58:33
Speaker
I'm somewhat skeptical about that. Now, I think in this case we had institutions that did prepare for this. But for whatever reason, a lot of governments failed to do what was necessary, failed to respond quickly enough, or minimized what was happening. Some actors were worse than others, right? But this isn't a problem that's just about the US government. This is a problem in Italy and China too.
00:58:57
Speaker
It's disheartening, because COVID-19 is pretty much exactly one of the major scenarios that infectious disease experts have been warning about: a novel coronavirus that jumps from animals to humans, that spreads through some kind of respiratory pathway, that's highly infectious, that spreads asymptomatically. This is something that people worried about and knew about. And in a sense, it was probably only a matter of time before this was going to happen, even if there was only a small risk in any given year.
00:59:26
Speaker
And yet we weren't ready for it. We didn't take the steps. We lost time that could have been spent saving lives. That's really disheartening.
00:59:35
Speaker
I would like to see us learn a lesson from this. And I think to some extent, once this is all over, whenever that is, we will probably create some institutional structures. But then we have to maintain them. We tend to forget a generation later about these kinds of things. We need to create governance systems that have more incentive to prepare for rare risks. It's not the only thing we should be doing necessarily, but we are underprepared. That's my view.
Pandemic Preparedness and Climate Change Parallels
01:00:02
Speaker
Yeah, and I mean, the sample size of historic pandemics is quite good, right? Yeah, it's not like we were invaded by aliens. Something like this happens in just about every person's lifetime. It's historically not that rare, and this is a really bad one, but the Spanish flu and the Asian flu were also pretty bad. We should have known this was coming.
01:00:23
Speaker
What I'm also reminded of here, with some of these biases you're talking about: we have climate change on the other hand, which is destabilizing and arguably a global catastrophic risk, depending on your definition. And among people who are skeptical of climate change, there seems to be a lack of trust in science, and a reluctance to invest in expensive technologies or anything that seems wasteful.
01:00:50
Speaker
I'm just reflecting here on all of the biases that have fed into our inability to prepare for COVID. Well, I don't think the distrust of science is just a thing that's out there. I mean, maybe to some extent it is, but it's also a deliberate strategy. People with an interest in continuing, for example, the fossil fuel economy have deliberately tried to cloud the issue, to create distrust in science, to create phony studies that make it seem that climate change isn't real.
01:01:19
Speaker
We've thought a little bit about this at GCRI, about how this might happen with artificial intelligence. You can imagine that somebody with a financial interest might try to discredit the risks and make it seem safer than it is. Maybe they even believe that to some extent; nobody really wants to believe that the thing that's getting them a lot of money is actually evil.
01:01:38
Speaker
So I think distrust in science really isn't an accident. It's a deliberate strategy and it's difficult to know how to combat it. There are strategies you can take, but it's a struggle, right? There are people who have an interest in keeping scientific results quiet. Yeah. Do you have any thoughts then about how we could increase the uptake of using forecasting methodologies for all manner of decision making? It seems like generally you're pessimistic about it right now.
01:02:06
Speaker
Yeah, I am a little pessimistic about it. I mean, one thing is that I think that we've tried to get people interested in our forecasts, and a lot of people just don't know what to do with them. Now, one thing I think that's interesting is that often people, they're not interested in my saying there's a 78% chance of something happening.
01:02:24
Speaker
What they want to know is how I got there. What are my arguments? That's not unreasonable. I really like thinking in terms of probabilities, but I think it often helps people to understand what the mechanism is, because it tells them something about the world that might help them make a decision. So I think one thing that maybe can be done is not to treat it as a black-box probability, but to have some kind of algorithmic transparency about our thinking, because that might be more useful for making decisions than just a number.
01:02:54
Speaker
So is there anything else here that you want to add about COVID-19 in particular?
Future of COVID-19: Forecasts and Reflections
01:03:00
Speaker
General information or intuitions that you have about how things will go, what the next year will look like. There is tension in the federal government about reopening. There's an eagerness to do that, to restart the economy. The US federal government and the state governments seem
01:03:16
Speaker
totally unequipped to do the kind of testing and contact tracing that is being done in successful areas like South Korea. Sometime in the short to medium term we'll reopen, and there might be a second wave, and it's going to take a year or so for a vaccine. What are your intuitions and feelings or forecasts about what the next year will look like?
01:03:42
Speaker
Again, with the caveat that I'm not a virologist and I'm not an expert in vaccine development and things like that, I have thought about this a lot. I think there was a fantasy, and still is a fantasy, that we're going to have what they call a V-shaped recovery: everything crashed really quickly, everyone started filing for unemployment as all the businesses shut down, very different from other types of financial crises, because the economics of this virus are different.
01:04:06
Speaker
But there was this fantasy that we would sort of put everything on pause, put the economy into some cryogenic freeze, and somehow keep people able to pay their bills for a certain amount of time. And then after a few months, we'd get some kind of therapy or vaccine, or it would die down and we'd suppress the disease somehow. And then we would just give it a jolt of adrenaline and we'd be back, and everyone would be back in their old jobs and things would go back to normal. I really don't think that is what's going to happen.
01:04:34
Speaker
I think it is almost thermodynamically harder to put things back together than it is to break them. There are things about the U.S. economy in particular, the fact that in order to keep getting paid, you actually need to lose your job and go on unemployment in many cases. It's not seamless. It's hard to even get through on the phone lines or to get the funding.
01:04:56
Speaker
I think that even after a few months, the US economy is going to look like a town that's been hit by a hurricane and we're going to have to rebuild a lot of things. And maybe unemployment will go down faster than it did in previous recessions where it was more about a bubble popping or something.
01:05:11
Speaker
But I just don't think that we go back to normal. I also just don't think we go back to normal in a broader sense, this idea that we're going to have some kind of cure. Again, I'm not a virologist, but I don't think we typically have a therapy that cures viruses the way, you know, antibiotics might be super efficacious against bacteria. Typically viral diseases, I think, are things we have to try to mitigate. And some cocktail may improve treatments and we may figure out better things to do with ventilators.
01:05:38
Speaker
Well, you may get the fatality rate down, but it's still going to be pretty bad. And then there's this idea, maybe we'll have a vaccine. I've heard people who know more than I do say maybe it's possible to get a vaccine by November.
01:05:49
Speaker
But the problem is, until you can simulate with a supercomputer what happens in the human body, you can't really speed up biological trials. You have to culture things in people, and that takes time. You might say, well, let's not do all the trials, this is an emergency. But the fact is, if you don't demonstrate that a vaccine is safe and efficacious,
01:06:10
Speaker
you could end up giving something to people that has serious adverse effects or even makes you more susceptible to the disease. That was probably the problem with one of the SARS vaccines they tried to come up with originally: it made people more susceptible. So you don't want to hand out millions and millions of doses of something that's going to actually hurt people. And that's the danger if you skip these clinical trials.
01:06:30
Speaker
So it's really hard to imagine a vaccine in the near future. I don't want to sell short human ingenuity because we're really adaptable, smart creatures and we're throwing all our resources at this. But there's a chance that there's really no great vaccine for this virus.
01:06:48
Speaker
We haven't had great luck with finding vaccines for coronaviruses. This virus seems to do weird things to the human immune system, and maybe there's evidence that immunity doesn't stick around that long. It's possible that we come up with a vaccine that only provides partial immunity and doesn't last that long. And I think there's a good chance that essentially we have to keep social distancing well into 2021, and that this could be a disease that remains dangerous and that we have to keep fighting for years, potentially.
01:07:14
Speaker
I think that we're going to open up, and it is important to open up as soon as we can, because what's happening to the economy will literally kill people and cause famines. But on the other hand, we're going to get outbreaks that come back up again. You know, it's going to be like fanning coals if we open up too quickly, and in some places we're not going to get it right.
01:07:32
Speaker
And that doesn't save anyone's life either; if the virus starts up again, it disrupts the economy again. So I think this is going to be a thing we are struggling to find a balance on, to mitigate, and we're not going to go back to December 2019 for a while. Not this year, maybe literally not for years. And I think that, you know, humans have an amazing capacity to forget things and go back to normal life, but I think that we're going to see permanent changes. I don't know exactly what they are, but I think we're going to see permanent changes in the way we live.
01:08:00
Speaker
And I don't know if I'm ever shaking anyone's hand again. We'll see about that. A whole generation of people is going to be much better at washing their hands. I've already gotten a lot better at washing my hands, watching tutorials. I was terrible at it. I had no idea how bad I was. Yeah, same. I hope people who've shaken my hand in the past aren't listening. So the things that will stop this are sufficient herd immunity, to some extent, or a vaccine that is efficacious. Those seem like the "okay, it's about time to go back to normal" points, right? Yeah.
01:08:29
Speaker
And a vaccine is not a given thing, given the class of coronavirus diseases and how they behave. Yeah. Eventually, and this is where I really feel like I'm not a virologist, but eventually diseases evolve and we co-evolve with them. Whatever the Spanish flu was, it didn't continue to kill as many people years down the line. I think that's because people did develop immunity, but also viruses don't get any evolutionary advantage from killing their hosts.
01:08:54
Speaker
They want to use us to reproduce. Well, they don't want anything, but that advantages them. If they kill us and make us use mitigation strategies, that hurts their ability to reproduce. So in the long run, and I don't know how long that run is, but eventually we co-evolve with it and it becomes endemic instead of epidemic. And it's presumably not as lethal, but I think that is something that we could be fighting for a while.
01:09:18
Speaker
There's a chance of additional disasters happening on top of it. We could get another disease popping out of some animal population while our immune systems are weak, or something like that. So we should probably be rethinking the way we interact with caves full of bats and live pangolins. All right. We just need to be prepared for the long haul here.
01:09:37
Speaker
I think so. I'm not sure that most people understand that. I don't think they do. I mean, I guess I don't have my finger on the pulse, and I'm not interacting with people anymore, but I don't think people want to understand it. It's hard. I had plans. I did not intend to be staying in my apartment. Having your health, and the health of others, is more important, but it's hard to face that we may be dealing with a very different reality.
01:10:00
Speaker
This thing of opening up in Georgia is just completely insane to me. Their cases have been slowing, but if the outbreak is shrinking, it seems to be only by a little bit. To me, when they talk about opening up, it sounds like they're saying, well, we reduced the extent of this forest fire by 15%, so we can stop fighting it now. Well, it's just going to keep growing. You have to actually stamp it out, or get really close to it, before you can stop fighting it. I think people want to stop fighting the disease sooner than we should because it sucks. I don't want to be doing this.
01:10:30
Speaker
Yeah, it's a new sad fact, and there's a lot of suffering going on right now. Yeah, I feel really lucky to be in a place where there aren't a lot of cases, but I worry about family members in other places. And I can't imagine what it's like, places where it's been. I mean, in Hawaii, people in the hospitality industry and tourism industry have all lost their jobs all at once, and they still have to pay our super expensive rent.
01:10:52
Speaker
Maybe that'll be waived and they won't be evicted, but that doesn't mean they can necessarily get medications and feed their families. All of this is super challenging for a lot of people. Never mind that other people are in the position where they're lucky to have jobs, but they're maybe risking getting an infection by going to work. So they have to make this horrible choice.
01:11:11
Speaker
And maybe they have someone with comorbidities or who's elderly living at home. This is awful. So I understand why people really want to get past this part of it soon. Was it Dr. Fauci who said the virus has its own timeline? One of the things I think this may be teaching us, and it's certainly reminding me, is that humans are not in charge of nature, not the way we think we are. We really dominate the planet in a lot of ways, but it's still bigger than us.
01:11:36
Speaker
It's like the ocean or something. You may think you're a good swimmer, but if you get a big wave, you're not in control anymore. And this is a big wave. Yeah.
Establishing Credibility in Forecasting
01:11:45
Speaker
So back to the point of general super forecasting, suppose you're a really good super forecaster and you're finding well-defined things to make predictions about, which is, as you said, sort of hard to do. And you have carefully and honestly compared your predictions to reality. And you feel like you're doing really well.
01:12:05
Speaker
How do you convince other people that you're a great predictor when almost everyone else is making lots of vague predictions and cherry-picking their successes, or when there are interest groups that are biasing and obscuring things to try to have a seat at the table? Or, for example, if you want to compare yourself to someone else who has been keeping careful track as well, how do you do that technically?
01:12:29
Speaker
I wish I knew the answer to that question. I think it is probably a long process of building confidence and communicating reasonable forecasts and having people see that they were pretty accurate.
01:12:43
Speaker
People trust something like FiveThirtyEight, Nate Silver or Nate Cohn or someone like that, because they have been communicating for a while, and people can now see they have this track record, and they also explain how it happens, how they get to those answers. And at least a lot of people have started to trust what Nate Silver says. So I think something like that really is the long-term strategy. But I think it's hard, because there's always someone saying every different thing at any given time. And if somebody says there's definitely a pandemic going to happen,
01:13:12
Speaker
and they do it in November 2019, then a lot of people may think, wow, that person's a prophet and we should listen to them. To my mind, if you were saying that in November of 2019, that wasn't a great prediction. I mean, you turned out to be right, but you didn't have good reasons for it. At that point, it was still really uncertain unless you had access to way more information than as far as I know anyone had access to.
01:13:35
Speaker
But sometimes those magic tricks where somebody throws a dart at something and happens to hit the bullseye might be more convincing than an accurate probabilistic forecast. I think that in order to sell the accurate probabilistic forecasts, you really need to build a track record of communication and build confidence slowly.
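On the "how do you do that technically" part of the question above, one simple approach, sketched here with invented numbers, is to compare forecasters only on the questions they both answered, using a proper scoring rule such as the Brier score.

```python
# Compare two forecasters on the resolved questions they both answered (invented data).
# Lower mean Brier score means more accurate on the shared set.

alice    = {"q1": 0.80, "q2": 0.30, "q3": 0.95}   # forecaster A's probabilities
bob      = {"q1": 0.60, "q2": 0.55, "q4": 0.20}   # forecaster B's probabilities
resolved = {"q1": 1, "q2": 0, "q3": 1, "q4": 0}   # 1 = happened, 0 = didn't

shared = alice.keys() & bob.keys() & resolved.keys()

def mean_brier(forecasts: dict) -> float:
    return sum((forecasts[q] - resolved[q]) ** 2 for q in shared) / len(shared)

print(f"Shared questions: {sorted(shared)}")
print(f"A's mean Brier: {mean_brier(alice):.3f}")
print(f"B's mean Brier: {mean_brier(bob):.3f}")
```

The comparison is only as fair as the overlap: a handful of shared questions, or questions of very different difficulty, can make one record look better for reasons that have nothing to do with skill.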
01:13:54
Speaker
All right, that makes sense. So on prediction markets and prediction aggregators, they're pretty well set up to treat questions like, will X happen by Y date where X is some super well-defined thing? But lots of things we'd like to know are not really of this form. So what are other useful forms of question about the future that you come across in your work? And what do you think are the prospects for training and aggregating skilled human predictors to tackle them?
Diverse Questions and Avoiding Groupthink
01:14:24
Speaker
What are the other forms of questions? There's always a trade off with designing a question between sort of the rigor of the question, how easy it is to say whether it turned out to be true or not, and how relevant it is to things you might actually want to know. That's often difficult to balance. I think that in general, we need to be thinking more about questions. So I wouldn't say, here's the different type of question that we should be answering, but rather let's really try to spend a lot of time thinking about the questions
01:14:52
Speaker
what questions could be useful to answer. I think just that exercise is important. I think things like science fiction are important where they brainstorm a possible scenario and they often fill it out with a lot of detail. But I often think in forecasting, coming up with very specific scenarios is kind of the enemy. If you come up with a lot of things that could plausibly happen and you build it into one scenario and you think this is the thing that's going to happen, well, the more specific you've made that scenario, the less likely it is to actually be the exact right one.
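A quick bit of arithmetic makes the point about specificity concrete: even if each element of a detailed scenario is individually plausible, the conjunction of all of them is much less likely (the 80% figures here are arbitrary, and the claims are assumed to be roughly independent).

```python
# Five individually plausible claims, each given an arbitrary 80% chance.
# The fully specific scenario requires all of them to be right at once.
import math

claims = [0.8] * 5
print(f"Each claim: 80%; the whole detailed scenario: {math.prod(claims):.0%}")  # about 33%
```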
01:15:22
Speaker
We need to do more thinking about spaces of possible things that could happen, ranges of things, different alternatives, rather than just coming up with scenarios and anchoring on them as the thing that happens. So I guess I'd say: more questions, and realize that, at least as far as we're able to know (I don't know if the universe is deterministic, but at least as far as we're able to know), a lot of different things are possible, and we need to think about those possibilities and potentially plan for them.
01:15:49
Speaker
All right. And so let's say you had a hundred professors with deep subject matter expertise in say 10 different subjects, and you had 10 super forecasters. How would you make use of all of them? And on what sorts of topics would you consult?
Collaboration between Super Forecasters and Experts
01:16:05
Speaker
What group or combination of groups? That's a good question. I think we bash on subject matter experts because they're bad at producing probabilistic forecasts. But the fact is that I completely depend on subject matter experts.
01:16:18
Speaker
When I try to forecast what's going to happen with the pandemic, I am reading all the virologists and infectious disease experts because I don't know anything about this. I mean, I know I get some stuff wrong, although I'm in a position where I can actually ask people, hey, what is this and get their explanations for it? But I would like to see them working together to some extent having some of the subject matter experts recognize that we may know some things about estimating probabilities that they don't. But also the more I can communicate with people that know specific facts about things, the better the forecasts I can produce are.
01:16:48
Speaker
I don't know what the best system for that is. I'd like to see more communication, but I also think you could get some kind of a thing where you put them in a room or on a team together to produce forecasts. When I'm forecasting, typically I come up with my own forecast and then I see what other people have said. And I do that so as not to anchor on somebody else's opinion and to avoid groupthink. You're more likely to get groupthink if you have a leader and a team that everyone defers to and then they all anchor on whatever the leader's opinion is.
01:17:16
Speaker
So I try to form my own independent opinion, but I think some kind of a Delphi technique where people will come up with their own ideas and then share them and then revise their ideas could be useful and you could involve subject matter experts in that. I would love to be able to just sit and talk with epidemiologists about this stuff. I don't know if they would love it as much to talk to me. I don't know. But I think that that would help us collectively produce better forecasts.
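As a rough sketch of the aggregation half of that Delphi-style process (the estimates below are invented, and real Delphi rounds also circulate reasoning, not just numbers), the key move is to collect independent estimates before any discussion and only then pool them, for example with a median or a geometric mean of the odds.

```python
# Pool independent probability estimates gathered before any group discussion.
# The estimates are invented for illustration.
import math
import statistics

estimates = [0.10, 0.25, 0.15, 0.40, 0.20]  # each person's independent probability

median = statistics.median(estimates)

# Geometric mean of odds, one common way to pool probability forecasts.
odds = [p / (1 - p) for p in estimates]
pooled_odds = math.exp(sum(math.log(o) for o in odds) / len(odds))
pooled = pooled_odds / (1 + pooled_odds)

print(f"Median estimate: {median:.0%}")
print(f"Geometric-mean-of-odds pool: {pooled:.0%}")
```

After sharing the pooled number and the arguments behind the outliers, people revise and the process repeats, which keeps anchoring on any one loud opinion to a minimum.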
01:17:42
Speaker
I am excited and hopeful for the top few percentage of super forecasters being integrated into more decision making about key issues.
Resources and Related Podcasts
01:17:53
Speaker
So you have your own podcast. If people are interested in following you or looking into more of your work at the Global Catastrophic Risk Institute, for example, or following your podcast or following you on social media, where can they do that? Go to the Global Catastrophic Risk Institute's website. It's gcrinstitute.org. So you can see, read about our work. It's super interesting and I believe super important. We're doing a lot of work now on artificial intelligence risk.
01:18:21
Speaker
There's been a lot of interest in that, but we also talk about nuclear war risk, and there's going to be, I think, a new interest in pandemic risk. So these are things that we think about. I also do have a podcast. I co-host it with two other superforecasters, and it sometimes becomes sort of a forecasting-politics variety hour. But we have a good time, and we do some interviews with other superforecasters, and we've also talked to people about existential risk and artificial intelligence. It's called NonProphets, and we have a blog, nonprofitspod.wordpress.org.
01:18:50
Speaker
But NonProphets, it's N-O-N-P-R-O-P-H-E-T-S, like prophet, like someone who sees the future, because we are not prophets.
01:19:00
Speaker
However, there is also another podcast, which I've never listened to and feel like I should, that has the same name. It's an atheist podcast out of Texas, some atheist comedians. I apologize for taking their name, but we're not them, in case there's any confusion. One of the things about forecasting is it's super interesting. And it's a lot of fun, at least for people like me, to think about things in this way. And there are ways you can do it too, like Good Judgment Open. So we talk about that. It's fun, and I recommend everyone get into forecasting.
01:19:31
Speaker
All right. Thanks so much for coming on. I hope that more people take up forecasting; it's a pretty interesting lifelong thing that you can participate in, see how well you do over time, and keep resolving against actual real-world outcomes. I hope more people take this up and that it gets further and more deeply integrated into communities of decision makers on important issues. Yeah. Well, thanks for having me on. It's been a super interesting conversation. I really appreciate talking about this stuff.
01:20:02
Speaker
If you found this podcast interesting or useful, consider sharing it on social media, forums, with friends, or wherever you think it might be found valuable. We'll be back again next month with another episode in the FLI podcast.