
62: The Atom Bomb of Information Operations (An Interview with John Fuisz of Veriphix)

Breaking Math Podcast
526 Plays · 3 years ago

Forecasting is a constantly evolving science, and has been applied to complex systems; everything from the weather, to determining what customers might like to buy, and even what governments might rise and fall. John Fuisz is someone who works with this science, and has experience improving the accuracy of forecasting. So how can forecasting be analyzed? What type of events are predictable? And why might Russia think a Missouri senator's race hinges upon North Korea? All of this and more on this episode of Breaking Math.

The theme for this episode was written by Elliot Smith.

[Featuring: Sofía Baca, Gabriel Hesch; John Fuisz]


Transcript

Introduction to Forecasting Techniques

00:00:00
Speaker
Forecasting is a constantly evolving science and has been applied to complex systems, everything from the weather to determining what customers might like to buy and even when governments might rise and fall. John Fuisz is someone who works with this science and has experience improving the accuracy of forecasting. So how can forecasting be analyzed? What type of events are predictable? And why might Russia think a Missouri senator's race hinges upon North Korea? All this and more on this episode of Breaking Math.
00:00:27
Speaker
Episode 62, The Atom Bomb of Information Operations.

Meet the Hosts of Breaking Math

00:00:37
Speaker
I'm Sophia. And I'm Gabriel. And you're listening to Breaking Math. Tensors are the mathematical objects used in Einstein's general theory of relativity. And now they're in poster form. We have a lovely 24 inch by 36 inch poster that we made about tensors. And it's available on facebook.com slash Breaking Math podcast. Just click on shop.
00:00:56
Speaker
The posters are matte, full color, and make a perfect addition to any office or bedroom. So for $15.15, plus $4.50 shipping and handling, a grand total of $19.65, you can get this poster for you or someone you know. So check it out at facebook.com slash breakingmathpodcast when you click on store and see if it's right for you.

Supporting Breaking Math

00:01:15
Speaker
We're also on Patreon at patreon.com slash breakingmath. You can go there if you want to support the show. Even a $1 donation makes a huge difference and we'll send you a thank you message. With $1 or more you can gain access to episodes slightly early and without ads. We also include the outlines we use to produce the show. We really appreciate your patronage at patreon.com slash breakingmath.
00:01:34
Speaker
For news about the show, we're on Twitter at Breaking Math Pod, on Facebook at facebook.com slash Breaking Math Podcast, and we have an interactive website at breakingmathpodcast.app and more content coming soon. Get in touch with us at breakingmathpodcast at gmail.com with questions, comments, suggestions, corrections, and anything else.

Interview with John Fuisz: Math in Advertising and Politics

00:01:53
Speaker
And we're interviewing today John Fuisz of Veriphix. John, thank you for being on the show. Thank you for having me.
00:02:00
Speaker
Yeah, I'm very, very excited to be on this show, actually. I'm excited for a few reasons. First of all, the way I got connected with John is through my brothers. My brother Adam, I believe, works for you right now. Is that right? Is that the capacity of your relationship?
00:02:13
Speaker
I don't think Adam necessarily works for me. I think I work for Adam. He keeps us going. So thank you to Adam for hooking me up with our awesome guest today. I'm very, very excited about today's episode because we're going to talk about how math is used in a software that is used in advertising as well as in politics. And we're also going to talk about deeply held beliefs.
00:02:35
Speaker
and how those beliefs might be hard to change, but how they can be nudged for a given means to an end. So I'm quite excited about that. Oh yeah, same. But John Fuisz, who are you? Like, what's your background? Tell us about your research, how you got into this field.
00:02:51
Speaker
So I started as a physicist, a physics degree out of Georgetown, and then decided to go into law. And I spent 25 plus years in high tech law, intellectual property work, litigation, et cetera, where I was always faced with how do you move a jury? How do you convince someone it's not X, it's Y?
00:03:14
Speaker
I had a foot in business. I ended up playing with some forecasting issues. I represented the Libyan Broadcasting Corporation on disinformation when that regime fell. Lo and behold, the U.S. has its issue with Russian foreign influence. And really, knowing the math and knowing probabilities and forecasts, it was a problem that was relatively easy for us to solve. And that became Veriphix.
00:03:43
Speaker
Awesome. And so one thing that I think might be good to start off with is something that we'd touched on during the introductions.

Forecasting Politics: The Missouri Senator Race

00:03:50
Speaker
Do you want to tell the story about the senator's race? Sure. So that's 2018. Senator McCaskill is fighting for her life against the ultimate victor, Senator Hawley. So when we had come up with our method and our basic theory,
00:04:08
Speaker
We wanted to test it in a real-life situation, and that was the 2018 Missouri midterms. Just to take a step back, the important thing to think about with respect to how we look for manipulation of the public, of the public's belief, is we start with beliefs. Whatever you believe,
00:04:26
Speaker
Think of the placebo effect. If you believe a pill will help you, it will actually not only have a, you know, emotional impact, it has a physical impact, right? Your body takes that belief and actually internalizes it and does things with it. And really quick, just before we continue with your story, you keep mentioning this idea of belief. And can you refine that definition perhaps for our audience?
00:04:50
Speaker
It's really, so belief, we don't look at it as an emotion, kind of happy, sad. It's how you think something will resolve. Yeah, so we focused on belief and what issues will change belief. So what happened in Missouri was we were able to identify issues that we thought would impact people's belief with respect to Senator McCaskill. And then what we did is we looked on social media for abnormal usage of the words.
00:05:14
Speaker
So that would be if the word Korea, on average, is mentioned perhaps a thousand times a day, we would look for evidence when it was used maybe 10,000 times a day. And then we would look at the impact that would have on people's belief. And we had identified sufficient evidence that we were convinced that the issue Korea, for some unknown reason, perhaps related to pro-military, anti-military, would be bad for Senator McCaskill, would be good for Senator Hawley.
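As a rough sketch of the spike detection John describes (the transcript doesn't specify Veriphix's actual method, so the window, threshold, and function name here are illustrative assumptions), one simple approach is to flag days where a keyword's mention count sits far above its trailing baseline:

```python
from statistics import mean, stdev

def flag_spikes(daily_counts, window=30, z_threshold=3.0):
    """Return indices of days whose count is far above the trailing baseline.

    daily_counts: mention counts for one keyword, one entry per day, oldest first.
    A day is flagged when it exceeds the mean of the preceding `window` days
    by more than `z_threshold` standard deviations.
    """
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_counts[i] - mu) / sigma > z_threshold:
            spikes.append(i)
    return spikes

# Example: "Korea" hovers around 1,000 mentions a day, then jumps to 10,000.
counts = [1000 + d % 50 for d in range(60)] + [10000]
print(flag_spikes(counts))  # -> [60]
```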
00:05:38
Speaker
So we recruited a bunch of students over at George Mason University, and during the final six weeks of the election, each week we'd go through the tweets to look for abnormal usages of that terminology. And sure enough, right at the end, Senator McCaskill was doing very well, and then, for no apparent reason, it had nothing to do with the news, had nothing to do with anything else, you saw the issue of Korea just spike. And as it spiked and was abnormally pushed through social media, Senator McCaskill's ratings just tanked.
00:06:03
Speaker
And what we found with our research is that you can actually move a population three to five percent, three to five percent of the population. So in these tight races, that's enough to flip a race. And she ultimately lost. And so that gets us into our next question, which is how do you quantify belief? You mentioned three to five percent. And we discussed a little bit in the pre-interview the importance of belief change. And so what is the way you quantify belief? What is belief change? And how does it apply to this work?
00:06:32
Speaker
Yeah, right. That's a key issue because you have to also deal with error, right? So what is it and what's your error rate in measuring it? And at the end of the day, if you told me what your belief was on an issue and Gabriel told me what his belief is and I told you what my belief is, we all might use different numbers. We might use different words, but it all might be the same.
00:06:51
Speaker
So actually understanding kind of a static snapshot of what your belief is is incredibly difficult, if not impossible. But looking at the change isn't. So when you go into the emergency room, say we say we cut our finger right now during the podcast and we all have to run over to the emergency room. They will ask you what your pain level is on a scale of one to 10.
00:07:07
Speaker
Now, whether you say a five, whether that's what someone else who just cut their finger would say, maybe they say a four, maybe they'd say a six, we don't know. But what we do know is, when the nurse keeps coming back in to check on us and we update that pain score, we can see the delta. So we know whether you're feeling better or you're feeling worse. And that's really what we focus on. So we're actually measuring belief change on a week to week basis, as opposed to trying to take a static snapshot in time.
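To make the delta idea concrete, here is a minimal sketch, assuming a panel that answers the same probability question every week (the panel size and numbers are invented for illustration, not Veriphix data):

```python
def weekly_deltas(panel_forecasts):
    """panel_forecasts: one list of probabilities (0-1) per week, one entry per panelist.
    Returns the week-over-week change in the panel's mean forecast."""
    means = [sum(week) / len(week) for week in panel_forecasts]
    return [round(later - earlier, 3) for earlier, later in zip(means, means[1:])]

# Three weeks of a five-person panel answering the same forward-looking question.
weeks = [
    [0.70, 0.72, 0.68, 0.75, 0.70],  # week 1, mean 0.710
    [0.73, 0.74, 0.70, 0.76, 0.72],  # week 2, mean 0.730
    [0.71, 0.73, 0.69, 0.74, 0.71],  # week 3, mean 0.716
]
print(weekly_deltas(weeks))  # [0.02, -0.014]: belief rose, then fell back
```

The absolute numbers matter less than the direction and size of the change, which is the point of the pain-score analogy.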
00:07:33
Speaker
So one thing that we're curious about is, in your research, how are the questions framed? Yeah, right. So that's a fabulous question, because, you know, when we looked at this issue, we realized the first question was whether to play in math or play in language. And language itself, we determined, has too much error. Right. Language is a biological population. So if we say family, the ideal family, apparently, is three point one four children, three point one four people.
00:08:01
Speaker
but yet there is no family with .14 of a person. Yet we all have some idea of what a family does or does not include, and that certainly can be different from person to person. So there's too much error. So we realized we had to stay in math.
00:08:13
Speaker
I remember you said that the questions that are involved in your research have to be forward-looking sufficiently far in the future. Could you elaborate on why that's important? Once we got into realizing that we needed to use numbers, what's the most accurate way to determine whether or not something's going to happen? We went to a wisdom of the crowds, group forecasting. We use numbers and we use group forecasting because it would be the best way to have an accurate prediction of the future. You mean forecasting for people, right?
00:08:43
Speaker
Yeah, so the general consensus is 25 people or more, if you take 25 people and you ask them how many,

Group Forecasting for Accuracy

00:08:51
Speaker
I think the famous original study was weighing a cow during a country fair or something, a state fair, and asking
00:08:57
Speaker
random people to submit their bids, and it really was the average that was the most accurate. So if you take 25 people and you're asking these forward-looking questions, you will actually get a fairly accurate prediction of what will happen in the future. But even knowing that that may not be a representative sample, we look then for change in forecasting or betting, essentially. Who's going to win the Super Bowl this week? Who's going to win the Super Bowl next week? And we're looking at belief change week to week.
00:09:22
Speaker
And where does this number 25 come from? Just experience? Because I mean, and you don't mean that groups of 50 would be less accurate, do you? Yeah, they are because the error doesn't cancel. Essentially, once you get to 25 people plus, you can end up with 95% accuracy simply by, you know, the extremes cancel and you actually end up with a fairly accurate forward looking forecast. So there's the whole wisdom of the crowd.
00:09:48
Speaker
which is where a lot of the forecast has gone to. And how can this be modeled statistically, this almost optimum point of 25? Because it seems to fly in the face of some conventional statistical knowledge about standard deviations and things like that.
00:10:10
Speaker
There's several things at play. One, your 25 people, there are certain criteria that need to be placed on them. So in terms of wisdom of the crowds, you need sufficient diversity. You need people who are not talking to each other. You really need 25 separate data points. The easiest way to think about that is: I'm taking 25 measurements in different ways and then essentially ensembling the result. A famous application of that is probably the Netflix
00:10:38
Speaker
prize for optimizing their CineMatch algorithm. So that's the one that recommends, if you've watched X, you might actually like Y or Z. They were trying to optimize their algorithm, and what they found is that each team participating on its own essentially reached limits that they couldn't get past. If they simply took different approaches and averaged them out, that average ended up being more accurate. So what happened is, I think it was the BellKor team, after three or four years, teamed up with the Austrians, teamed up with another team I think from the US.
00:11:08
Speaker
and ultimately achieved the 10% threshold improvement that Netflix had set for the prize. The remaining teams had, I think it was 30 days to offer a counter proposal, and I believe it was 40 teams. All those 40 teams did was average their results, and they achieved the same level of improvement. The five elements necessary for the wisdom of the crowd to essentially work: you need diversity of opinion, which makes sense. You see that in businesses. If you have people talking to each other,
00:11:39
Speaker
not willing to offer diverse theories, you end up with skewed results. The individuals have to be independent. They have to be decentralized. And then you have to ask the questions in such a way that you can actually aggregate all of the answers. And there has to be trust. So in terms of taking those five factors and building them into our question set, we use much more than 25 people. We use 100 person panels.
00:12:05
Speaker
100 different people, all independent, decentralized, we take them from across the US or wherever we're studying. We have a way to aggregate all of their issues and we make sure they stay anonymous. We pay the people and we leave them completely anonymous and make sure they can't be targeted with anything. And the system works. We've run our system and run questions that were identical to surveys, 1000 plus person surveys, and our answers, our results are essentially identical.
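A toy simulation of the error cancellation being described, assuming unbiased, independent guesses around a true value (a big simplification of real panels, and the numbers here are invented): the crowd mean beats a typical individual, with diminishing returns past a couple dozen people.

```python
import random

random.seed(0)
TRUE_WEIGHT = 1200  # the classic ox-weighing example: true weight in pounds

def simulate(panel_size, trials=2000, guess_sd=150):
    """Compare the panel mean's average error to a single guesser's average error."""
    crowd_err = indiv_err = 0.0
    for _ in range(trials):
        guesses = [random.gauss(TRUE_WEIGHT, guess_sd) for _ in range(panel_size)]
        crowd_err += abs(sum(guesses) / panel_size - TRUE_WEIGHT)
        indiv_err += abs(guesses[0] - TRUE_WEIGHT)
    return crowd_err / trials, indiv_err / trials

for n in (1, 5, 25, 100):
    crowd, individual = simulate(n)
    print(f"panel of {n:3d}: crowd error ~{crowd:5.1f} lbs, lone guesser ~{individual:5.1f} lbs")
```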
00:12:34
Speaker
That's fascinating, and it really does, I think, I mean, Gabriel, tell me if you get the same impression, but it speaks to the idea of there being an essential, almost like an underlying algorithm for humans, because I mean, I'm sure you're aware of bands of humans and how there's like a few dozen people in a band and how it's a good organizational structure.
00:12:59
Speaker
Could it have anything to do with that, these results that you're getting? Could it have anything to do specifically with humans, or would 25 aliens likely have the same... Yeah, no, I'm listening to what you're saying, and so, you know, what John said reminded me of the bands, but it's been a while since I've studied anthropology, so it's... So we're treating humans really as... They're pretty formulaic once you hack into a human brain.
00:13:27
Speaker
And that is humans at our base, we are forecasting machines. So we emerge into the world, we take our first breath, and we start making forecasts. If I cry, will I get fed, right? We learn to do that before we can even talk. And Professor Lisa Feldman Barrett, in her concept of constructed emotion, goes deep into that concept that basically most of human emotion, human reaction, can be dropped down into the fundamental concept of forecasting.
00:13:57
Speaker
which is why we take people right to that forecasting, because it's what you are doing constantly. The core functionality of your brain is as a forecaster.
00:14:06
Speaker
Interesting.

Empowering Through Forecasting: Case Study with Nike

00:14:07
Speaker
I was curious for just some more concrete examples. I know that we talked about an example in politics where you found an uptick in tweets on North Korea. I know there's some other applications of your software, like in the commercial world, like for Nike. Can you tell us a little bit about that? Sure. So you think about belief and how it affects us, right? It's really what product we choose, what we pay for products, how we use products, how we experience things.
00:14:34
Speaker
So the prime use of most of this is in advertising. How do I get someone to either believe in my product, believe it's good, believe it's worth paying for, et cetera. So there's many different uses. And then there's also some social cause uses. So for Nike, they were very kind to be one of our first clients and were very interested in women's empowerment issues, right? What is necessary? What was necessary for Nike to make and help more women feel empowered
00:15:03
Speaker
that they could go out and exercise, that they could enjoy the benefits that men have. So it's identifying the beliefs, the positive beliefs, as well as the negative beliefs, and what issues, we call them nudges, could be used to kind of overcome that hurdle. Because with so much of their prior efforts, regardless of how much money and effort they put in, they weren't seeing the results. And so that was one of the first uses and key uses for the technology.
00:15:27
Speaker
So we studied East Coast versus West Coast. We identified the horrendous misogyny in New York. Some of the horrible things people feel about women across the country and then look for ways to solve it. That's crazy. And that's interesting and that really does get into this idea about ethics.

Ethics in Belief Manipulation

00:15:47
Speaker
And I was wondering if you could talk about ethics and maybe even include some stuff about the status quo bias in your description.
00:15:53
Speaker
Yeah, so on ethics, once you realize that brains are formulaic and you can move people, whether it's ethics in advertising, ethics in social issues, ethics, if you can change the world, how would you change it? And so that's been a big issue for us within the company. Honestly, it's one of the first things that investors comment. Once someone has seen the technology and they see what it can do, it's how are we going to make sure you use it properly? And that's been a big issue. So we have an ethical policy within the company.
00:16:22
Speaker
We have a process in terms of how we even onboard clients. There are certain issues we will just not work on. I think we want to be someplace as a company where we can say no to clients if they are trying to do something in the world that we just don't agree with because there's no
00:16:40
Speaker
point to make the world a worse place. We had a lot of fascinating topics or a lot of fascinating discussions in our pre-interview. Gosh, this brings me back. It surprised me when you were talking about misogyny, how many people in power, like as you were saying CEOs, are very misogynistic as showed up in your data. I don't know. It just shows that we have a whole lot of work to be done.
00:17:05
Speaker
One of the questions, this is a bit of a lighter question here, but I was curious in all of your research across America or the world, I'm curious what kind of trends surprised you the most in terms of deeply held beliefs or attitudes?

Belief Studies Across Regions and Races

00:17:20
Speaker
It has to be with respect to people's view. Wow. Honestly, when we first ran the data and we got data back from New York, which I always just assumed New York is, you know,
00:17:30
Speaker
It's progressive, it's liberal, it's where I think of, as an East Coaster, it's where I think of kind of being the most forward thinking. The comments back with respect to women were so disgusting, I was going to throw people out of our study until I realized it was widely held. So we tested certain things. So standard research, women on average do 30% more work around the house than men. So we thought, okay, well, let's make men aware of that and see what happens. And they don't care.
00:18:01
Speaker
You know, women exercising, you know, whether or not men should take on more responsibilities around the house to allow women to exercise those that are married. And the comments back were along the lines of, you know, if she wants to get married, she'll look good. It's her problem. What was interesting that surprised us was LA was very female friendly.
00:18:23
Speaker
Right? Which, you'd think they objectify women, it must be a horrendous place. But the reality was, I think what we found out was there are powerful women in LA, right? There are actors that, I don't know, pick your favorite female actor, Scarlett Johansson this week, who have achieved things people just dream they could. And therefore, women are held in higher regard. Whereas in New York City, it's all investment bankers. Oh, geez. It's all men.
00:18:48
Speaker
Right. And so so there's certain aspects of kind of those issues that are inside society. We ran another study where we did a black panel versus a white panel in New York City during the George Floyd protest. I mean, equally as disgusting. And just it's just fascinating because it's when you can hack into the brain on some level and get someone to just to expose what they're really thinking. That's when you start to have a playbook. OK, now I know what I'm up against. Now I know what issues will move them. Now I can start to do so.
00:19:17
Speaker
That's crazy. What also blows my mind is what you had said previously. You had said that when men find out about this, they don't care. So it's like, how do you make this world a better place then? I know that you're getting to do that as well, but it's just...
00:19:32
Speaker
Well, it seems like, if I'm not mistaken, it seems like a lot of your issues are... It kind of reminds me of a meteor falling on a seesaw where it just changes course if you have another meteor on the other side of the seesaw, it's the world's best seesaw. But it seems like you have these fulcrum words and concepts.
00:19:53
Speaker
And you talked about the very surprising one with North Korea and Russian stuff earlier. Can you think of any other strange fulcrum-type, hinge-type words that you've encountered in the course of your research?
00:20:06
Speaker
Are you talking about issues, like, in other words, why is it North Korea specifically? Oh yeah, like issues as I understand it. And again, I realize we're jumping around topics a little bit, but this is all going to tie in together with the bigger theme here, you know, both talking about treatment of women as well as North Korea and hot-button issues. I think what Sofía is getting at is, why is North Korea a hot-button issue in that geographic location, and are there surprising hot-button issues in other places that you just wouldn't have expected?
00:20:34
Speaker
Yeah, they pop up all the time, right? And that's why we are not rational. If we were rational all the time, it would be easier to advertise to people. It would be easier to get people vaccinated. It would be easier to do a whole host of things. We are not rational. And the reality is, you know, understanding the basis for those irrationalities. What we say we want to do is not necessarily what we do. We know we should be environmentally conscious. We know we should
00:21:00
Speaker
buy X and not Y, but when we go to the grocery store and it's up to us and something's on sale, we buy the package. It reminds me a little bit of the bacon or something. I'm sure you're aware because you do research and statistics of the classic 1950, I think 1953 book, How to Lie with Statistics.

Challenges of Polling and Survey Biases

00:21:15
Speaker
It's not, I very well may have read it, it's not coming to mind. It's a classic book on biases in advertising during the 1950s. A lot of them still exist today, like drawing graphs that start at the wrong
00:21:31
Speaker
place, and one of the things that they talked about in there is how, when they did a phone survey asking people what magazines they bought, everybody's like, oh yeah, I have a subscription to Scientific American or The Times or Time or whatever magazines were big in the 50s, or Saturday Evening Post, things like that. But it turns out people were reading what they considered to be trashy magazines, based on purchase numbers from the companies themselves.
00:22:00
Speaker
Okay, yeah, so dishonesty in polling. Oh yeah, and it's just, I mean, and also I know that people drink twice as much as they tell the doctors that they do, based on the results from landfills.
00:22:09
Speaker
100%, and that's why we did not go down the survey path. Because yes, when you go to the doctor and the doctor asks you how much you drank that week, I don't know anyone who is honest. When they ask you how much you exercise, everyone overestimates it. There are studies even on exercise where they tracked people with their phone location for a month and then asked them to self-report whether they went to the gym. They still lie, even though they had their phone and they know whether or not they went to the gym.
00:22:34
Speaker
Yeah. Those errors get worse when you talk to people about emotion. I think it was Ekman's research where he compared people, whether they understood if they had anxiety versus depression, and people only got it right sixty percent of the time. You're talking about Paul Ekman, the facial expression dude.
00:22:48
Speaker
Correct. So it's just people have a horrible time explaining who they are on some level, right? And then that goes back to one of my favorite books, McCall and Simmons's Identities and Interactions, and that concept of there's who we are as a person, who we project to others, and how society sees us. And in most of those surveys, people will filter. So we lie. We say we don't drink as much. We say we exercise more.
00:23:12
Speaker
If it's a social issue, we're environmentalists. When the reality is we're overstating that or we're playing to a social issue. When you get into things like bias, prejudice, it's very hard to go up to someone and say, tell me how prejudiced you are. They won't tell you.
00:23:29
Speaker
Which is why we go into predictions. So we get rid of words. Words have too much error. Too easy for someone to play with us. Stay with math. Math doesn't lie. We go to probabilities and we ask people to start predicting. So they tell me this week they think hate crimes will rise. They think it's 71%. I ask them next week. They say 73%. I ask them next week.
00:23:49
Speaker
they're back down to 72%. Then we start looking, we apply kind of Claude Shannon's information theory, we're looking at resonant frequencies inside those populations, people who are banded in certain areas, and we start to look for commonalities. That's where we start to extract nudges and we understand now, if I amplify that issue, so if I take a shark attack, you see a shark attack story and you go to the beach, all of a sudden you think you're going to get eaten by a shark, even though
00:24:14
Speaker
There's no chance of it. But you saw the story, so it's in your head. And if you amplify some of these issues, you start to change beliefs and then that changes behavior. This goes without saying, but obviously that's what's on all of our minds here with foreign interference in elections and in other things, is they're able to somehow identify those nudge issues similar to exactly what you do.
00:24:34
Speaker
I've read some leaked Chinese documents on commenters and there seems to be a lot of stuff about that. I know they're really concerned about broken window policies and things like that with respect to the internet.

Media Influence on Public Perception

00:24:51
Speaker
Well, if you control information that people are exposed to, you can then insert nudges. So there's the debate over Chinese control over Hollywood, right? And it works positively and negatively. So I was on a trip with my wife, and we were visiting a friend in Russia, and we're getting on the plane, and there are a bunch of Russian men behind us speaking Russian. And we both, like, you know, shiver. And then we looked at each other, and I was like, oh my god, it's like we're in a James Bond movie and something horrible is about to go off. And we realized, no, it's because in the movies, that's all you ever see.
00:25:20
Speaker
Russians talking to each other, it's in a movie, they're all evil and they're all about to do something. No different than something. And so it's a question of you balancing those issues so that people's brains are properly forecasting and knowing, okay, these are not evil people, they just have a different accent. It's what's triggering your emotional reaction.
00:25:39
Speaker
Interesting. That could lead also into one of the other topics that we had listed toward the bottom of the outline. And that's knowing that many entities, and I'm being vague with that word here, whether it's a foreign nation or just a company that wants to sell you products, one of our questions is, what would you advise as the best way for us to, would you say, defend against or be inoculated to it? Like as this technology
00:26:07
Speaker
inevitably grows and expands, and, like, you can't keep a genie in a bottle for more than a couple decades, as this technology expands, can the technology itself be used as a defense against the technology, sort of like an immune-system type of deal?
00:26:24
Speaker
Yes. So let me say two things about the technology, about the concepts. One, we would not have the United States of America if this didn't exist. And that is because the best propaganda ever was the US being able to take divergent groups, people from all around the world, and unite them under a single cause that everyone thought there was an American way of life and something better
00:26:46
Speaker
better to be achieved. When the reality was, it wasn't for large portions of the population, right? We're still going through that. But for that belief, that misinformation, the US wouldn't exist. But for the ability to change people's belief, you would not have capitalism. You wouldn't have advertising. If we all are eating the same soda or drinking the same food, whatever, we don't need advertising. So advertising belief, moving people's belief, I mean, it's fundamental to the American. In terms of now defending against
00:27:14
Speaker
Misuse of that. Yeah, it's not that difficult to identify when someone may be using an issue improperly. We do it somewhat now with advertising. It's just a question of extending that to politics. So I'm thinking on a personal level, just our awareness that, you know, the news is going to be amplified to trigger us around election times. We know that, and just the fact that I know that itself, I might be, you know, hyper-vigilant when I'm reading Twitter. I mean, I might be and I might not be.
00:27:42
Speaker
There's a greater likelihood that I'll be aware of it now that we've had this conversation and that we've been talking about it. Being aware of it is one thing. Were you asking, Sophia, if an actual software would be deployed to counter it?
00:27:56
Speaker
Actually, what I was about to ask is how this research might relate to folklore at all. Because I know that folklore has been used over the millennia to change opinion, things like that. I mean, most tales that people tell children are to get them away from cliffs and rivers so they don't drown or fall to their deaths.

Folklore and Forecasting: Ancient Techniques

00:28:17
Speaker
And I just think, I think it's interesting because you talk about how the United States wouldn't exist without that. And I see the United States as a very folklore-based country. I mean, you have, I mean, we're inoculated with this kind of folklore when we're children: George Washington and the cherry tree, life, liberty, pursuit of happiness, all these wonderful concepts. And it's definitely in a folkloric context. So I was wondering, have you done research on folklore?
00:28:44
Speaker
Yeah, so this ties into something we were talking about before. So let's go back to probabilities for one second, because it all comes out of the math. And that is, if I have an 80% prediction, I'm assuming that something's going to happen. So that essentially should mean the issue will resolve eight times out of 10 in a positive manner. So I should have eight yeses and two nos. How do I know it's wrong? I can know my 80% is wrong in as soon as three repetitions. So if I have a no, no,
00:29:14
Speaker
no, as opposed to no, no, yes, I know my 80% is wrong. For me to know that I'm correct, it's going to take 10 repetitions.
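One way to read the "three repetitions" point, as a quick back-of-the-envelope check: if the 80% figure were correct, three misses in a row would happen less than 1% of the time, so three straight nos is strong evidence the forecast is off, while confirming it is right takes many more repetitions.

```python
# If an 80% "yes" forecast were correct, how surprising are k straight misses?
p_yes = 0.8
for k in (1, 2, 3):
    print(f"{k} consecutive 'no' outcomes: probability {(1 - p_yes) ** k:.3f}")
# 1 -> 0.200, 2 -> 0.040, 3 -> 0.008
```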
00:29:21
Speaker
So now, when you think about it, I'm a human and I have a finite amount of time in this world. So there's certain things that I'm not going to get to repeat that often. I have to rely upon stories from past generations. And that's really the role folklore plays: folklore is filling in those kind of long-term probabilities and those long-term patterns that, as an individual, I may not have the chance to identify. Right. They're essentially the stories. Don't go near the cliff. Don't do this. Don't do that. Even religion on some level.
00:29:49
Speaker
They're trying to instill certain long held beliefs. What happens when I die? Who knows? And why should I behave? Who knows? So you're instilling some of those stories for belief, belief change, where I will never know I'm correct. I will only know I'm wrong. Nice. Very cool. Very cool.
00:30:09
Speaker
You listen to breaking math, which probably means you're a big nerd. And you're in good company. We're all big nerds here at Breaking Math, and I want to talk to you about Brilliant. Brilliant is a one-stop shop for math and science. They have everything from lectures on number theory to mind-expanding puzzles and exercises. And how do you learn this, you might ask? Through both presented information and problems to solve.
00:30:31
Speaker
After all, you learn best by actively using your knowledge. This week we want to feature a wonderful course on machine learning. It is one of many courses in data science available on Brilliant. So what are you waiting for? Sign up at brilliant.org slash breakingmath. The first 250 listeners get 20% off the annual subscription. That's brilliant.org slash breakingmath.
00:30:55
Speaker
Hey, Breaking Math fans. First, I want to thank you for listening. I have an important message for everyone. You can start your own podcast right now with Anchor. Anchor lets you create and distribute your own podcast. Just get an idea, record, and upload. It's just that easy. Anyone can do it. I'm on my way to accomplishing my dream, and you can too. Just get on your device's app store and download Anchor. It contains everything you need to make a podcast. With Anchor, you can put your podcast on all the big platforms.
00:31:24
Speaker
Apple Podcast, Spotify, Google Podcast, Amazon, and more. Reach the whole world with Anchor. Best of all, Anchor is free. You have nothing to lose with a free platform. Download the Anchor app or go to anchor.fm to get started.
00:31:43
Speaker
So a few episodes ago we did an episode on Fermi estimations, which, just to remind everybody, is like saying, you know, how many credit cards are there in the European Union? And then you think, okay, there's 700 million people, probably each person has an average of two credit cards, so there's probably about 1.4 billion credit cards, give or take a little bit.
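The same estimate as a two-line sketch, using the round numbers from the example rather than real EU statistics:

```python
# Back-of-the-envelope Fermi estimate with the round numbers from the episode.
eu_population = 700_000_000   # roughly 700 million people (assumed)
cards_per_person = 2          # assumed average
print(f"~{eu_population * cards_per_person:,} credit cards")  # ~1,400,000,000
```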
00:32:07
Speaker
I mean, that's a very simple example, didn't take too many steps. But I know that you've mentioned that your research is basically doing a very big Fermi problem, right? Yeah. When we started, the original problem we were trying to solve was how do you find foreign influence in social media?

Addressing Foreign Influence in Social Media

00:32:25
Speaker
And so we addressed it as a Fermi problem. And we actually flipped it on its head because we said, OK, if I wanted to influence someone else in a foreign country, how would I do it and ideally not get caught? So that's the step. So we went down that path. Would I look at
00:32:42
Speaker
First, I need to know how people are feeling, right? Because I need to know which direction I need to move them. Are they favoring my issue or not favoring my issue? So where would I look? If I looked at social media, that's one obvious answer where I think a lot of people go to, but social media, it's not a representation of the population, right? It's a subset, anywhere from 20 to 80% of the population on a social media platform, and the algorithms generally pick.
00:33:06
Speaker
various responses to get you to read more, so it's not representative. So if I simply went to social media, I really wouldn't know what's happening in that foreign population. I could try and hire a lot of people in that foreign country, but then I'd probably get caught. So that wasn't probably a good idea. I could look at polling.
00:33:26
Speaker
The problem with polling was huge error, plus or minus 4%. So that's an 8% swing. I mean, that really doesn't tell me much, especially in a tight race. And then it'd be very hard for me to judge whether or not my acts were or were not helping. And we kept going down that process, which is why we ended up at forecasting. Because forecasting gave us the best sense of what people thought. I did not need to use local people. So if we wanted to influence, I don't know, let's say the UK,
00:33:55
Speaker
we could probably get a group of forecasters in the US to predict what's happening in the US or what's happening in the UK and then start to look at changes and get a better sense in terms of what's happening. And then that's what basically led us down this path. It was one large Fermi problem.
00:34:12
Speaker
And so, speaking of Fermi problems, we talked a little bit about accuracy. I think this might be a good time for you to talk about this little story of yours that you mentioned to us in an email about how you became a superforecaster.
00:34:28
Speaker
Oh, yes. So we went down this Fermi process, and realized the best place to look for change in forecasts was the superforecasting and IARPA projects, that's the Intelligence Advanced Research Projects Activity. So post 9-11, the government spent a lot of money trying to get better accuracy out of forecasts, which is how they came up with these 25-person groups.
00:34:51
Speaker
And then there was the book that Philip Tetlock put out on superforecasters, where he said that he had found that there's a special class of individuals who somehow can predict the future better.

Becoming a Super Forecaster

00:35:04
Speaker
He started various groups. There's Good Judgment Open and a number of other groups. And I was plugged into some of those, and I started playing with them. And when I saw his note that said, this cannot be gamed,
00:35:17
Speaker
Of course, that was a challenge to try and game it. And I looked then for the errors in his system, and that error really helped us understand what forecasting can or can't do. So his basic premise was, if someone has a Brier score of less than 0.25, they represent that special subset, the superforecasters.
00:35:39
Speaker
The Brier score is essentially the mean squared error of your forecast. Brier is used for mutually exclusive, discrete outcomes. So the event happens or it doesn't happen.
00:35:49
Speaker
Say it's a 70% forecast and the event happens. The Brier score would be calculated as (1 minus 0.7) squared plus (0 minus 0.3) squared. That would be a Brier score of 0.18. If you average a score like that over a number of questions, you would be a superforecaster. I currently have responded to over 93 questions, 130 questions between the two accounts, and I have a 0.2.
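A minimal sketch of that two-category Brier calculation (the function name is ours; the 70% example is the one from the conversation):

```python
def brier(forecast_prob, outcome):
    """Two-category Brier score for one yes/no question.

    forecast_prob: probability assigned to 'yes' (0 to 1).
    outcome: 1 if 'yes' happened, 0 if it didn't.
    """
    p_yes, p_no = forecast_prob, 1 - forecast_prob
    return (p_yes - outcome) ** 2 + (p_no - (1 - outcome)) ** 2

# A 70% forecast and the event happens: (0.7 - 1)^2 + (0.3 - 0)^2 = 0.18
print(round(brier(0.7, 1), 2))  # 0.18
```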
00:36:11
Speaker
And I haven't read most of those questions, because I've just been playing with his error, which is, in measuring accuracy of forecasts, he didn't contemplate that someone would play the probabilities on the probabilities. So all I do is, if the consensus is 90%, I bet 100%.
00:36:27
Speaker
The crowd will be correct nine out of 10 times. I'll get two Brier points on the one miss. My Brier average will be 0.2 over those 10 questions. I will be a superforecaster, and I've never read a question. Oh my gosh. That's kind of fun. Right. But that was the basis for a huge amount of government research, billions of dollars probably, on a math error: not understanding what a probability means.
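A quick sketch of the gaming strategy just described, reusing the same Brier formula; the 90% consensus and the nine-out-of-ten hit rate come straight from the example above.

```python
def brier(p_yes, outcome):
    # Two-category Brier score: 0 for a confident hit, 2 for a confident miss.
    return (p_yes - outcome) ** 2 + ((1 - p_yes) - (1 - outcome)) ** 2

# Strategy: always bet 100% on whatever the 90% crowd consensus favors.
outcomes = [1] * 9 + [0]             # the crowd is right nine times out of ten
scores = [brier(1.0, o) for o in outcomes]
print(sum(scores) / len(scores))     # 0.2 -- under the 0.25 "superforecaster" cutoff
```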
00:36:55
Speaker
At which point it took us down the path of: okay, how do you know a forecast is accurate? Especially if it's not repeating, which, unlike the weather, is so much of what we do. And we realized, if you can't really know whether a forecast is accurate, what good is it? And then that took us to what it is good for: knowing whether your expectations are met.
00:37:14
Speaker
If you think it's going to rain, you think there's an 80% chance of rain and you take your umbrella and it doesn't, it will affect your emotions. Am I disappointed or am I happy? We use it then as a population to determine normative behavior. If they think there's an 80% chance of a hurricane hitting Miami and you don't evacuate, you are the odd one.
00:37:37
Speaker
If there's a 20% chance of it hitting and you're panicking and leaving, you're the odd one. So we use forecasting not necessarily for the accuracy of the forecast, but to actually assess behavior and normative behavior in society. So that math error changed our way of thinking about forecasting and what it can be used for. And that's how we end up basically learning how to hack the brain and measure belief by having people forecast. And speaking of hacking the brain, I know your research involves constructed emotion.
00:38:07
Speaker
Correct. Constructed emotion. That's the concept of how we experience emotion really is based on how we are raised and what we are told our emotions mean. So as you are young and your parents telling you you're happy, you're not happy, they are putting words to then equate with what you are feeling. And your understanding of those emotions, it really has to do with your emotional upbringing.
00:38:38
Speaker
Now think about that in terms of populations. There will be similarities probably about all the kids in the neighborhood when you grow up in terms of how you were raised. There will be, right?
00:38:47
Speaker
versus maybe that applies to your state, but maybe that's different from the neighboring state. And yet, as you go through life and these different populations mix, you all have different biases. You have different triggers. What will trigger you to be happy, what you say is happy, will all start to change. If you want to learn about triggers, you almost need a partner who's living with you constantly,
00:39:13
Speaker
who knows, okay, this just happened. That's going to trigger John's bad mood, and having that person flag your triggers, all of a sudden you'll start to see your triggers and realize what's changing your emotion and how you describe your emotion, how you're reacting to something that someone else may not react to because they were raised differently. Okay. Yeah. So like if a sports team loses, obviously a huge part of the population will be suddenly grouchy.
00:39:36
Speaker
Right, but I didn't grow up watching sports, so I could still watch a nice game, but it doesn't affect me the same way as maybe someone who was raised a die-hard Steelers fan or some such thing. And I noticed that during this pandemic, there were people who weren't really bothered by the masks and people who were. I mean, I wore a mask and I was bothered by it the entire time. But I did it, you know, for health reasons. But I noticed that some people really didn't like going out in a mask
00:40:05
Speaker
What I've gotten the impression of throughout this whole interview is that we deal with these concepts constantly. I mean, you're talking about how forecasting is your area of research, but also how the brain is a forecasting machine. And I mean, it's interesting, we had an interview with a neuroscientist from University College London a while ago.
00:40:22
Speaker
Zideman from University College London, and he talks about how the brain is like, I mean, I think it's two sides of the same token, a science machine, how the brain works as a constant scientist, having hypotheses, predicting the future. And it's just interesting that we're really finding out, it seems like, especially in the last 20 years, that the brain has been expressing itself within our society, like, you know, as like the institutions of like science and media and things like that.
00:40:52
Speaker
And it's just fascinating how widely the research that you're doing can be applied. No, it is. And that's where we, you know, we view ourselves as a fundamental tool that has so many different potential uses, which is why we're always looking for people to collaborate with on the more specific applications. So you mentioned masks, right? Think about the brain as a forecasting machine. Your eyes are pulling in data. Your ears are pulling in data. Your nose is pulling in data.
00:41:18
Speaker
And in the data that your eyes are pulling in, when someone walks up to talk to you or passes you, friend or foe, maybe you're looking at their eyes, you're looking at their micro-expressions. All of a sudden you put a mask on them and you're denied that data. Right? It very well could be that that's tweaking something in you, because all of a sudden you're not getting the data you need to interact with people around you. Fascinating. I didn't even think about that. I didn't think about that at all. That's actually a fascinating reason why somebody might hate masks.
00:41:46
Speaker
And see, I mean, I just didn't like it because it felt, it was like a hot on my face, but like that's, I mean, that's way more like, that's way more meta. But you might experience it hot on your face and I had the exact same issue, but the reality is what actually could be going on is that it's just the dissonance in your brain that you're just not getting what you need.
00:42:12
Speaker
which we talked about earlier in terms of even how this goes in populations, older people, their senses start to work less and they rely upon past habits and it's just not fun anymore. Yeah, well hopefully there will be tools in the future to reinstate that plasticity, hopefully drugs or something. Yeah, we need to talk about a few more brain cells.
00:42:38
Speaker
So I guess as we're wrapping up, I want to ask you, if somebody wanted to learn more about Veriphix, where would they go? Sure, go to our website. So Veriphix is V-E-R-I-P-H-I-X.com.
00:42:50
Speaker
Or you can find me on LinkedIn, John Fuisz, J-O-H-N, F-U-I-S-Z. There are two John Fuiszes, so look for the one at Veriphix. Awesome. Or any of the other people associated with this. Find them on the internet and ping them. But yeah, I'm just fascinated by the research. Yeah. Fascinated by the tool in terms of what it can do, and we'll see where it takes us tomorrow. And before you go, I'm going to ask one more question. What's the biggest laugh you had during your research? Biggest laugh?
00:43:21
Speaker
Oh, I don't know if anything has struck me as funny. I think I'm still in the, humans can be pretty disgusting. Oh gosh. Yeah, you know, and that's crazy. Like, you know, one of my goals has not ever been to be too cynical, but it's hard when you look at data, like we are animals, you know, so. At a base level, we are animals. We're very animalistic. People are very biased and very formulaic.
00:43:49
Speaker
Is there any hope for the human race? We've made it this far. There's clearly nothing wrong with us. I think as we learn more about ourselves, I can only hope for better things tomorrow. I think the more we learn about how our brains work, hopefully the more we can overcome some of these issues.
00:44:09
Speaker
Well, I have the same hope as you do. And so everyone out there, people who are starting to get into the, you want to get into fields of information. I mean, machine learning and data science and everything is getting bigger and bigger. And I honestly think that that hope that you just talked about is something that everyone who goes into a field like that probably has to have for their own sanity, but also, I mean,
00:44:35
Speaker
It's good to be optimistic, in my opinion, to come up with good strategies for the future. It's good to be pessimistic to avoid pitfalls. Well, John Fuisz, I'm going to thank you so much for being on the podcast today. And again, that's veriphix.com, V-E-R-I-P-H-I-X, or you can find John Fuisz, F-U-I-S-Z, John with an H, on LinkedIn. And he's the one who works at Veriphix, not the other one.
00:45:04
Speaker
Sophia and Gabriel, thank you very much for having me on. Our pleasure. We should do this again sometime.