
Bayes’ Theorem Explains It All: An Interview with Tom Chivers

S5 E95 · Breaking Math Podcast
5.2k Plays · 6 months ago

Tom Chivers discusses his book 'Everything is Predictable: How Bayesian Statistics Explain Our World' and the applications of Bayesian statistics in various fields. He explains how Bayesian reasoning can be used to make predictions and evaluate the likelihood of hypotheses. Chivers also touches on the intersection of AI and ethics, particularly in relation to AI-generated art. The conversation explores the history of Bayes' theorem and its role in science, law, and medicine. Overall, the discussion highlights the power and implications of Bayesian statistics in understanding and navigating the world. 

The conversation explores the role of AI in prediction and the importance of Bayesian thinking. It discusses the progress of AI in image classification and the challenges it still faces, such as accurately depicting fine details like hands. The conversation also delves into the topic of predictions going wrong, particularly in the context of conspiracy theories. It highlights the Bayesian nature of human beliefs and the influence of prior probabilities on updating beliefs with new evidence. The conversation concludes with a discussion on the relevance of Bayesian statistics in various fields and the need for beliefs to have probabilities and predictions attached to them.

Takeaways

  • Bayesian statistics can be used to make predictions and evaluate the likelihood of hypotheses.
  • Bayes' theorem has applications in various fields, including science, law, and medicine.
  • The intersection of AI and ethics raises complex questions about AI-generated art and the predictability of human behavior.
  • Understanding Bayesian reasoning can enhance decision-making and critical thinking skills.
  • AI has made significant progress in image classification, but still faces challenges in accurately depicting fine details.
  • Predictions can go wrong due to the influence of prior beliefs and the interpretation of new evidence.
  • Beliefs should have probabilities and predictions attached to them, allowing for updates with new information.
  • Bayesian thinking is crucial in various fields, including AI, pharmaceuticals, and decision-making.
  • Defining predictions and probabilities is important when engaging in debates and discussions.


Subscribe to Breaking Math wherever you get your podcasts.

Become a patron of Breaking Math for as little as a buck a month

Follow Breaking Math on Twitter, Instagram, LinkedIn, Website

Follow Autumn on Twitter and Instagram

Follow Gabe on Twitter.

email: [email protected]

Transcript

Introduction to Bayes' Theorem

00:00:00
Speaker
Attention, truth-seekers and probability enthusiasts. The time has come to unlock the secrets of the universe, and in this episode we'll embark on a mind-bending exploration of Bayes' Theorem, its astonishing implications, and its potential to predict everything. Yes, you heard that right, everything. From the weather on Mars to the winner of the next lottery, Bayes' Theorem offers a powerful tool for making informed guesses about the future.

Understanding Bayes' Theorem with an Example

00:00:30
Speaker
Now hold onto your hats: Bayes' Theorem isn't something mystical like a crystal ball. It's a mathematical framework that allows us to update our beliefs based on new evidence. Let's say you hear a strange scratching sound coming from the attic. You might believe that it's a harmless squirrel. That's your prior probability. That makes sense, right? But then you see a shadowy figure dart across the window.
00:00:59
Speaker
Oh wait, that's some new evidence. And suddenly your belief that it's a sneaky raccoon skyrockets. You know, that's just your basic posterior probability. See?
00:01:13
Speaker
With every new piece of information, Bayes' Theorem helps us refine our predictions. The more data we gather, the closer we get to a clear picture of what's happening or what might happen, right?
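
(To make the attic example concrete: a minimal sketch of the update being described, using Bayes' theorem, P(H|E) = P(E|H) P(H) / P(E). The prior and likelihood numbers below are invented for illustration and are not from the episode.)

```python
# A minimal sketch of the attic example, with made-up numbers:
# update prior beliefs with new evidence via Bayes' theorem,
# P(H | E) = P(E | H) * P(H) / P(E).

priors = {"squirrel": 0.7, "raccoon": 0.3}        # belief before the shadowy figure
likelihoods = {"squirrel": 0.1, "raccoon": 0.6}   # P(shadowy figure | each animal)

p_evidence = sum(priors[h] * likelihoods[h] for h in priors)           # P(E)
posteriors = {h: priors[h] * likelihoods[h] / p_evidence for h in priors}

print(posteriors)  # raccoon rises from 0.30 to 0.72; squirrel falls to 0.28
```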

Potential of Bayes' Theorem in Prediction

00:01:26
Speaker
So imagine a world where weather patterns are no longer a mystery, where market trends can be accurately forecasted, and where even the next technological breakthrough can be predicted with a high degree of certainty.
00:01:40
Speaker
Bayes' Theorem, my friends, holds the key to unlocking this level of understanding. But is it foolproof? Can we truly predict everything? That's a question that we're going to grapple with throughout the show.

Introducing Tom Chivers and His Work

00:01:54
Speaker
So join me, Autumn Phaneuf, your host at the Breaking Math podcast, in this episode to chat with Tom Chivers, an author and award-winning science writer for Semafor. His writing has appeared in The Times of London, The Guardian, New Scientist, Wired, CNN, and so many more. His books include The Rationalist's Guide to the Galaxy, How to Read Numbers, and, last but not least, Everything Is Predictable:
00:02:22
Speaker
How Bayesian Statistics Explain Our World. Its U.S. release date was this past weekend, May 7th, at most large retailers. So let's dive in.

Tom Chivers' Journey to Bayesian Statistics

00:02:40
Speaker
So we're launching the podcast a couple of days after, but we are super excited to have you on the show, Tom. How are you doing this morning?
00:02:51
Speaker
I'm very well, thank you. Yes, it's fun to talk about the Bayes stuff. I'm still having to remember a lot of it because I wrote the book a while ago, a year ago now, but it's still exciting, and hopefully you'll enjoy it as much as I do. Yes, actually, I read the whole book. Oh, wow. Yeah, cover to cover. I have been actually jazzed about interviewing you for this.
00:03:19
Speaker
Because my background is in industrial engineering and mathematics. And, you know, when you talk about mathematicians, people think everybody just goes from one thing to the next to the next and that the stuff is boring. Right? Math is boring. It's not. We have a lot of quirks.
00:03:42
Speaker
There's a lot of interesting things about it. There's a lot of ways to interact with like decision making and how we live our lives. And actually you can use some quite simple maths. I mean, that's the thing I like about Bayes theorem is that it's actually, it's really simple and it's sort of
00:04:00
Speaker
conceptual stuff. I mean, it's all like multiplication and division. You know, my eight year old daughter could do all the basics that you require for it, but it has these really profound implications for so many areas as well. I will say I'm absolutely thrilled that you've read the whole book. I think you're the first person I've met who's done that who doesn't literally work on it. So that's very cool.
00:04:26
Speaker
So yes, I actually, I love reading a lot of different math books and just science in general. And I'm curious, how did you get involved in science journalism and writing?

Ethical Considerations in Science Journalism

00:04:38
Speaker
Right, okay, so, I mean, well, obviously, like any good science journalist, I have a degree in philosophy. So, I don't know... yeah, so I started out... when I was young, I was just always interested in stuff. I was always like an intellectual magpie; I wanted to learn about as many different things as I could, and the way you get to do that at university is to do philosophy. So you do philosophy of whatever: philosophy of science, philosophy of mind, philosophy of language, you know.
00:05:07
Speaker
After uni, I did more study, more time at university doing a master's degree, and then started, but never completed, a PhD, which was in the ethics of science particularly, and then the PhD was in the ethics of science journalism. I didn't have a clue what I was doing, completely made-up nonsense. I'd never worked a day as a journalist in my life, so it was like being a...
00:05:33
Speaker
Yeah, it was a bit crazy, but it was like being a lepidopterist who's never seen a butterfly or something. It was a weird decision. But anyway, after that, I had to go and get a proper job. Fell by sheer fluke into a job at a national newspaper and sort of steered my way towards science-y things from there because it was what I was interested in.
00:05:57
Speaker
how I ended up getting involved in the book particularly.

Bayes' Theorem in Media and Academia

00:06:02
Speaker
I'd sort of been aware of Bayes' Theorem for years. I remember reading about it in the columns of this British journalist and medical doctor called Ben Goldacre in the early 2000s. I remember finding it baffling: what do you mean a test can be 99% accurate, but
00:06:20
Speaker
there's not a 99% chance that it's right? I don't understand what you mean. And I just found it fascinating. It's played a pretty central role in both of my previous books, because I just find it so interesting, and it seems so crucial to our understanding of so much. But what particularly drove me to write this book was that my last book had a chapter on Bayes, and I wrote an article about it for the British newspaper The Observer,
00:06:49
Speaker
And the editor put in, you know, not as a big deal, a line describing it as, like, how this obscure theorem describes the world, or something like that, in what I think the Americans call the dek, you know, the bit below the headline that isn't the main headline. And it drove people mad, absolutely mad. I had like four days of people raging in my Twitter mentions going, you know,
00:07:16
Speaker
how dare you call Bayes obscure, it's central to probability theory, and all that sort of stuff. Like, yeah, I get where they were coming from. But actually, I bet you 90% of the population has no idea what Bayes' theorem is. No idea. Like, 90% of mathematics professors, uh, and graduate students had at least one question wrong. So do they know what statistics is? You know, exactly. Exactly. It was a fine-oiled machine, whoever taught this. They're like,
00:07:46
Speaker
There's human error. Yeah, yeah, exactly. So anyway, after like three days of what I ended up describing as Obscuregate, because it was just... it was hilarious, you know, an enormous Twitter storm. Or it feels enormous when you're in the middle of it. Oh, it was an incredibly enormous Twitter storm.
00:08:05
Speaker
Yeah, exactly. So after that, I was driven by pure spite, really, because there were professors of biostatistics, and I remember a Stanford statistics PhD guy all weighing in and getting all snotty about it.
00:08:24
Speaker
You know what, I'm going to write a goddamn book about this. I'm going to write a book, and it will be about Bayes' theorem, and it will show you. Anyway, so I did that, and that's what you have now read. It is a product of pure spite. It is the most brilliant, uh, spite book. Yes. Thank you. That's really kind. As I like to say, surviving and thriving out of spite. No, it's a lot of fun. And here we are. Yes, exactly. So, um,
00:08:50
Speaker
beyond that, where does the story start for Bayes' theorem? I know science and religion just don't mix, but now you have math and religion and Thomas Bayes. So I'm curious how that all came together. Yeah. Yeah.

Historical Foundations of Probability

00:09:10
Speaker
Well, so, Bayes himself came at the sort of end of it, or, like, he was...
00:09:18
Speaker
There'd been a couple of hundred years, probably, or certainly 150 years, of people thinking about probability before Bayes. And they'd been trying to work out things like... the classic thing is the problem of the points, which Fermat, of Last Theorem fame, and Blaise Pascal, another great French mathematician, worked out.
00:09:42
Speaker
Look, if you imagine two people playing a simple dice game or something, and they have to stop the game before it finishes, how do you work out what the fair way to split the pot is? So, like, if I've got two points and you've got one, and it's a game of first to three,
00:09:57
Speaker
coin tossing or whatever, what's the fair way to split the pot? And there'd been centuries of working this out, like, should it be: I've got two and you've got one, therefore I should have twice as much as you? And they had the insight that it's about
00:10:15
Speaker
how many outcomes there are from where you are. And so in a two-to-one game, there are four possible outcomes remaining, and the guy who's two to one ahead would win in three out of four. So the fair thing is to split it three parts to one. Anyway, so the probabilists went down this route, and then Bernoulli was looking at things like the law of large numbers. But what all these people were doing was looking at how likely you are to see some
00:10:43
Speaker
result, like, for example, player one winning or player two winning, or, you know, the number of balls drawn from some urn full of black and white balls, whatever, that sort of thing. How likely are you to see some result given a hypothesis, like, you know, or given some state of the world, like that the people are playing a game. But what statisticians and scientists
00:11:04
Speaker
really want from probability is to be able to say, how likely is it that my hypothesis is true? If I could do a COVID vaccine trial, say, and I get in the placebo arm, 10 people get COVID and in the treatment arm, the actual vaccine arm, one person does.
00:11:27
Speaker
how likely I'd be to see that result if, what's it called, the vaccine didn't work. That would be the same sort of thing, you know, and that's the problem. But what we want to know is how likely is it that my vaccine does work? What's the probability that my hypothesis is true? And it was only when Bayes did it... Bayes was the first person to come up with a way of doing that. And he realized you have to use prior probability, you have to use your subjective estimate of how likely things were before. And yeah, the religion side of things was,
00:11:54
Speaker
Well, he was a Presbyterian minister, a non-conformist minister. So in

Bayes' Religious Influence on Mathematics

00:12:01
Speaker
England in the 18th century, you had to follow the Church of England's rules. I mean, this is obviously quite relevant to you in
00:12:11
Speaker
America, because a lot of Americans left for that exact reason, or they weren't Americans at the time. He wasn't allowed to study at English universities, so he had to go to Edinburgh in Scotland. He couldn't preach in official Church of England churches, and all sorts of stuff. But anyway, as well as being a preacher, he was,
00:12:32
Speaker
And I say this with enormous love, but he was a real nerd. He was a massive nerd, right? He was a hobbyist mathematician. Anyone listening to the podcast is a massive nerd. Yeah, of course. And I love them as well. I consider myself very much a nerd, right? That's my whole thing. With at least a master's degree, if not a PhD.
00:12:53
Speaker
Yeah, exactly. It's a strong possibility someone's a nerd. Yeah, you can do a Bayesian thing there. What's your prior probability of someone being a nerd? Well, you know, probably about 5%. When you hear they've got a master's degree in something, that should go up by at least threefold.
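
(The nerd joke can be run through the odds form of Bayes' rule. The sketch below reads "threefold" as a Bayes factor of 3, which is one interpretation of these off-the-cuff figures, not a claim from the book.)

```python
# Odds form of Bayes' rule: posterior odds = prior odds * Bayes factor.
# The 5% prior and the factor of 3 are the speaker's off-the-cuff numbers.
prior = 0.05
bayes_factor = 3.0               # on hearing "they have a master's degree"

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * bayes_factor
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 3))       # ~0.136, so roughly a 14% chance
```
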
00:13:13
Speaker
He'd got involved with this whole sort of coterie of other hobbyist mathematicians hanging around in southern England, around this one Lord Stanhope guy who funded them all and let them... It was just a bit like... I had someone compare it to how rich people nowadays might get involved in sports teams and stuff, you know, but back then they got involved in science. They spent their endless leisure hours writing papers for the Royal Society and that sort of stuff.
00:13:40
Speaker
Where religion also comes into it, which you might like, is that after Bayes' death, his friend Richard Price, and I love this detail, was a friend of Benjamin Franklin's and a friend of John Adams, and I think Thomas Jefferson. I'd have to look back in the book, but yeah, a friend of these amazing founding fathers.
00:14:03
Speaker
Yeah, he was another nonconformist preacher, and he was very interested in using Bayes' theorem. He's the one who published the theorem after Bayes' death in the Philosophical Transactions of the Royal Society. And he wanted to use Bayes' theorem to defend God, to defend the idea of God from Hume, David Hume, the philosopher, because Hume had
00:14:29
Speaker
Hume had said the classic thing of extraordinary claims require extraordinary evidence. If you want to say that a miracle has happened, it needs to be more
00:14:39
Speaker
plausible that the miracle happened than that the person who's telling you about the miracle is lying to you. And so he's saying you can never really believe in biblical miracles, for instance. And Price, obviously, being a preacher, being a man of God, he wanted to say, actually, it is possible for unexpected things to happen, and you can never be certain they don't. And he used Bayes' theorem, which
00:15:02
Speaker
has, baked right into the maths of it, pretty much, the impossibility of ever finding certainty. Although that might be a bit of a paradox now I say it out loud. But he was saying that, you know, no matter how many times you see the sun coming up, you can never be sure that it will come up another time. And
00:15:22
Speaker
He therefore argued that no matter how many times you see people not coming back from the dead, for instance, it doesn't mean it didn't happen in the past. So he was using it as a way of defending biblical tradition from Hume. I'm not saying he was successful at that, but that was the idea. OK. OK. So with that,
00:15:46
Speaker
Out of curiosity, where does this go with innovations in science and math? And how does this play into, like, what we're doing now? Okay. Well, so... Finish the question, sorry, I talked over you there. So, like, what if
00:16:04
Speaker
there was one overarching theory that could help explain much of our modern-day lives, right? So we're looking at how this plays into AI and the big topics that we... Yeah. Yeah. So, I mean, the classic Bayes' theorem application is usually medicine, right? That's how it usually comes up, with
00:16:29
Speaker
people talking about cancer screening or COVID tests. You know, if someone says a test is 99% specific and 99% sensitive, that is, if you have the condition that you're testing for, it only returns a false negative one time in 100, and if you don't have the condition, it will only return a false positive one time in 100, then that sounds like it should mean,
00:16:52
Speaker
if you take the test and it comes back positive, there's only a one in a hundred chance that it's a false positive, right? But I was thinking about this the other day, about the best way of demonstrating it, and I was saying, you know, imagine that I went and took a medical test for some condition or other. And we know that it only returns false positives one time in a hundred.
00:17:14
Speaker
And I go and take the test and I get a positive result. How likely is it that I have the disease? And most people would probably say, Oh, you know, one in a hundred.

Medical Testing and Prior Probabilities

00:17:23
Speaker
But then what if I said it was a pregnancy test? Um, or one of those rapid COVID tests, when nobody around you is testing positive, and it shows that you're positive.
00:17:36
Speaker
Yeah, but I mean, if it was a guy who looks like me taking a pregnancy test, then most of us would think it's more likely that it's a false positive than a true positive, even if false positives are pretty rare, because, you know,
00:17:55
Speaker
if you test 100 people, you might get the false positive once, and that's more likely to happen than that I am unexpectedly pregnant. I'm quite old now, so it would be surprising. But yeah, I know it. Yeah, well, exactly, exactly. There's still hope yet.
00:18:14
Speaker
So that's the classic thing. If you do a cancer test, this very accurate cancer test, and it returns positive, you can't know how likely you are to have cancer unless you know how likely it was that you had cancer in the first place. You need to compare the likelihood of the hypothesis "this is a false positive" to the likelihood of the hypothesis
00:18:35
Speaker
I have this very rare cancer. But what Bayes' theorem does is force you to look at how likely the two hypotheses are. And that means you have to know how likely your thing was in the first place. So that's the classic application. But it comes into a million different varieties. I mean, you mentioned law. There's a classic thing. When you have court cases, right? Yeah.
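
(To put rough numbers on the screening example before the conversation turns to the law: a minimal sketch assuming a prevalence of 1 in 1,000, a figure the episode does not specify. It shows why a very accurate test for a rare condition still yields mostly false positives.)

```python
# A 99%-sensitive, 99%-specific test for a rare condition.
# The prevalence is an assumed figure, for illustration only.
prevalence = 0.001     # 1 in 1,000 people actually have the condition
sensitivity = 0.99     # P(test positive | have it)
specificity = 0.99     # P(test negative | don't have it)

p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
p_condition_given_positive = prevalence * sensitivity / p_positive

print(round(p_condition_given_positive, 2))  # ~0.09: most positives are false positives
```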

Bayes' Theorem in Law and AI

00:19:02
Speaker
Yeah, exactly. You have the thing called the prosecutor's fallacy, which is literally just not thinking like a Bayesian really. It is that you
00:19:14
Speaker
A lot of times a prosecutor will say, in a court case... For example, there was an awful, awful case in the UK, which I really shudder to think about, where this poor woman, two of her babies died of cot death.
00:19:34
Speaker
That's a very rare condition. And a doctor told the court that, you know, since the condition only happens, I can't remember, it was one in 17,000 times or something like that.
00:19:46
Speaker
that, well, one in 17,000 times one in 17,000 is one in whatever it was, you know, some number of millions. And therefore, the chance that this woman is innocent is only one in however many millions, you know, and she went to prison. But as other people, as statisticians, pointed out, what he's done there is the classic mistake of assuming that the chance of seeing something happen,
00:20:08
Speaker
yeah, the chance of seeing this result given a hypothesis, is not the same as the chance that the hypothesis is true given the result, any more than saying only one in eight billion humans is the Pope is the same as saying there's only a one in eight billion chance that the Pope is human. They're different questions, fundamentally different questions, and what you need to do is compare the likelihood that
00:20:37
Speaker
this was a coincidence versus the likelihood of a woman being a double murderer, which is also extremely unlikely. You need to compare the hypotheses. So it absolutely comes into the law. With AI, it's fascinating in a way, because all that modern AIs are really doing is... Is statistical probability.
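
(The Pope line is a compact way of seeing that P(A given B) and P(B given A) are different quantities; this tiny sketch just spells out that contrast using the episode's own figure of eight billion humans.)

```python
# "Only 1 in 8 billion humans is the Pope" is a statement about P(Pope | human),
# not about P(human | Pope); the two conditional probabilities are very different.
humans = 8_000_000_000
p_pope_given_human = 1 / humans   # roughly 1.25e-10
p_human_given_pope = 1.0          # every Pope so far has been human

print(p_pope_given_human, p_human_given_pope)
```
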
00:20:59
Speaker
Predicting stuff. Yeah, exactly. It is predicting stuff. So, you know, when we ask ChatGPT, "How are you?" and it says, "I'm very well, thank you," it's not doing that because it is very well. It's just predicting that that's the sort of thing a human would expect to hear, or would say, given that prompt, right?
00:21:20
Speaker
And obviously, Bayes is the maths of prediction. Your prior is a prediction, and you update your prediction with new information, new evidence. And when you're doing something like an LLM with AI, you have your large language model there that just takes it from
00:21:47
Speaker
what you're doing and it automatically will predict the next few words. Yes, yeah. And it just makes things...
00:21:57
Speaker
much more, well, easier for people to just think about and just hit the tab button when they're writing something, or you just go to that next thing, right? And you can go to your next stop automatically. But what is the danger of that when we're thinking about ethics, right? God, I mean, that's a bigger question, isn't it? I mean, what I will say about LLMs that I find
00:22:24
Speaker
difficult: there's a lot of people saying that, you know, it's just predicting what we'll say next. Now, I always find it funny when people put the word "just" before things. It's like saying, he's just
00:22:36
Speaker
flying by flapping his arms. He's just running 100 meters in 9.6 seconds. It's very easy to say "just," but actually predicting things is very difficult. And that's why humans are good at surviving in the world: because we predict what the world will be like, which a stone can't do. And with AIs, there's evidence they do build models. But yes, this is slightly off the topic of
00:22:59
Speaker
the book, maybe, but the world is definitely going to have a complicated time working out the ethics of AI, you know, as AIs predict what humans would do, to, you know, create art, for instance, and suddenly we've got machines that can create beautiful art for us which isn't human-made. How do we deal with that? Is it copyrighted? That's going to be very complicated. I don't know how... Luckily, I don't have to answer those questions.
00:23:27
Speaker
I play a lot of trading card games. Really? Yes. So a lot of the artists are in, I'll say, the smaller art market. Yeah. There have been posters that have been put out... so there have been trading card games in which we have seen this.
00:23:53
Speaker
And usually the people that play it are very competitive. I know, because I'm a competitive trading card game player. And on many occasions, you look at the cards, you look at the art, and you ask, was this done by the artist? Was this done by a certain person? Was this faked, right?
00:24:13
Speaker
And you know that there's certain features in the art, even if it is AI generated. Where do you look at these things? You look at the fine details.
00:24:27
Speaker
you look at the border of somebody's shirt, right? You look at the tiny patterns in there. And the predictability, I will say, and cohesiveness of something: if you're looking at the big picture, it looks fabulous. If you zoom in on something smaller, a fine detail, you'll realize that instead of a square, there's a square on one side of the shirt and then a circle on the other, or on the legs or the border.
00:24:56
Speaker
And then if you were looking at something like body armor, right? It will have like a triple border somewhere, say on the shoulder pads, then you look at the chest piece and it only has a single border or a double border, and things are not cohesive in that sense.
00:25:14
Speaker
Okay, so they're not doing a brilliant job at... Well, I will say... That's exactly it: look how fine you have to look. You know, imagine an AI doing that five years ago; it wouldn't get anywhere near it. The progress they've made has been astonishing. Absolutely. It's just that, you know, it's all those fine details. For the AI, I don't know if it will ever line up to the human expectation.
00:25:40
Speaker
Even as someone... so think of it as a bubble, right? You cannot get every single pattern predictably correct. I would say AI and machine learning are a great aid to getting yourself there.
00:26:03
Speaker
However, when you're looking at the very fine details of say a mathematical proof that we haven't solved yet, if we're looking at some sort of new discovery, we can only look at these things based on the facts that we already have. And just knowing that the majority of things, I will say for AI, it will be correct within two standard deviations.
00:26:35
Speaker
I think that's probably, well, I certainly think that's true now. I'm, I've been, because I wrote, my first book was about AI. Right. And that was...
00:26:45
Speaker
It came out in, what, 2019? So, you know, I was writing it about five years ago, which is ridiculous. Okay, that obviously can't be right. You know, it's 2019, it's not five years ago, but somehow it is. No, it's never, never. Can't be, can't be. Yeah. Anyway. So I remember when I was writing that, it was still pretty big news that an AI image classifier could reliably tell the difference between a cat and a dog. And now we are well past that.
00:27:13
Speaker
And not long before then, three or four years before then, there was that famous XKCD comic about someone saying, can you write me a script that can tell me whether this photo was taken in a national park? Easy, I can do that in a second. And then tell me whether it's a photo of a bird. Well, nope, that's gonna take me five years and a massive research team. It was just...
00:27:41
Speaker
The progress has been astounding, and I agree, the edge cases constantly get harder, and you see this with self-driving cars. Ironing out the last little wrinkles gets harder and harder as you get closer and closer to perfection. You know, you get diminishing returns. That said, I am very wary of betting against them doing it to a significant degree, because I do think, gee, just look at how much better it's got in the last few years.
00:28:11
Speaker
You know, I agree. At the moment, you can always detect, or often detect, little things that aren't quite right, but it happens less and less with each new thing that comes out. Yes. The hardest things in art to do are... so this comes as more of a blanket statement.
00:28:27
Speaker
So, my background is partially in art, partially in math. So I do a lot of origami and mathematics of that sort. But when you're looking at art, even as an artist, even as an illustrator doing computer-generated stuff, the big thing that we look at is the fine details of hands,
00:28:55
Speaker
hand posture. It has proven relatively true that being able to create any sort of toes, hands, I'll say phalanges in general, is one of the most difficult fine-tuned things for an AI or an artist to be able to do.
00:29:17
Speaker
Yeah. I mean, obviously, it was only about a year ago that they just couldn't do it. You could always look and see, oh, they've got six fingers, you know? Yes. And at the moment, so say May of 2024, it's still sort of that issue. Yeah. Yeah. But then my two comebacks to that, not that I'm disagreeing with you exactly, but again, I think the hands problem
00:29:43
Speaker
is less dramatic. Looking at the output from things like Midjourney now, it's less dramatic than it was. They reliably have five fingers most of the time now. Sometimes they then have two left hands or something. It's by no means
00:29:58
Speaker
a right hand. Yeah, exactly. Um, but I mean, I will also say that hands are... it's not as if humans get hands right. You look at the bits that humans draw and get wrong. There was some comic book artist who, you noticed, always seemed to have hands hidden behind stuff and things, because they were such a pain. Even professional artists struggle with hands. So I want to give the AIs a bit of a pass on some of this stuff, you know.
00:30:28
Speaker
Yep, absolutely.

Explaining Conspiracy Theories with Bayesian Reasoning

00:30:31
Speaker
Now, out of curiosity, where are these major predictions that we have, whether it has been in AI, or whether it's in a court of law, politics, voting, where are these things going wrong? Well, I mean, one way in which predictions, so okay, so like, an obvious way in which
00:30:54
Speaker
predictions go wrong is, well, it is in conspiracy theories, right? I think this is the thing that we might not think of as being a Bayesian phenomenon, but it kind of is, because like, what Bayes tells us is,
00:31:11
Speaker
that how our predictions of the world and therefore how we interpret incoming evidence is very much based on what we already believe. That's our priors and we update our prior probabilities with new evidence to come up with our posterior probabilities. And there's a sort of
00:31:28
Speaker
What, with conspiracy theories in particular, I think what I should stress first is that all human activity is, to some extent, predicting stuff. When we are, each breath we take, when we're making any sort of decision, we're predicting that this decision would be better than the other decision, that we'll have these outcomes. But when we have beliefs about the world, they are sort of predictions about how the world will behave.
00:31:54
Speaker
Very crucially, a lot of the book is about how you can really think about human senses, human beliefs, human behavior as a very Bayesian phenomenon, with prior probabilities we update from the world. Now, what that implies is that when people hold false beliefs, or what you might call conspiratorial beliefs, those beliefs should, in theory, get updated with new evidence.
00:32:20
Speaker
So if you believe that the USA never went to the moon, for instance, that the moon landings were hoaxed, you should be able to update that belief as new information comes in: loads of photos come back from the moon, and loads of people you trust tell you, no, we did go to the moon. Look, here's this photo of the flag flying on the moon. Here's the photo of...
00:32:45
Speaker
If it were just the two hypotheses of either we didn't go to the moon or we did, then this new information should update you towards it. Because there are multiple hypotheses and because someone has a very strong prior that they didn't,
00:33:03
Speaker
When you see this evidence of photos coming from the moon, you can explain it in one of two ways. You can explain it either as real photos from the moon, or as people trying to lie to us and trick us, and they've set up a studio backlot in Hollywood, and it's actually, who was it, Stanley...
00:33:20
Speaker
Stanley Kubrick. I just wanted to do a quick Google search. I was quick enough. I was pleased. So what that means is, when you've got these two competing hypotheses and new evidence comes in, it's not as if it's incompatible with your original idea that we didn't go to the moon and that's the end of it.
00:33:43
Speaker
You can either move your confidence towards "we actually really did go," or it can increase your confidence in the hypothesis "people are lying to me." And that's how, when people have got very strong priors in something, very strong beliefs in something, the same evidence that would convince you or me that
00:34:01
Speaker
we did actually go to the moon will convince them that the mainstream media is lying to them. And so I find this really interesting, because it allows you to say this is how people end up with conspiratorial beliefs, or different beliefs from the same evidence, without saying these people are strange and weird and somehow wrong and bad. You know, you can just say, well, they had different beliefs to start with, and from that different starting belief, they are actually updating rationally with the new evidence. And I think that's a more helpful way to look at it.
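
(A minimal sketch of this point: the same photo barely moves someone whose prior on "they are lying to me" is already high, because a conspiracy capable of faking the landings would also be expected to produce photos. The likelihood values below are invented for illustration.)

```python
# Same evidence, different priors. Likelihood values are made up for illustration.
def posterior_hoax(prior_hoax, p_photo_if_real=0.99, p_photo_if_hoax=0.90):
    # A conspiracy capable of faking the landings would also be expected to publish
    # photos, so a photo is only weak evidence against the hoax hypothesis.
    p_photo = p_photo_if_real * (1 - prior_hoax) + p_photo_if_hoax * prior_hoax
    return p_photo_if_hoax * prior_hoax / p_photo

print(posterior_hoax(0.01))  # someone who doubts the conspiracy: ~0.009, barely moves
print(posterior_hoax(0.95))  # a committed believer: ~0.945, also barely moves
```
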
00:34:31
Speaker
I don't know. Otherwise, you're forced to say people who believe the earth is flat, or that... Those are the classics, aren't they? Is it flat, and did we go to the moon? Yeah. Or vaccines cause autism, or whatever. Or all the Pizzagate stuff, that the United States is run by a cabal out of a pizza restaurant in Washington.
00:34:57
Speaker
These things, to be clear, I'm not convinced about myself. I don't think they're necessarily completely true. Anyway, yeah, so... Here's the big one. Here's the big one for the US: we have the election coming up, right? Yeah. Who's the better candidate?
00:35:12
Speaker
Well, God, I'm not going to get into that, yeah. But yeah, I do have opinions. But that's exactly it. When a politician gives a speech... There was a classic bit of research showing that people will have different opinions about a speech
00:35:41
Speaker
when they learn that it is by Hillary Clinton or Barack Obama or Donald Trump or George Bush, whatever. And people got annoyed about that, and they said, well, you know, this just shows that people are irrational and they base their policy opinions on who says things rather than on the policies themselves. But actually, it's perfectly rational. I trust
00:36:01
Speaker
some politicians less than other politicians, because of my priors. And that's a purely Bayesian thing, right? I have my prior beliefs about stuff. And so it makes sense. Likewise, if Einstein came and told me that time is relative to space and all this sort of stuff: okay, fine, I trust you, you know what you're talking about. But if, sort of, like,
00:36:25
Speaker
John P. Nobody off the street comes and tells me that, I would be a bit more sort of, I don't know what you're talking about. But obviously we do know that Einstein was right. But you know, there is such a thing as trusting people and having priors on their trustworthiness. And that makes perfect sense.
00:36:44
Speaker
The biggest thing that I have to say is that you can't trust somebody who has been a convicted felon. Anybody that knows anything about US politics knows a ham sandwich is better than both candidates. It's going to be a tough year, isn't it? It's going to be a tough year. On top of that, have you heard of someone named Vermin Supreme? I don't think so, no.
00:37:13
Speaker
Oh, lovely. Yeah, makes sense. He wears a boot upside down on his head. We have guys like that here, yep. Okay, it gets better.

Unpredictability in Politics

00:37:22
Speaker
Half of his stuff, you'll get a good giggle out of this one, half of his stuff makes more sense in the current political state of the US. Yeah.
00:37:32
Speaker
He believes that there should be a miniature pony that is an emotional support animal for every household. There are guys like that in the UK. The Monster Raving Loony Party puts up candidates at most elections, and there's a guy called Count Binface who turns up with a garbage can over his head.
00:37:55
Speaker
Sometimes it's quite funny. He comes up with some quite good stuff. I think he got more votes than the Conservative Party candidate in one election. I think the best one here is someone currently running as an independent, an independent politician. They legally changed their name to Literally Anybody Else. That's good. I like that. I like that. That was funny. It was good.
00:38:22
Speaker
Yeah, that gets a laugh. I like that. That's nice. When seeing that. So, you know, every single prediction, we do need a laugh. Yes. Especially as mathematicians, I'm going to bring it back to something that you do have in the book. I'll call it a little Easter egg. Yeah. Mathematics conferences always get interesting.
00:38:47
Speaker
Yes. Yes, they do. What is going on with the University of Minnesota?

Bayesian Conferences and Community

00:38:56
Speaker
Okay, so yeah, I don't know why it's hosted on the servers at the University of Minnesota, but there were these amazing... I will try... there were these amazing conferences. Yes, so basically, firstly, right, Bayesian statistics, as we've established, are the only way of saying this is how likely my hypothesis is to be true,
00:39:19
Speaker
given the evidence. But in the early 20th century, scientific statistics moved away from that. It went towards what we now call frequentist statistics, which is the exact opposite: how likely am I to see this result given a hypothesis? Which,
00:39:35
Speaker
you know, has pros and cons. But a lot of people didn't like it, and they were sort of quietly using Bayesian stuff in the background, and in the 70s it started to come out a bit more into the open, and people started having these conferences, particularly in and around Valencia,
00:39:58
Speaker
which honestly sounded pretty crazy. They were the first sort of real worldwide Bayesian meetings. In the daytime, because it's very hot, they'd do their work in the morning, have a siesta, do some more work from about six to ten in the evening, and then they would have big parties where they all sang. It was great. I won't try and sing it, because I'm not a natural singing-voice sort of person. Excuse me.
00:40:26
Speaker
But there was a song called "Thomas Bayes' Army," to the tune of the Battle Hymn of the Republic: "Mine eyes have seen the glory of the Reverend Thomas Bayes, he's stamping out frequentists and their incoherent ways," and all sorts of stuff. It goes on for ages. And they had "Bayesians in the Night" to the tune of "Strangers in the Night," and "Like a Bayesian" to the tune of "Like a Virgin." There was the one, the Full Monte Carlo, which had a bunch of middle-aged statistics professors taking off their clothes
00:40:57
Speaker
on stage in front of a screaming crowd. It sounds like a lot of fun, honestly. That actually sounds like a normal math conference. Really? Oh, that's lovely. Oh, that's good. So it's not too far from the truth at least some of the dinner parties. I don't know about some of the clothes being taken off.
00:41:18
Speaker
No, that might be more than we need. But yeah, the fact that you told me other maths conferences also do it... There was one guy who said that he had a sweatshirt saying "Bayesians have more fun." But maybe that's not true. Maybe all mathematicians have loads of fun. I don't know.
00:41:35
Speaker
All I can hear is someone singing, glory, glory, probability. Yeah, I was tempted to break into song. I just thought it's probably not best. It's not for the best. But yeah, it does sound like it was a load of fun. They also, apparently, one of the Bayesian statisticians who was there says that they all took a boat out and went for a swim off the coast.
00:42:01
Speaker
the wind got up, the boat blew away, and they were stuck in the water, and he says there was a decent chance that half of the top Bayesian statistics professors in the world would all have drowned in one moment. They were rescued, it was fine, but that would have set back Bayesian statistics quite badly if it had happened.
00:42:19
Speaker
Um, I think out of all of the songs that I've heard so far, my favorite is "There's no theorem like Bayes' theorem, there's no theorem I know." It's great, isn't it?
00:42:34
Speaker
What? Yeah, I mean, these are all, I mean, they're nerds letting their hair down, aren't they? It's great. I absolutely love it. Yeah, that was at the sort of when Bayesianism was starting to come back into Vogue, because I think, you know,
00:42:53
Speaker
A lot of people had independently rediscovered it over the decades before, but, you know, Ronald Fisher and people had sort of tried to stamp out Bayesianism in the 1920s, pretty much. But people like Alan Turing independently rediscovered it, because you need it: you cannot work out how likely something is without Bayes. And people want to know how likely stuff is, you know, when you're doing code breaking, or when you're using it for artillery, working out artillery or mechanics. It was reinvented several times in Silicon Valley. Absolutely.
00:43:23
Speaker
Yeah. And so you just need it. You need it. You need it from like stock market projections to weather forecasting and even self-driving cars.

Resurgence of Bayesian Methods in Modern Science

00:43:34
Speaker
Yeah, absolutely. Anything that is making a prediction, making predictions under uncertainty, you need Bayes. And whether that's the human brain or a self-driving car or Google Maps or code breakers, anything like that, you cannot do it. Artificial intelligence in general, you just cannot do it without Bayes.
00:43:56
Speaker
in the mid-to-late 20th century, it came back in and people realized that it's important. I mean, it is still not standard in science. Most scientific research is still done using frequentist models. So that is things like p-values: when we declare something statistically significant, that's a frequentist model. That is saying the result that we found would be unlikely to happen by chance, so that under the hypothesis that there is no result here, that there is no effect here,
00:44:23
Speaker
we would only see it one time in twenty. Which is the opposite of how likely my hypothesis is to be true given the result. But nonetheless, Bayesian systems are getting more common. You know, I was speaking to a pharmaceutical...
00:44:40
Speaker
a statistician who worked in the pharma industry for a long time, who was saying, again, you just need this to be able to say, I think this drug is 75% likely to work, or whatever. You cannot make statements like that without using Bayes. And I think it's great that it's coming back more into the mainstream, and obviously it explains a thousand other things as well.
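
(A small sketch of the contrast described here, with invented trial numbers: a p-value answers "how likely is data this extreme if there is no effect?", while a Bayesian posterior answers "how likely is it that the drug works?" and needs both a prior and a model of what "works" means.)

```python
# Frequentist vs Bayesian summaries of the same toy trial. All numbers are invented.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, k = 20, 15  # 15 of 20 patients improve

# Frequentist: P(at least 15 improve | improvement is a 50/50 coin flip)
p_value = sum(binom_pmf(i, n, 0.5) for i in range(k, n + 1))

# Bayesian, with two point hypotheses: "works" modeled as a 75% improvement rate
prior_works = 0.5
p_data_if_works = binom_pmf(k, n, 0.75)
p_data_if_null = binom_pmf(k, n, 0.5)
posterior_works = (p_data_if_works * prior_works) / (
    p_data_if_works * prior_works + p_data_if_null * (1 - prior_works)
)

print(round(p_value, 3), round(posterior_works, 3))  # ~0.021 and ~0.93
```
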
00:44:58
Speaker
Yes. Now, out of curiosity, what is something that you want people to know about the book that you want to drive home to the audience, to anyone who's listening about this?

Applying Bayesian Thinking in Daily Life

00:45:11
Speaker
Sure. Okay. Well, I mean, probably... I tell you what, my sort of takeaway from a life of using it, in my own life sort of thing, has been,
00:45:27
Speaker
that, as we've talked about, intelligence is about prediction. Everything that humans do, everything that AI does, everything is about predicting, and that's Bayesian priors and so on.
00:45:39
Speaker
what's really crucial. So the two things that fall out of that right are, firstly, you don't have to say, I think this thing is going to happen, or I think this thing is true. You can say, I'm 80% sure that this thing happens. And then you can update up and down with more information. And it means you don't have to sort of doggedly defend some position. I definitely think
00:45:59
Speaker
politician X is good or whatever, you say, I have a strong probability that this person is going to do good things if they get into office. And then when I get new information, oh, God, they're not as good as I thought. I can bring it down a bit rather than just saying, and now I abandon that belief. I don't have to sort of have yes or no answers to everything. I can have probability estimates on stuff. Also, I think it's really, we spend an awful lot of time
00:46:21
Speaker
arguing, especially on the internet, because everyone argues on the internet, and God knows I do as well. Is this thing... yes, exactly... is this thing, you know, is this, I don't know, is it woke, is it racist, is it eugenics, does cancel culture exist? All these names and things that we put on stuff, right? Of course it does. Well... but yeah.
00:46:46
Speaker
When we have these arguments, quite often people don't actually disagree about any of the facts. With the cancel culture example, no one's really arguing that people haven't lost their jobs because of things they've said. They're just trying to say whether we should call that phenomenon cancel culture. And the thing about what Bayes tells you is,
00:47:07
Speaker
beliefs are predictions. If I call it cancel culture, or if I don't, does it change any predictions I make about the world? If it doesn't... well, maybe... I don't think it does. So maybe we can just say it doesn't matter what we call it. I predict that if you said these certain things on a podcast, then you would probably be at risk of losing your job, which is why I'm so careful about the things I say on podcasts. And you can choose to call that cancel culture or not. But the point is, beliefs should be predictions about the world. Beliefs that you hold should
00:47:38
Speaker
imply some prediction that then can be falsified or not. It can come through or not and therefore can adjust those probabilities down the line. If they don't have any, if there's no sort of prediction attached to belief, then kind of what's the point in it? You know, call it cancel culture, call it not, whatever, that doesn't matter.
00:47:56
Speaker
You're just, we're just arguing about a word rather than about any sort of facts about the world. And I think that is the biggest takeaway for me is that you should, that beliefs should have probabilities and predictions attached. And if they don't, they're kind of meaningless. Wonderful. Cool. And anything else that you would like to add?
00:48:20
Speaker
I mean, only that you should all go and buy my book, obviously. Um, I think that's probably the crucial thing. I thought it was very entertaining. That's brilliant. I mean, that's what I go for. It is not always easy to write books about maths that are funny. I've tried to do it, and I think sometimes I've succeeded. I think it was actually quite brilliant. I was reading through it and I'm like... I try to do a book in a sitting. When I sit down, I do a book at a desk. Oh, wow. That's very impressive.
00:48:49
Speaker
So usually if I could enjoy it in an afternoon, a couple hundred pages in an afternoon, you have done well. Brilliant. Well, I'm honestly, I'm thrilled to hear that. I'm absolutely thrilled. That's really lovely. Yes. So, and in that case, I thank you for coming on the show. It's been a pleasure. It really has been an absolute pleasure. Cool. Well, thank you, Autumn. Um, yeah. And, uh, if I could just reiterate my point about everyone should definitely buy the book, that'll be great.