
In the United States, the Fourth of July is celebrated as a national holiday commemorating the war that ended England's colonial influence over the American colonies. To that end, we are here to talk about war, and how it has been influenced by mathematics and mathematicians. The brutality of war and the ingenuity of war seem to stand at stark odds with one another, as one begets temporary chaos and the other represents lasting accomplishment in the sciences. Leonardo da Vinci, one of the greatest Western minds, thought war was an illness, but worked on war machines. Feynman and von Neumann held similar views, as have many over time; part of being human is being intrigued and disgusted by war, which is something we have to be aware of as a species. So what is warfare? What have we learned from refining its practice? And why do we find it necessary?


Transcript

Mathematics' Influence on War

00:00:00
Speaker
In the United States, the 4th of July is celebrated as a national holiday, where the focus of that holiday is the war that had the end effect of ending England's colonial influence over the American colonies. To that end, we are here to talk about war and how it has been influenced by mathematicians and mathematics.
00:00:15
Speaker
The brutality of war and the ingenuity of war seem to stand at stark odds with one another, as one begets temporary chaos, and the other represents lasting accomplishment in the sciences. Leonardo da Vinci, one of the greatest Western minds, thought war was an illness, but worked on war machines. Feynman and von Neumann held similar views, as have many over time; part of being human is being intrigued and disgusted by war, which is something we have to be aware of as a species.
00:00:40
Speaker
So what is warfare? What have we learned from refining its practice? And why do we find it necessary? All of this and more on this episode of Breaking Math.

Introduction to 'War' Episode

00:00:49
Speaker
Episode 29, War.
00:00:56
Speaker
I'm Jonathan. And I'm Gabriel. And today we have on Carlo, a friend of Gabriel's who has a special interest in military history. Carlo. Hey, how you guys doing? We're doing great. And yourself? I'm doing well. Glad to be here. Thanks for having me. Absolutely. I'd like to thank you for coming on this episode. It's great having somebody with an interest in military history who can advise us and talk about this. It's a challenge to always apply mathematics to the humanities, but this is one of our efforts.

Technological Advances in Warfare: Humane or Not?

00:01:22
Speaker
Obviously, war is a horrible part of humanity, but it's an ever-present part as you open up a history book, and there's a lot to say about war and engineering and mathematics, so I think this will be a very interesting episode.
00:01:35
Speaker
Absolutely, and I'll also say that something that has been theorized throughout the ages is that the most humane form of war is the war that is most effective and completed most quickly. And so the more technological advancement you have, the more that allows you to prosecute
00:01:58
Speaker
a war quickly and effectively, and perhaps bring it to an end sooner. Interesting way of looking at it. My brother is very, very much a theologian. He has a huge interest in Catholic theology and philosophy as a whole. He actually talks about a just war doctrine. So it'd be interesting to find out from him, maybe at a later date, more specifically how that applies.
00:02:19
Speaker
Now, one thing that's interesting about war is that a lot of stuff that's going on in the news is broad discussions on the rules of war.

The Mathematics of Conflict

00:02:31
Speaker
And there's sort of a continuum that you could kind of see that relates, I think, a little bit just as an example of the kind of thing we're going to be talking about on this episode, information theory.
00:02:41
Speaker
where at the one side of the continuum you have arguments, something that you might have every day with somebody, and then you get into things like lawsuits and legal battles, and then you have war, and then past war you have atrocities. It's interesting that how far you slide on the scale determines how much conflict you're in.
00:03:01
Speaker
We're going to go a little bit more into that, but here's a little bit of a preview. We're going to talk real quick about how Aztec warfare worked in certain places, especially between different tribes that were related to one another. They would play a deadly ballgame where they would execute the losers and give all the clothing and jewelry of the attendants
00:03:24
Speaker
from one to the other, so that's a less extreme version, but it still involves human death. Wow, that's wild. So presumably the Aztecs had the same problems with conflict that we have in this day and age, and presumably there was a time before this game developed, but somehow their societal rules about what they were going to do evolved into this game that's almost like the Hunger Games in

Historical Warfare Methods

00:03:45
Speaker
a way.
00:03:45
Speaker
Yeah, the same thing happened in parts of Europe in the late 18th century where they called it gentlemanly warfare between little kings and princes and things like that. And it was all bloody.
00:04:00
Speaker
I actually read a paper when I was in college. I took an anthropology class, and there was a paper I read about a tribe in, I believe it was New Guinea. You know what? I'm going to have to cite that. I'm going to have to find out where it was. It was definitely near Australia. There's a tribe where they do have war with other tribes. Interestingly enough, when they go to war, the minute there is a single fatality, the war ends. I'm sorry, the battle ends. Somebody calls out the name of whoever was fallen, and the battle ends for that day.
00:04:29
Speaker
Both tribes then stop and clean up and leave, and then the tribe that lost a member plots a form of revenge against the other tribe, and it kind of goes on forever, kind of like West Side Story or how gangs do. I just found it interesting that they can have a war where it's not just complete annihilation; it actually ends with a single death. That's still tragic, and of course it's tragic, but it's
00:04:57
Speaker
I don't know. I just found it interesting to juxtapose against, you know, other battles where it's almost total annihilation.

Ethics and Rules of Engagement

00:05:04
Speaker
And Carlo, what do you know about the Geneva Convention?
00:05:07
Speaker
You know, the Geneva Convention, along with a variety of other, let's say, guidelines, plays a significant role in shaping, you know, what is considered acceptable in warfare. So you have the law of armed conflict, for example, which guides rules of engagement, things of that nature for various, you know, various entities when they're engaged in war. So, you know, there's a,
00:05:38
Speaker
lengthy, ethical discussion that can be had about the balance that needs to be maintained between effectively prosecuting a target and doing so in a way that's ethical and humane.
00:05:54
Speaker
Yeah, and we're going to talk more about the complexity of war, but that sort of is where we're coming from, looking at war as a symptom, really, at least at the current point in time of being human.
00:06:13
Speaker
If war is something that we have to study to understand ourselves, then perhaps studying how we study it is also very important.

Abraham Wald's WWII Contributions

00:06:21
Speaker
And with that, we're going to talk a little bit about Abraham Wald and the mathematics of bullet holes in airplanes during World War II.
00:06:29
Speaker
Yeah, Abraham Wald. He was a gentleman born in the Austro-Hungarian Empire in 1902. He was of Jewish descent, and as was the case with so many people of Jewish descent in the buildup to World War II, he was forced to flee Europe in response to Nazi Germany's growth and the threat they posed to people of Jewish descent. So he comes to the United States
00:06:58
Speaker
And he's a mathematician by training, and he gets heavily involved in statistics. So during World War II, the United States, as part of the war effort, they had various groups that were working to help with the war effort, and one of them was statistically based. It was kind of like a Manhattan Project, if you will, for statistics.
00:07:22
Speaker
And these were war statistics or just general statistics? Well, they would look at a variety of problems, but they were all tied to the general war effort. So there's a multiplicity of different topics they would address. So the army was concerned about the number of aircraft being shot down over Europe on bombing raids.
00:07:44
Speaker
So, they basically presented this statistical group with the question of what do we do to protect these aircraft that are getting shot down? And one of the things they looked at was putting armor on the aircraft. Now, the challenge with putting armor on the aircraft is that it weighs a lot. So, you know, you have the advantage of protecting the aircraft, but you have the disadvantage of consuming more fuel due to the added weight from the armor.
00:08:12
Speaker
So you want somewhere between a balloon and a brick? Yeah, essentially. You want to find that optimum level where you have just enough armor to protect the aircraft, get the aircraft home safely, but not use any more gas than you need to. Because the more gas you use, that decreases your range, because they didn't have mid-air refueling back then, so that decreases the range, it decreases the number of targets you can hit, so on and so forth.
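The trade-off described here, armor that protects versus weight that costs range, can be sketched as a tiny optimization. Every number below (the survival curve, the weight penalty, the target counts) is invented purely for illustration; nothing comes from the actual wartime study.

```python
import math

# Toy model of the armor trade-off. All figures here are invented.
def survival_prob(armor_lb):
    # More armor raises the odds of coming home, with diminishing returns.
    return 1.0 - 0.5 * math.exp(-armor_lb / 800.0)

def targets_reachable(armor_lb):
    # Extra weight burns fuel, shrinking range and the targets within it.
    return max(0.0, 10.0 - armor_lb / 300.0)

def expected_targets(armor_lb):
    # Payoff per sortie: you have to reach targets AND survive the trip.
    return survival_prob(armor_lb) * targets_reachable(armor_lb)

# Grid-search for the optimum "somewhere between a balloon and a brick".
best = max(range(0, 3001, 10), key=expected_targets)
print(f"optimal armor ~ {best} lb, expected targets {expected_targets(best):.2f}")
```

With these made-up curves the optimum lands well away from both extremes: zero armor wastes planes, maximum armor wastes fuel.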
00:08:38
Speaker
So, in essence, they're looking for that optimum amount of armor to put on the aircraft. Now, one of the things they looked at to determine what the optimum level is is where were the bullet holes on these aircraft. So if you're going to put armor on the aircraft, you want to put it in the places that are getting hit.
00:08:57
Speaker
So the Army provided the statistical data to this group of where the bullet holes were on the aircraft that were making it back from these bombing raids. The engines were a little over one bullet hole per square foot. The fuselage was approximately 1.75 bullet holes per square foot. The fuel system was about one and a half. The rest of the plane was about 1.8 bullet holes per square foot.
00:09:23
Speaker
So, you know, the logical intuition is, okay, you're getting the most bullets per square foot in the fuselage and the rest of the plane. And so I guess you rule the rest of the plane out because it's too much, right? Yeah, exactly. So if you have a limited amount of armor you can put on there, the logical intuition is, let's put the armor where we're getting the most bullet holes.
00:09:44
Speaker
And that way we protect the crew, we get the plane home safely, we use a limited amount of armor to get there. Well, Abraham Wald had a totally different approach to this. His intuition was the exact opposite. His intuition was, okay, the lowest number of bullet holes are in the engine.
00:10:05
Speaker
And what he's thinking is that the reason all the aircraft that made it back safely had large numbers of bullet holes elsewhere, and very low numbers of bullet holes in the engine, was that the aircraft that got hit a lot in the engine all went down. And so his intuition was telling him: we need to put the armor around the engines, because the aircraft that were getting hit in the engines didn't make it back. And so this brings up this notion of
00:10:33
Speaker
There's a concept when you're analyzing statistics. Sampling bias, right? Right, right. They refer to it in reference to this problem as survivorship bias, right? And so the idea is when you're looking at statistics, there's a natural assumption that you have a valid sample set that you're looking at.
00:10:54
Speaker
Right? And if you're only taking into account some of, let's say for the purposes of this example, you're only taking into account the aircraft that make it back, your sample set is skewed. You don't have a full sample set. The full sample set would include the aircraft that didn't make it back.
00:11:13
Speaker
Yeah, so you take the minimum number of bullet holes per square foot, and since it's on something so critical, it's what you should protect. Exactly. So his assumption was that if you just protect those engines, that's where you're going to get the most payoff. So they went ahead and went with that approach, and as it turns out, it saved many, many lives, many aircraft.
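Wald's argument is easy to reproduce in a toy simulation. The section areas and per-hit lethality figures below are made up; only the idea, not the data, comes from the episode. Hits land uniformly per square foot, engine hits are assumed disproportionately fatal, and hole density is then tabulated only over the planes that came home.

```python
import random

# Toy simulation of survivorship bias in the WWII bullet-hole problem.
# All areas and lethality figures below are invented for illustration.
SECTIONS = {"engine": 60, "fuselage": 120, "fuel system": 40, "rest": 180}
TOTAL_AREA = sum(SECTIONS.values())

# Hypothetical chance that a single hit in each section downs the plane:
# engine hits are far more dangerous than hits anywhere else.
LETHALITY = {"engine": 0.15, "fuselage": 0.01, "fuel system": 0.05, "rest": 0.005}

def fly_mission(rng, hits=30):
    """Return per-section hit counts if the plane survives, else None."""
    counts = dict.fromkeys(SECTIONS, 0)
    for _ in range(hits):
        # Hits land uniformly per square foot of airframe.
        r = rng.uniform(0, TOTAL_AREA)
        section = "rest"  # fallback for the r == TOTAL_AREA float edge case
        for name, area in SECTIONS.items():
            if r < area:
                section = name
                break
            r -= area
        counts[section] += 1
        if rng.random() < LETHALITY[section]:
            return None  # shot down: this plane never enters the sample
    return counts

rng = random.Random(42)
survivors = [c for c in (fly_mission(rng) for _ in range(10_000)) if c]

# Hole density per square foot, computed ONLY over returning aircraft.
density = {}
for name, area in SECTIONS.items():
    total_holes = sum(c[name] for c in survivors)
    density[name] = total_holes / (area * len(survivors))
    print(f"{name:12s} {density[name]:.3f} holes/sq ft among survivors")
```

Even though fire is uniform per square foot, the engines show the lowest density among survivors: the missing engine hits are on the planes that never returned, which is exactly the skewed sample set described above.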
00:11:38
Speaker
And it ended up being the approach that was used by the Air Force, not only in World War II, but they armored engines during the Korean War and the Vietnam War as well. Yeah, that's a really interesting lesson because I know that
00:11:53
Speaker
like the use of statistics in war has gone up every single year; I mean, the use of statistics just about everywhere has gone up every year since we invented statistics, pretty much. And we're going to talk about another time this was used near the end of this episode, with statistics in World War II, but this is a very interesting example of where engineering has been improved by this incredible amount of stress testing. You have bullets, basically, that you're stress testing against.
00:12:23
Speaker
One of the things that I did want to talk about real briefly is

Understanding Survivorship Bias

00:12:27
Speaker
survivor bias. I think a common example that we often hear about, or maybe read about online, is whenever somebody says, oh, I experienced X when I was younger and I lived and I was fine, so there's no problem with it. Well, that's a perfect example of where, if you experienced it and you're here and you lived to tell the tale, what about all those people who experienced whatever it is? I don't know what we want to say. What's a good example here?
00:12:53
Speaker
I don't know, playing around with mercury. Or lead poisoning in water, what have you. Yeah, I mean, any number of things; you're absolutely right. Yeah. And then to say, and I turned out fine, so it must be okay. So that's a perfect example of survivor bias. And what's interesting is, I think that's something that I would have even fallen for; I would have thought, well, gee, that's compelling. When in fact it's not compelling; you just need to understand that sort of bias.
00:13:17
Speaker
A wonderful book, if you want to know more about biases and how they're used actually by advertisers, is How to Lie with Statistics. It's a book written in the 50s and I would have to look up the writer, but you have Google. You know, an entire episode on manipulation would be fabulous, I think. I'd love that as a topic. Absolutely.
00:13:36
Speaker
You know, it's just a simple notion. You assume that data can steer you in the right direction, and I mean, I'm certainly a large advocate of science and data and where it guides you. But the assumption intrinsic in using data to guide your decision making is that the data you have is valid. So if it's bad data in, the output is going to be bad as well.
00:14:03
Speaker
So the next thing we're going to talk about is how mathematics can kind of improve itself, really: how mathematics can improve the process of making logical decisions so it's quicker. Right. So something that's come up quite a bit recently in, you know,
00:14:22
Speaker
the development of different tools for military purposes, is the idea of decision superiority. So it's kind of similar to air superiority, but the idea is that technology has increasingly reduced the timeline in which decisions are made. Now, what kind of decisions need to be made?
00:14:43
Speaker
So a good example would be, let's say you have a target on the ground that, say it's an anti-aircraft missile battery, and it turns on its radar quickly, and then it turns it back off.
00:14:59
Speaker
So you're going to need to quickly determine which asset is going to be used to hit that target, properly locate the target, and then get munitions down on that target in short order.

Decision-Making in Warfare and AI

00:15:17
Speaker
Right. So supplying decisions, things like that. Yeah. Or, for example, in the military there's, you know, something referred to as ISR: intelligence, surveillance, and reconnaissance.
00:15:32
Speaker
Right, and so ISR produces information to decision makers, and then decision makers need to quickly make decisions based on that information to prosecute the war. And, you know, enemy countries are going to be doing the same thing, and so logic dictates that
00:15:54
Speaker
You might gain an advantage from having better ISR assets, for example, but you could also gain an advantage from having the ability to make decisions based on what the ISR assets tell you quicker, right? Yeah. So it seems like it's both about computational speed and the speed at which you can relay and gather relevant data.
00:16:20
Speaker
That's correct. How quickly can you get the data once you have the data? How quickly can you make relevant decisions based upon it? And so are a lot of these decisions made currently by humans with busy schedules? That's the problem.
00:16:32
Speaker
Well, so you have people making decisions, leadership making decisions based on the data in front of them, but their adversaries will be making those decisions too. And the one who makes the decision first gains a certain advantage. So right now those decisions are made, as you said, by people. And so in the future it's postulated that
00:16:57
Speaker
There will be a distinct advantage that comes from introducing artificial intelligence into the decision-making process and getting to the point where artificial intelligence can recognize data as it comes in, process it, and make the appropriate decision in a much quicker timeline than a person could make that same decision. And so that gives you a distinct advantage over your adversary.
00:17:23
Speaker
I'm reminded a little bit of certain animal traps, where you dig a hole that looks like a hole that was dug by another creature. I don't exactly know how these traps work, but it ends up trapping the animal. I could see an adversary seeing somebody who uses this AI and developing another AI
00:17:45
Speaker
to sort of trap them in a web of decisions and trying to confuse the other person as much as possible by doing bizarre things that take a lot of energy. I've got a very strange little analogy here. I'm reminded of the 1980s movie with Arnold Schwarzenegger Predator. Did you all ever see Predator?
00:18:04
Speaker
I did. Okay, well, Predator has, what is it, like infrared? Yeah, it sees in infrared. And Arnold just tricks it by jumping into cold mud, which makes him sort of invisible for the time being. So that's actually a very interesting point, because as soon as you have new rules or a new means of getting information, then obviously it raises the question of how do you trick that AI?
00:18:27
Speaker
One thing that I wanted to bring up is machine learning is a fascinating topic for me in part because I don't understand it. And I know that in terms of neural networks, there's a lot of folks who don't understand it.
00:18:38
Speaker
And yeah, I mean, you did some machine learning when you did your project. Exactly. Yeah. My evolutionary algorithm. By the way, I'm going to say this here on the podcast: I want to make that algorithm, or a version of it, available to the public on GitHub, 'cause I'd love to get some input from our listeners, or I'd love for them to take it and run with it, 'cause it's a cool project.
00:19:01
Speaker
And we're going to do that: go to github.com slash breaking math podcast. Awesome. Yay, that's the thing I have to do now. Now, the reason why I wanted to bring up machine learning is, and I tread lightly when I say this, but I want to be appropriately skeptical. I know that, well, I have a brother-in-law who's very interested in machine learning, and he works for the government doing something, I'm not actually privy to what, but he talks about how
00:19:29
Speaker
There's some versions of machine learning that essentially are just a complex lookup table. I mean, is artificial intelligence and a neural network more than just a lookup table, you know?
00:19:40
Speaker
Well, you know, interestingly, I think what artificial intelligence is today will in no way reflect what it will be in 10 or 20 or 30 years. So what artificial intelligence is capable of doing today is probably going to be quite a bit different from what we use it for two or three decades down the road.
00:20:00
Speaker
Just to give you an example, within the last two years, massive improvements to artificial neural networks have been made by giving them the concept of focus. Just giving them a little bit of a memory to say, write down whatever you want here, but remember what you wrote down, and they work better. Just tiny little improvements like that that are incremental have made almost revolutionary advancements. So I see where you're coming from. Fascinating.

The Future of AI in Warfare

00:20:25
Speaker
That's fascinating.
00:20:25
Speaker
You know, I've read a bit recently about AI, and I don't know if you guys are familiar with the Turing test. Oh, absolutely. Absolutely. So, for those who aren't aware, the Turing test is a test of a computer with a program on it. The program is trying to simulate a human intelligence, and the computer is said to pass the Turing test
00:20:50
Speaker
if a human talks to it and generally thinks that it's another human. Right, right. And so I've read from a wide range of experts in the field, you know, folks at the AI lab out at MIT, various other folks, because I've always been kind of fascinated with where AI is at and how it's progressing.
00:21:16
Speaker
And what I seem to have found is that a lot of experts in the field believe that AI will be able to pass the Turing test by approximately 2030. And then, this prediction varies, but I've read from a lot of prominent people in the field that you could see the singularity happen as early as 2045.
00:21:41
Speaker
And so, I mean, when you think about that, we're approximately a quarter century away from AI being vastly superior to human beings in terms of its capacity for thinking, so on and so forth. Hopefully empathy.
00:21:57
Speaker
Yeah, well, hopefully. Right. And so, you know, what AI will be capable of doing with respect to warfare in 25 years, I think, is going to be very advanced. I can almost see AI getting so advanced that it prevents wars. But that's what people thought about the Internet. So.
00:22:17
Speaker
Yeah, well, I mean, we'll see. It seems like, you know, whenever you have human beings involved, conflict resolution is not our strong suit. Something that a lot of people have talked about with reference to AI and warfare that's a little concerning is the notion of autonomous weapons.
00:22:40
Speaker
This episode is all about how mathematics has been used by humankind in developing national defense strategies. One of the topics we discuss is modern and future areas of research that many nations are interested in, especially for national defense. These include, among others, artificial intelligence and machine learning. To that end, our partner Brilliant.org has a course all about artificial neural networks and how they can be used to emulate the type of learning and strategizing that occurs in the human brain.
00:23:06
Speaker
I love how the course takes you through the main categories of machine learning, including the universal approximation theorem, regression, and unsupervised learning. Machine learning is at the cutting edge of research in many industries, including defense, and this course provides a solid foundation in the essentials of this field.
00:23:22
Speaker
To support your education in math and physics, go to www.brilliant.org slash breakingmath and sign up for free. The first 200 breaking math listeners can get 20% off the annual subscription, which we have been using. And now back to the episode.
00:23:38
Speaker
And that's something that I think as AI continues to develop, that's something we could see in the future. And so the idea would be that you have weapon systems that not only are capable of functioning on their own, but the decision-making apparatus is entirely carried out by the AI as well. So there's no human in the loop whatsoever.
00:24:01
Speaker
So, as you can imagine, that would lead to a really heightened level of efficiency and probably reduced error making, but do you want machines making decisions on which targets you strike, so on and so forth? Well, the human still in the loop is the one who gave the orders, right?
00:24:19
Speaker
Well, so the idea would be that you would eventually get to the point where the AI is completely autonomous. They're out there operating on their own. So where's the human link? All Congress? I mean, in essence, the human link would be you give the AI instructions and then you let it go.
00:24:38
Speaker
Oh yeah, I'm just thinking who gives the last human instruction. I would imagine, I mean this is just speculation, but I would imagine if these are used in warfare that last instruction would be given by some part of the military apparatus, but I mean this is purely speculative on my part. And with the last instruction they had no idea what they had done.
00:24:59
Speaker
I know, I know. That definitely sounds like the opening to a Terminator with Skynet. That really, really does. And I guess the reason why I brought up what I brought up earlier is I was thinking in essence. So Jonathan, I hope you don't mind me saying you have a brand new gorgeous computer with a very high definition monitor, right? Yes. So you can have a beautiful picture that looks almost exactly like I'm staring out a window. It's so good. Now, if I zoom in and zoom in and zoom in, ultimately it's still pixels.
00:25:23
Speaker
And the reason why I bring up this comment that I just did is I'm a little skeptical of AI or true artificial intelligence because one thing that I learned in all of my classes in computer engineering is that computers are dumb. They just do what you tell them to do. And that's why I brought up the comment earlier about a lookup table. Like ultimately what it boils down to is a set of algorithms and a lookup table. Yes, it gets more advanced and yes, you have more if loops
00:25:50
Speaker
But it's still dumb. And this was more the philosophical question, which relates to the Turing test of even if you have a super advanced computer or AI that still has to the nth degree of lookup tables and if loops and that sort of thing, is it still a dumb computer?
00:26:13
Speaker
Well, I think that you're running into a paradox similar to the Greek paradox of the notion of a group of things, like a pile of rocks. The paradox, which we've heard on here before, says that if you have a pile of rocks and you take away one rock, it's still a pile. But if you do that enough times, you get one rock. So one rock is equal to a pile of rocks, which is a paradox because obviously it isn't.
00:26:34
Speaker
So I kind of feel like it's the same way with this, where, yeah, you might start with if loops, but if you add more and more, there is a point where you have to look outside the system and see what you've wrought. And, you know, not to mention, if you assume, you know, it depends on how you're defining the singularity, but if you get to the point where AI are vastly superior intellectually to the human brain, I mean, couldn't you get to the point where, I mean, we're defining the way this computer functions
00:27:04
Speaker
right now in the way you just described. But isn't it possible that AI could advance quite a bit beyond that limited capacity?
00:27:13
Speaker
And I guess to answer that question one other way, this is actually a little disconcerting, a little uncomfortable way of looking at it is you and I, us three here, are we not already just making decisions based on inputs? Like are we already to a degree doing that? So one answer to that is the approach to the singularity, perhaps it'll be very seamless because we've already started augmenting ourselves the minute the first person used a cane or a pair of glasses or a telescope. Those are already transhumanist things.
00:27:41
Speaker
Have any of you guys seen the Google Assistant that can call on your behalf? Wow, no, I've not. Yeah, if you are listening to this, just Google it. But essentially what it does is it can call a restaurant for you and make reservations and do things like that on your behalf. And it's technology that, like they said, has only existed for a few weeks. Interesting.
00:28:06
Speaker
I saw on my Facebook feed, actually, one example of an error in artificial intelligence, specifically as it relates to autonomous vehicles. I don't know if you guys all saw this one, but it was an autonomous vehicle that was looking at a bus in front of it. The bus had an advertisement that just had a bunch of bicyclists bicycling. It identified the bus as bicyclists because they happened to be the right size.
00:28:31
Speaker
Yeah, so I don't know, it's just an example of how, you know, we're always approximating human behavior. But yeah, I mean, there's nothing scarier than humans.
00:28:48
Speaker
Now there's of course the much less pleasant parts of war, even as unpleasant as these have been, such as Sean Gourley's research on the number of people dead in an attack versus the frequency of that type of attack with that number of people dead.
00:29:06
Speaker
And it turns out that when you plot them on a log-log graph, you get a straight line, meaning that it's a power law. A power law is like, let's say every time you double the number of people killed, the frequency of attacks of that size drops by the same factor. You would get a curve that looks really sharp at the beginning, but then approaches zero. Interesting. So I think that with that graph, there's a measure of predictability, right?
00:29:32
Speaker
Yeah, well, what's interesting about it is that Sean Gourley found that in conflicts that don't have an end, the slope of this line is negative 2.5, and in stuff that does end, it's farther away. And the war that he analyzed specifically in regards to this was the Iraq War, and he found that it had been, at the time of his research, 2.5 for years.
00:29:55
Speaker
Interesting. Now, actually, I think I watched a TED talk with this guy, and he's a physicist. And I think he also analyzed not just the Iraq war, but several other wars. Oh, yeah. And that's how he got the data of how it wasn't 2.5. Yeah. So how would you put in layman's terms, essentially, his findings? That the number of times an attack will happen with a certain number of people dead is predictable.
00:30:22
Speaker
Okay. Interesting. Interesting. And then the mitigating factors, as you said, were if it's an endless war or there doesn't have a clear end versus one with an end. And then what was the prediction? Like increases with time or? No, it's that if it's at 2.5, then it won't have a clear end until it dips below or goes above.
00:30:40
Speaker
Okay. Okay. But if it doesn't, then it'll just continue. Interesting. Definitely. So that could provide a tool for policy makers, at least in terms of some metric for how a conflict is going then. Not just how it's going, but how to maybe end it sooner. Interesting. Wow. Wow. So again, this is an example of people using mathematics to make sense of things and hopefully have a resolution to it.
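The straight line on log-log axes described here can be sketched with a least-squares fit on the logs. The data below is synthetic, generated from a power law with exponent 2.5 to match the figure quoted in the episode; it is not real conflict data, only a demonstration of the fitting technique.

```python
import math
import random

# Synthetic sketch of the attack-severity power law discussed above.
# ALPHA = 2.5 matches the exponent quoted in the episode; the data is
# generated here, not taken from any real conflict.
ALPHA = 2.5
rng = random.Random(0)

def sample_size():
    # Inverse-transform sample from a Pareto tail with x_min = 1, then
    # round down to get a whole number of fatalities.
    u = 1.0 - rng.random()  # uniform in (0, 1]
    return int(u ** (-1.0 / (ALPHA - 1.0)))

sizes = [sample_size() for _ in range(100_000)]

# Tally how often each attack size occurs; keep well-populated sizes.
freq = {}
for s in sizes:
    freq[s] = freq.get(s, 0) + 1
points = [(math.log(s), math.log(n)) for s, n in freq.items() if n >= 10]

# Least-squares slope on log-log axes. A straight line here is the
# signature of a power law, not an exponential.
n = len(points)
mean_x = sum(x for x, _ in points) / n
mean_y = sum(y for _, y in points) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in points) / sum(
    (x - mean_x) ** 2 for x, _ in points
)
print(f"fitted log-log slope: {slope:.2f}")
```

The fitted slope comes out in the neighborhood of the generating exponent (a bit shallower at small sizes because of the rounding to whole fatalities); on real data, this slope relative to 2.5 is the quantity Gourley tracked.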
00:31:09
Speaker
And the one thing I just wanted to talk about is that mathematics can be, and is, used a lot in war, but sometimes the way it is used can be short-sighted, like in World War II. I guess they had to minimize, oh, they didn't have to, but I mean, they tried to maximize civilian casualties in certain places, because all this stuff was done in civilian places; all the factories, everything like that, was
00:31:34
Speaker
in just normal towns and cities. And so carpet bombing raids were analyzed mathematically by statisticians to produce the largest number of dead and the most disabled factories per raid. And it's one of the most grim uses of mathematics, aside from maybe the atomic bomb and the work of John von Neumann, that I know about.
00:31:56
Speaker
And, you know, ultimately, with technology advancing to the point where we now have precision-guided munitions, you can still prosecute those same targets with much less collateral damage.
00:32:12
Speaker
And so there are all sorts of interesting conversations that can be had about the advantages of technology. So, you know, making more lethal and precise weaponry by definition means that, I mean, you're able to do far more destruction. But in theory, it also allows you to do it in a much more surgical way so as to limit collateral damage.
00:32:37
Speaker
And of course another aspect of that is the cost of the war, because I know that barrel bombs are used in places where they can't afford other bombs, and they're just the worst. Absolutely. Absolutely. Barrel bombs just being a barrel filled with munitions and shreds of stuff, dropped on, I think in Iraq, or no, was it Iraq? They use them a lot in Syria. In Syria they use them all the time.
00:33:02
Speaker
And are those, with respect to international protocol, what's the stance on barrel bombs? The way that they're using them, it's illegal. Yeah, as I understand it, it's illegal. Yeah. Okay. Okay. So we're talking about not, we're talking about. The Syrian government. Okay. Yeah. Dropping barrel bombs on, you know, the resistance. And I think certain times in civilian populations. Consistently. Yeah. Dropping it on civilians.
00:33:28
Speaker
War is what happens when things break down in communication, and the way that we conduct that violence has to be assessed both ethically and empirically. It is important that we keep this in mind when we celebrate national holidays and sound the call to war.
00:33:49
Speaker
No, no, I just enjoyed having the opportunity to chat with you guys. It was interesting. Awesome. Yeah, we very much enjoyed having you. We really enjoy having someone like yourself who has a passion for military history, and other folks who we can learn from.