
12: Math Factory (Algorithms)

Breaking Math Podcast
617 plays · 8 years ago
In a universe where everything is representable by information, what does it mean to interact with that world? When you follow a series of steps to accomplish a goal, what you're doing is taking part in a mathematical tradition as old as math itself: algorithms. From time immemorial, we've accelerated the growth of this means of transformation, and whether we're modeling neurons, recognizing faces, designing trusses on a bridge, or coloring a map, we're involving ourselves heavily in a fantastic world, where everything is connected to everything else through a massive network of mathematical factories. So what does it mean to do something? What does it mean for something to end? And what is time relative to these questions?

---

This episode is sponsored by
· Anchor: The easiest way to make a podcast. https://anchor.fm/app

Support this podcast: https://anchor.fm/breakingmathpodcast/support
Transcript

Introduction to Chumba Casino

00:00:00
Speaker
It is Ryan here, and I have a question for you. What do you do when you win? Like, are you a fist-pumper? A woohooer? A hand clapper? A high-fiver? I kind of like the high-five, but if you want to hone in on those winning moves, check out Chumba Casino. At ChumbaCasino.com, choose from hundreds of social casino-style games for your chance to redeem serious cash prizes. There are new game releases weekly, plus free daily bonuses, so don't wait. Start having the most fun ever at ChumbaCasino.com.
00:00:29
Speaker
Judy was boring. Hello. Then Judy discovered ChumbaCasino.com. It's my little escape. Now Judy's the life of the party. Oh baby, mama's bringing home the bacon. Whoa, take it easy Judy.
00:00:44
Speaker
The Chumba life is for everybody, so go to chumbacasino.com and play over a hundred casino-style games. Join today and play for free for your chance to redeem some serious prizes. This episode is distributed under a Creative Commons Attribution Share-alike 4.0 international license. For more information, visit creativecommons.org.

Website Update and Podcasting Tips

00:01:10
Speaker
Somebody stole our website! Oh no, whatever shall we do? I mean, I guess you could go to the new website, http://breakingmathpodcast.app, with no www for all you old timers. So breakingmathpodcast.app? I mean, if you're into that sort of thing.
00:01:35
Speaker
Hey, Breaking Math fans. First, I want to thank you for listening. I have an important message for everyone. You can start your own podcast right now with Anchor. Anchor lets you create and distribute your own podcast. Just get an idea, record, and upload. It's just that easy. Anyone can do it. I'm on my way to accomplishing my dream, and you can too. Just get on your device's app store and download Anchor. It contains everything you need to make a podcast. With Anchor, you can put your podcast on all the big platforms.
00:02:04
Speaker
Apple Podcast, Spotify, Google Podcast, Amazon, and more. Reach the whole world with Anchor. Best of all, Anchor is free. You have nothing to lose with a free platform. Download the Anchor app or go to anchor.fm to get started.

Episode Preview and Host Introduction

00:02:24
Speaker
In a universe where everything is representable by information, what does it mean to interact with that world? When you follow this series of steps to accomplish a goal, what you're doing is taking part in a mathematical tradition as old as math itself, algorithms. From time immemorial, we've accelerated the growth of this means of transformation, and whether we're modeling neurons, recognizing faces, designing trusses on a bridge, or coloring a map,
00:02:48
Speaker
We're involving ourselves heavily in a fantastic world where everything is connected to everything else through a massive network of mathematical factories. So what does it mean to do something? What does it mean for something to end? And what is time relative to these questions? All of this and more on this episode of Breaking Math. Episode 12, Math Factory. I'm Jonathan. And I'm Gabriel.
00:03:17
Speaker
And today on our episode, we have again ... It's Amy again. And we also have ... Leela. And today we're actually going to do some listener mail,

Listener Mail and Audience Connection

00:03:29
Speaker
Gabriel. That's right, that's right. Recently, we've actually been very excited. We've been getting a little bit of mail, mostly to our Gmail account, breakingmathpodcast@gmail.com, but also through our Facebook Messenger.
00:03:41
Speaker
Yes, and the first one is actually from Gabriel's dad. That's right. That's right. Thanks, dad. My dad has been listening to us out in Denver, Colorado, and my dad had this to say. Ooh, it's a brief message here. Been listening to the episode, Evolution and Engineering. Not sure I understand it. I'm sure it's fantastic. Love, dad. Thank you, dad. Thank you very much. That's kind of, I don't know. Do you think our terminologies are sometimes too elitist?
00:04:08
Speaker
Occasionally, you guys get a little, you get excited about the things that you understand really well. And evolution engineering, that was the one where you were talking about your thesis, right? Yeah, that's correct. So it's really hard to have perspective on your own work. Yeah, you get entrenched in this world and then you forget that there's something outside of that world. It's like not being able to see a typo in your own essay.
00:04:29
Speaker
Okay, okay. Isn't there a term for that? What's it called? Expert blind? Oh, yeah, the expert blind spot is kind of a common phrase in education. Okay. Well, I'm very thankful for my dad's support. Thank you, dad. And I appreciate your feedback. We always strive for excellence here. We had one more letter. This one was sent to our email, and this was sent in from a gentleman. And if you want to email us, that's breakingmathpodcast@gmail.com.
00:04:58
Speaker
All of the subjects are interesting, and I would say that my favorite is episode 8. I'm pursuing my bachelor of arts in electrical engineering. I felt myself wishing I was already working on my master's so I could create an organic algorithm as my final. I'm excited for what you guys have coming down the pipeline, and I have been telling everyone I know about your podcast. It's like you made this podcast specifically for me. Keep up the great work. Oh, and the interactive stuff on the website takes it all to another level.
00:05:22
Speaker
Now, full disclosure, I did not plan that. I did not plan to have two correspondences both about that episode, but it was kind of cool how it worked out. And if you do want to see any of the interactive stuff, it's of course at breakingmathpodcast.com. Thank you. Thank you, Blake. Your feedback is very meaningful, and you very much can do organic or genetic algorithms for your master's. There are lots of tutorials online for that. And we are excited that this is a topic that interests you.
00:05:47
Speaker
And now on to the show.

Understanding Algorithms: Basics and Examples

00:05:50
Speaker
So today on this episode, we're going to talk a little bit about what an algorithm is, what asymptotic complexity or time complexity is. We're going to talk a little bit about general observations about algorithms. And we're going to talk about the P equals NP problem.
00:06:06
Speaker
Yeah, and we realize that that's quite a mouthful. Don't run off. We spend a lot of time breaking down those concepts, and we have a lot of great analogies, so we think it'll be quite helpful and quite interesting. So, Gabriel, what is an algorithm to you?
00:06:22
Speaker
Okay, so I've been thinking about this. I think of an algorithm as a set of instructions where you have an input and then you have some kind of an output at the end and there's some predictability, analogies, maybe certain steps of dancing or perhaps a recipe.
00:06:39
Speaker
Yeah. And in the recipe example, the input is all the ingredients and the output is, you know, your souffle. And the actual algorithm would be the steps you take to turn those ingredients into a souffle. Yeah. And there are a lot of components of an algorithm, for example, loops. When you stir, you don't just stir the thing around one time. You do it multiple times. So you stir until completion.
00:07:04
Speaker
There could be a lot of, I guess I'd say, parallels in terms of describing everyday life or your diary using, you know, computer science algorithm notation. And one interesting thing is that, due to a consequence of Gödel's incompleteness theorem, which we won't go into here too much, you can use math to prove that an algorithm works, but you can't use an algorithm that isn't random to prove that math works, even though algorithms and math are inextricably intertwined.
00:07:32
Speaker
That is pretty mind-blowing right there. I feel like there should be an XKCD comic about describing your day-to-day actions as an algorithm in, like, computer language.
00:07:43
Speaker
I love that though. Like, you know, a while loop, you know, like keep stirring, you know, like while consistency does not equal, you know, whatever. If traffic is greater than some acceptable amount. I'm so down. I'm just, I'm fascinated by having diaries that can't be comprehended easily. And that's like a cipher. This is bringing it all together, encryption and algorithms and like information. Okay. Okay. I'm going off on a tangent here, but it's relevant. I find the idea of an algorithmic diary somehow a little depressing.
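The while-loop joke above is a real control structure. Here is a minimal Python sketch of "stir until done"; the batter model (a dict with a smoothness flag and a made-up stir count) is entirely hypothetical:

```python
def stir_batter(batter):
    """Stir until the batter is smooth: a loop whose exit
    condition depends on the data, not on a fixed count."""
    stirs = 0
    while not batter["smooth"]:
        stirs += 1
        # Pretend the batter smooths out after enough stirs.
        if stirs >= batter["stirs_needed"]:
            batter["smooth"] = True
    return stirs

print(stir_batter({"smooth": False, "stirs_needed": 40}))  # 40
```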
00:08:14
Speaker
No. I don't know why. Nerd. Yeah. Okay. So one of the first algorithms was Euclid's algorithm. Amy, do you want to go into that? I mean, it's just finding the greatest common divisor. If you have two numbers, and we're talking whole numbers here, you want the largest number that divides into both of those two numbers. So the process itself is pretty simple, as I recall.
00:08:38
Speaker
We recognize that as one of the oldest because, you know, obviously that's from ancient Greece. Is it currently recognized as the oldest? This is just an out-of-curiosity question.
00:08:48
Speaker
Oh yeah, it's one of the oldest algorithms that we know about, if not the oldest; I'm not totally sure. Oh, actually, the Babylonian square root method is probably a little older. But the algorithm is actually really simple, and you can imagine it like this. Let's say you're doing two and 10. You take a piece of paper that's two units long and one that's 10 units long. Then you fold the 10-unit paper up by the length of the two-unit paper until you get to an edge.
00:09:17
Speaker
you cut off that edge and you can just keep doing that over and over again until you get two pieces of paper that are the same size and that's the greatest common divisor. Interesting, interesting, yeah. You know, I'm doing that in my head as you do it. And again, I know there's a bit of a challenge with talking about something as you visualize it, but I see what you're saying. Yeah, that's cool. Just simple steps.
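The paper-folding procedure is repeated subtraction, which is exactly Euclid's algorithm. A minimal Python sketch, using the faster remainder form rather than folding:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the larger number
    with the remainder of dividing it by the smaller one."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(2, 10))   # 2
print(gcd(48, 18))  # 6
```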
00:09:37
Speaker
Now, also real quick: obviously any sort of shopping transaction, or trade from way back in the day, all those are algorithms. Well, they're related to algorithms, for sure. I mean, an algorithm is a series of steps that you apply to some mathematical construct, which doesn't necessarily mean numbers, but doesn't necessarily mean not numbers. For example, you could use graphs. And a graph, for those unacquainted,
00:10:02
Speaker
is not like a graphing calculator type of graph, but it's like you have a bunch of circles that are connected to each other with lines. That's a graph. You could put that into an algorithm and get something out, like the largest distance between two nodes, something like that.
00:10:16
Speaker
It's something I've always found annoying when I watch TV shows or movies and they use the word algorithm like it's some magical, fancy thing. It's really simple: it's a series of steps that you take. You have a certain something you put in, you take these steps, and you get something out. So yeah, when you house-sit and someone leaves you a list, boom, that's an algorithm.
00:10:37
Speaker
Well, the algorithm is actually what you do to get the list done. So the input is the list and the output is the chores. Okay, okay, cool, cool. That's a little more technical, I like that. So one of the introductions that I had to algorithms in general is in linear algebra. So linear algebra is in its most basic form dealing with matrices. And so if you're not familiar with matrices, it's just a way of dealing with a lot of numbers.
00:11:03
Speaker
at once. So you have an array of numbers. And so a matrix will, instead of having a bunch of different variables, which are the letters that, I know, everyone who took algebra and didn't like it hated having in their math,
00:11:21
Speaker
the matrix takes the letters away and just deals with those numbers, those coefficients. And so there's something called row echelon form. So when you have a matrix, you can reduce it to row echelon form to solve a system of linear equations. Why might a system of linear equations be worth solving?
00:11:40
Speaker
A system of linear equations, okay, so this is interesting. I'd like to break down the answer to this in a few ways. This last week, I had a nephew of mine ask me what a matrix was, and I was suddenly in the position of trying to explain it to him and his little sister. So this is all relevant. So in answering your question about a system of linear equations, first we need to understand a matrix. I basically said, you need to imagine a box.
00:12:07
Speaker
And then finally I use the words, you know, a two by two. You've got a row on the top and a row on the bottom if it's a basic two by two. You've also got two columns. It's funny how we take it for granted, how easy that term is, you know.
00:12:21
Speaker
Yeah. I mean, it's kind of, after you see it, you know what it is. Yeah. But if you've ever used graph paper, imagine you draw a little square on the graph paper, and in each one of the little boxes you write a number. That's a matrix. Yeah. It's a stack of numbers arranged in a square, with as many entries in each row as in each column. It's generally a square, but not always.
00:12:44
Speaker
Yeah, generally when we're dealing with, so we're talking about echelon form, row echelon form, reduced row echelon form, that's actually called an augmented matrix, so it's not a square. You have one more column than you have rows, because that last column is your solution. But that's right, that's right. So going back to a system of linear equations, let's break this down. We know what a matrix is. It's a square or sometimes a rectangle.
00:13:07
Speaker
of numbers. Now, if you arrange the numbers from a bunch of equations, where the, I'm sorry, what's the terminology? Variables. Thank you. Well, the coefficients of the variables. The coefficients are on the left side, and the answers are in that final column.
00:13:24
Speaker
So this is all very difficult to visualize. So I'm going to give you an example of a problem that is solvable using a system of linear equations. I actually have a really simple one, if I may. So say you have a concert, and they know that they sold 2,000 tickets, and they know that they made $5,500, but it's a different cost for kids than it is for adults.
00:13:47
Speaker
They don't know how many adult tickets they sold and how many kids tickets they sold because somebody lost that information, but they need it for accounting purposes, whatever it is, right? Based on that information, when we know how many people attended the concert total, how many tickets were sold total, and how much money they made total, as long as we know how much each ticket costs, we can figure out how many of each ticket they sold.
00:14:12
Speaker
And of course, that's assuming, again, that you know the total number of tickets sold, right? Because you have to have as many equations as you have unknowns, right?
00:14:19
Speaker
Right. So I'm imagining this as: I have 2x equals 16, so x equals 8. No big deal. That's not the point. And then I also have, like, 3x equals 21 or whatever. And then as a matrix, it's a silly 2 by 2, which isn't, like you said, necessarily always what you're going to get, especially when you're using the transforms. But it's going to be 2 and 16, and 3 and 21.
00:14:49
Speaker
Yeah, with the answers in that extra column. So let's say I say that a hot dog and two sodas cost $5, but a hot dog and a soda cost $3. Then you'd have a matrix where the top row is one, two, five, and the bottom row is one, one, three, and you solve that using row echelon form, and you see that a hot dog is $1 and a soda is $2.
00:15:18
Speaker
Beautiful, beautiful. That's outstanding. That's outstanding. Yeah, I think that matrices and matrix operations are actually some of my favorite math. I mean, you know, like when you turn it into row echelon form, if you explain that super briefly, you just operate on the matrix until you've got a lot of zeros, basically. The top-left to bottom-right diagonal can be all ones. That's reduced row echelon. Thank you.
00:15:44
Speaker
Thank you. There's so many things that you can do with matrices, but you can simplify all your operations by doing operations until you've got a lot of zeros. Unfortunately, right now, I recognize that I'm lacking the terminology. In the example that we gave, where the top row is 1, 2, 5, and the bottom row is 1, 1, 3, what you do is you subtract the one from the two and the other one from the other one,
00:16:12
Speaker
and then you subtract the three from the five, and then you're left with zero, one, two. One soda equals two dollars. You do what's called elementary row operations, and so there are things you can do. You can multiply by a scalar, which is just any number that's not zero.
00:16:29
Speaker
I mean, I suppose you could multiply by zero, but that would be pointless. You can add rows together, you can subtract rows, and then those elementary, you can switch rows. Those elementary row operations allow you to manipulate the numbers in such a way that you can reduce it to an echelon form or reduced row echelon form, which automatically, that last column,
00:16:48
Speaker
gives you the solution for each of the unknown variables. There are applets that do that; you can just Google search. It's on graphing calculators. On any TI graphing calculator, or probably any graphing calculator, you can type RREF and it'll simplify any size system that you have the patience to type into it.
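For readers who want to see the elementary row operations in code, here is a bare-bones Gauss-Jordan elimination sketch, solving the hot dog and soda system from above. It assumes a unique solution and does no pivoting, so it is illustrative rather than robust:

```python
def rref_solve(aug):
    """Solve a linear system by reducing an augmented matrix
    (a list of rows) to reduced row echelon form.
    Sketch only: no pivoting, assumes a unique solution."""
    n = len(aug)
    for i in range(n):
        # Scale the pivot row so the pivot entry becomes 1.
        pivot = aug[i][i]
        aug[i] = [x / pivot for x in aug[i]]
        # Subtract multiples of the pivot row to zero out the column.
        for j in range(n):
            if j != i:
                factor = aug[j][i]
                aug[j] = [a - factor * b for a, b in zip(aug[j], aug[i])]
    return [row[-1] for row in aug]

# Hot dog + 2 sodas = $5; hot dog + 1 soda = $3.
print(rref_solve([[1, 2, 5], [1, 1, 3]]))  # [1.0, 2.0]
```

The last column of the reduced matrix is exactly the solution, just as described: a $1 hot dog and a $2 soda.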
00:17:05
Speaker
Which is awesome. With, you know, two variables, it's probably more effort than it's worth to convert it to a matrix and reduce it that way; you could just solve it directly in that time. But if you have three variables, or four, or five, or six, it's really easy to make a mistake. I'm actually solving a problem for my job,
00:17:26
Speaker
where there's about 300 variables. Yeah, see, I'm not doing that by hand. Cool, cool. That's why this modern era of computation is just awesome, because we can manipulate data so quickly and use these matrix operations. That sounds technical. I think, thanks to Hollywood movies, those words are fun. Oh yeah, and this goes back thousands of years, matrix operations. It goes back to the Chinese, actually.
00:17:52
Speaker
Yeah, so for a computer, it's very easy to reduce a matrix down to reduced row echelon form, whereas doing it by hand is very time-consuming and it's easy to make mistakes. So it's a very simple algorithm for a computer to execute very quickly, which I think leads us into some other types of algorithms.

Complex Algorithms and Their Challenges

00:18:11
Speaker
So there's a story in computer science, and it's about this painter. He was hired to paint lines on a road, you know, like the lines that you see when you drive down the road, those. And the first day, he paints 10,000 lines, and they're really impressed with him. They only expected him to paint 1,000 lines. So they give him a bonus, whatever, etc. He comes back the next day,
00:18:32
Speaker
They're a little less impressed with him. He only paints about 1,100 lines. But they're like, that's still good. Third day, he paints about 30 lines. And then they go to him and say, OK, what gives? And he's like, well, don't blame me. The bucket keeps getting further away. OK. Now, so for computer scientists, then, can you explain how that's especially funny for computer scientists?
00:18:52
Speaker
Yeah, because there's this thing that we're going to go into now called asymptotic complexity. Don't run away from that. It's not as scary as it sounds. It's basically how long an algorithm takes. So you could have a better algorithm for painting the road where you just carry the bucket with you. And that would take what's called linear time, but we'll get into all these terms in just a second.
00:19:13
Speaker
So maybe explain, because I wasn't entirely clear on this, the importance of knowing how long an algorithm, so we talk about the timing of an algorithm, right? We talk about linear time and sublinear time and all these things that we're talking about. Why do we care? Let's say we want to do a task and we know that the input is a certain size and we know that the output is a certain size based on the input and that it takes a certain amount of time based on the input.
00:19:39
Speaker
Let's say that we know that for a small problem. Let's say we want to do a much bigger problem.
00:19:44
Speaker
knowing how long it actually takes for the algorithm to run tells us whether or not that's feasible in human time. For example, there are some algorithms that, for an input of size two, run in literally a microsecond, but for an input of size four would take longer than the heat death of the universe, a billion times over. And what I'm talking about is the Ackermann function, but we won't go into that.
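The Ackermann function they mention can be written directly from its recursive definition. Only tiny inputs are practical, since the values (and the recursion) explode:

```python
def ackermann(m, n):
    """Ackermann function: grows faster than any primitive
    recursive function; even small inputs blow up."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```

Already ackermann(4, 2) has nearly 20,000 digits, so don't try inputs much bigger than these.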
00:20:10
Speaker
So the emphasis on whether or not running an algorithm is even feasible. Why is that important? What does that affect? Why wouldn't we just go ahead and try it and then give up at some point?
00:20:20
Speaker
Well, because we want to be able to predict whether we could do it or not, it's much more of an engineering question than it is a math question, because there are algorithms that run in like finite time for infinite sets of data, awesome things that you can do with algorithms that way, where it's just interesting what the asymptotic complexity is, but it's not necessary to know. But what's interesting about it is that there are certain problems that seem like they'd be really easy,
00:20:48
Speaker
but really aren't. So for example, any map that you have, like you take a map off of your wall, assuming you have a map on your wall, you could color in that map with at most four colors. But to figure out if a map is colorable with only three colors is one of those problems where doubling the size of the problem would make it billions or trillions of times more complicated.
00:21:14
Speaker
So real quick, with respect to the map problem, so we say if we look at a map of the United States, you say it requires four colors. Is that because of the tangential states and the borders? You know what I mean? Like you need four colors, so you don't have two states that are the same color right next to each other? A little bit, yeah. It's a little bit more involved. It has to do with graphs, but suffice it to say that you can't color it, or I don't think you can. I'm not sure if anybody's actually calculated that yet, the map of the United States with three colors.
00:21:42
Speaker
I think that makes sense. No, no, no. That really makes sense. Because if you've got bordering states, and again, depending on the shape, I think you're right. The question is, is it dependent on the shape of the states? Because technically, it's possible to have a state with big states around it that may allow for that. I'm not sure. But that's actually a really good point. I don't know. You've got like Colorado surrounded by New Mexico and Nevada. Yeah, it's a difficult problem.
00:22:10
Speaker
Can we edit that out, please? As I recall, the map coloring problem has yet to have a satisfactory proof, right? Oh, the four-color one, yeah. I mean, there's a computer-based proof that did it perfectly, but there's not an elegant Erdősian proof. What's Erdősian? Oh, like, you know, Paul Erdős. Oh, Paul Erdős. My favorite mathematician. Okay, that's right, that's right, okay.
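To make the three-coloring question concrete: a map can be treated as a graph whose nodes are regions and whose edges join bordering regions. Here is a brute-force sketch that checks k-colorability by trying every assignment; the triangle is a toy example, not a real map:

```python
from itertools import product

def is_k_colorable(nodes, edges, k):
    """Brute force: try every assignment of k colors to the nodes
    and accept if no edge joins two same-colored nodes.
    Exponential (k**n assignments), so only for tiny graphs."""
    for coloring in product(range(k), repeat=len(nodes)):
        color = dict(zip(nodes, coloring))
        if all(color[a] != color[b] for a, b in edges):
            return True
    return False

# Three mutually bordering regions need three colors, not two.
triangle = [("A", "B"), ("B", "C"), ("A", "C")]
print(is_k_colorable(["A", "B", "C"], triangle, 3))  # True
print(is_k_colorable(["A", "B", "C"], triangle, 2))  # False
```

This brute force is exactly the kind of exponential blow-up the episode is describing: no known algorithm decides 3-colorability in polynomial time.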
00:22:36
Speaker
But so one problem that's really that kind of illustrates the difference between certain types of complexity is the traveling salesman problem. So if you've ever been on a road trip and you want to plan the fastest route between different places, it's an extremely difficult problem. It takes what's called exponential time, meaning for every city that you add to your road trip, you basically double the amount of steps that it takes to find a possible solution. And that's using the clever method.
00:23:06
Speaker
Interesting. Interesting. Is there a way you can do, break down that analogy a little bit? Yeah, sure. So think about how many different routes there are between different cities. So if you have two cities, there is one route, right? So yeah. So Albuquerque that we are based in and then the next neighboring town, let's just say Santa Fe. So we've got, you know, the freeway is one route.
00:23:26
Speaker
I-25. We're assuming that we already know the fastest route between any two cities. Let's say we add in Los Lunas. Maybe it's faster to go from here to Los Lunas to Santa Fe than it is to go from here to Santa Fe to Los Lunas.
00:23:47
Speaker
But here we have already, we have two different routes. We add in another city, we have six different routes. We add in another city, we have 30 different routes. And it's actually factorial. Oh, okay, got it, got it. Okay, so what we're saying is that there's not- And the next one would be 24 then? Yeah. Yeah. Yeah, I miscalculated that, grossly. Okay.
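The route-counting above is the brute-force view of the traveling salesman problem. A sketch that simply tries every ordering of the cities; the distances between the New Mexico towns mentioned are made up for illustration:

```python
from itertools import permutations

def shortest_tour(dist, cities):
    """Brute-force TSP: check every permutation of the cities.
    (n-1)! tours from a fixed start, hopeless beyond ~12 cities."""
    start, rest = cities[0], cities[1:]
    best = None
    for order in permutations(rest):
        tour = (start,) + order
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

# Hypothetical road distances in miles.
dist = {
    "Albuquerque": {"Santa Fe": 64, "Los Lunas": 24},
    "Santa Fe": {"Albuquerque": 64, "Los Lunas": 85},
    "Los Lunas": {"Albuquerque": 24, "Santa Fe": 85},
}
print(shortest_tour(dist, ["Albuquerque", "Los Lunas", "Santa Fe"]))
```

With these made-up numbers, going through Los Lunas first wins, matching the intuition in the conversation.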
00:24:10
Speaker
I'm sorry, I suppose you said earlier there's not currently a solution to the traveling salesman problem? There is a solution, but not a polynomial-time solution. It's just awful, yeah. It diverges quickly. Sorry. Can you explain, for our listeners, what a polynomial-time solution is and how it's different from other solutions?
00:24:28
Speaker
Yeah, sure. A polynomial-time solution includes what's called linear time, which is the simplest polynomial case, where if you double the input size of your problem, it takes twice as long. There's quadratic time, where if you double the input, it takes two times two as long, so four times as long.
00:24:50
Speaker
An example of that kind of algorithm would be sorting a deck of cards in the worst way possible, picking the smallest card every single time and then putting it in a new stack. Wait, are you telling me there's a better way to do it than that? So, real quick, let's just pretend you had mentioned cards. Let's just pretend that I said to you, Jonathan,
00:25:10
Speaker
I said, oh, you mean that there's a better way to sort cards than going through one by one and finding the next smallest card? I had no idea. Can you tell me about it?
00:25:20
Speaker
And I'd say, certainly, my good man. What you do is you divide the deck of cards into two stacks, you divide each of those into two stacks, and you keep doing that until you have a bunch of pairs. Then you sort each pair, and then you just go back up, merging them back together. And that's what's called n log n time, which means that if you double the size of the input, the time it takes does a little bit more than double, but not much.
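The two card-sorting strategies in this exchange, pick-the-smallest-every-time (quadratic) versus split-and-merge (n log n), can be sketched like this:

```python
def selection_sort(cards):
    """Quadratic: repeatedly pull the smallest remaining card."""
    cards = list(cards)
    out = []
    while cards:
        smallest = min(cards)   # a full scan every pass -> O(n^2)
        cards.remove(smallest)
        out.append(smallest)
    return out

def merge_sort(cards):
    """n log n: split into halves, sort each, merge back together."""
    if len(cards) <= 1:
        return list(cards)
    mid = len(cards) // 2
    left, right = merge_sort(cards[:mid]), merge_sort(cards[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

print(merge_sort([7, 2, 9, 4, 1]))  # [1, 2, 4, 7, 9]
```

Both produce the same sorted deck; the difference only shows as the deck gets large.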
00:25:50
Speaker
Wow, okay, okay. Yeah, so I was aware that there are more efficient ways, but that actually is very helpful. Sorting cards, this is another great analogy for helping non-computer scientists to understand and to grasp why sorting data is an important thing. Yeah, and let's say you already had the cards sorted. To find a certain card in the pile would take a very short amount of time. You just divide it in half and divide it in half again and again and again until you find it.
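Finding a card in an already-sorted deck by repeated halving is binary search; a minimal sketch:

```python
def binary_search(sorted_cards, target):
    """Halve the search range each step: O(log n) comparisons."""
    lo, hi = 0, len(sorted_cards) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_cards[mid] == target:
            return mid
        if sorted_cards[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not in the deck

print(binary_search([1, 2, 4, 7, 9], 7))  # 3
```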
00:26:18
Speaker
Is there an algorithm for the fastest way to mess up a deck of cards? Actually, that's one of those more-difficult-than-you-might-expect problems. And there's this algorithm called bogosort: you shuffle the deck of cards, and then you check if it's sorted, and if it's not, you shuffle it again. I'm going to give an example now of a problem that's solvable in a very short amount of time, and one that's only solvable in a very long amount of time.
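The shuffle-and-check routine described here is bogosort. A sketch, safe only on a handful of cards, since its expected running time grows like n factorial:

```python
import random

def bogosort(cards):
    """Shuffle until sorted. A joke algorithm: expected work is
    on the order of n! shuffles, so keep the input tiny."""
    cards = list(cards)
    while any(a > b for a, b in zip(cards, cards[1:])):
        random.shuffle(cards)
    return cards

print(bogosort([3, 1, 2]))  # [1, 2, 3]
```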
00:26:46
Speaker
Let's say I have a convex polytope. All that is, is something like a gemstone. Gabriel, do you want to talk about what a convex polytope is real quick? Sure, sure, sure. So, as some of us may recall from sixth or seventh grade science, convex is typically the term used to describe a lens that is, I guess we'll say, sticking outward. You know, like if you hold a cereal bowl upward, the bowl itself is not convex.
00:27:13
Speaker
Concave. Concave, thank you. So a cereal bowl is concave, and it's convex if you flip the cereal bowl around. It depends on which side of the lens you look at, obviously. And then polytope: "poly" means many, and "tope," like topography, refers to surfaces. Yeah. So convex sticks out, like a bowl turned upside down; polytope, many surfaces.
00:27:33
Speaker
Yeah, so for example, a dumbbell would not be a convex polytope because it has concave parts, which are like little divots and stuff like that. A ball would be a convex polytope, if you could approximate it with one, but as soon as you have a bowling ball with holes in it, that's no longer convex. What about an overinflated soccer ball?
00:27:57
Speaker
Are they hexagons that are together, that if you inflated the hexagons, that it's convex instead of concave? Oh, that would still not be a convex polytope because the seams go into the ball. Okay, if we had a smooth, over-inflated... Oh yeah, then that would be a convex polytope. There you go. And so let's say we have this convex polytope and we have a light source that's very far away.
00:28:20
Speaker
And we want to find the brightest corner of the polytope. The light source is just far away enough that there's basically a gradient, meaning a smooth change from one shade to another across the surface of the polytope. What we do is we just pick a random corner, and we go along the edges of the polytope until we find the brightest one. And that takes a fairly short amount of time. If there are n surfaces, it takes about n cubed time to solve that.
00:28:51
Speaker
Now let's get an example that's actually very difficult to solve. This takes about two to the n steps. Let's say that we had a grid inside of this polytope, and we wanted to find the grid point that was brightest. So we look at the grid point inside of the polytope, not the stuff on the surface. That takes an insanely long amount of time. And there's no polynomial algorithm that's known that actually solves that.
00:29:20
Speaker
So I just want to point out that it's not necessarily intuitive that n cubed is considerably smaller, I guess, than 2 to the n. That n cubed is n to the third power. If you think about, let's try 10. If I do 10 cubed, that's 1,000. If I do 2 to the 10, do you know what that is?
00:29:39
Speaker
Oh, it's 1,024. So they're about the same there. But let's say we pick 20, then we have 800, I mean, not 800, we have 8,000 versus about a million. You pick 30, you get 27,000 versus about a billion.
00:29:56
Speaker
Yeah. So two to the n gets bigger much more drastically as n increases than n cubed does. Yeah. So, like, let's say you wanted to solve a problem for a polytope with a hundred surfaces on it. It would take about two to the 100th, or about a nonillion, steps, which, if you did a trillion steps a second, would still take far longer than the current age of the universe.
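The arithmetic behind those comparisons is easy to check in a few lines of Python (a quick sketch for readers who want to see the numbers):

```python
# Compare polynomial (n**3) vs. exponential (2**n) growth.
# At n = 10 the two are comparable; by n = 30 they differ by five
# orders of magnitude, and at n = 100 the exponential side is hopeless.
for n in (10, 20, 30, 100):
    print(f"n={n}: n^3 = {n**3}, 2^n = {2**n}")

# At a trillion steps per second, 2**100 steps take tens of
# billions of years:
seconds = 2**100 / 10**12
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.2e} years")
```

Running it shows 2 to the 100th is a 31-digit number, which is where the "nonillion" figure comes from.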
00:30:24
Speaker
Now, again, for those that don't know, that is a long, long time. Oh, yeah. I should hope so. So I would like to point out, when we talk about polynomial versus exponential, there are different versions of this as well. There's linear and logarithmic. So logarithmic is the inverse of exponential: as terribly as exponential explodes when it grows, that's how slowly logarithmic grows.
00:30:53
Speaker
Oh yeah, so if you double the amount of input, you only increase the time by one step. So could you say that one's enthusiasm for a school semester would be a logarithmic thing? Well, I guess it might be exponential decay, actually. Yeah, I feel like that's decay. It definitely is decreasing as time goes on. Exponentially. You could say that the disdain that one has for school is exponential.
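That "double the input, add one step" behavior is exactly how halving-based searches like binary search behave; a rough sketch:

```python
def search_steps(n):
    """Count how many halvings it takes to narrow n sorted items
    down to one -- the logarithmic cost the hosts describe."""
    steps = 0
    while n > 1:
        n //= 2  # each step discards half the remaining items
        steps += 1
    return steps

# Doubling the input size adds exactly one step:
# search_steps(1024) == 10, search_steps(2048) == 11
```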
00:31:21
Speaker
Okay, okay, fair enough. I hope not. No, no, we're all nerds. You know, everyone here loves school. Oh yeah, I love school.
00:31:30
Speaker
I want to be done with my master's. Sorry, I'm ready. I'm ready to be done. I think that's true of everyone pretty much the moment they started. So we talked about the different asymptotic complexities. We have sublinear time, which is essentially logarithmic, which we just talked about. We have linear time, which is just a straight line, where pretty much input and output are directly correlated.
00:31:53
Speaker
And then we have n log n time, polynomial time, exponential time, and we talked a little bit about how exponential is about as bad as it gets, right? Or is there something above that?
00:32:04
Speaker
Oh, there's some stuff that's so much more than that. The stuff of nightmares. Oh, yeah. Like, for example, the Ackermann function. So the Ackermann function is you take in two numbers, m and n. If m is 0, it's n plus 1. If m is greater than 0 and n is equal to 0, then it's the Ackermann function of m minus 1 and 1.
00:32:28
Speaker
And if m is greater than zero and n is greater than zero, then it's the Ackermann function of m minus one and the Ackermann function of m and n minus one. So that was kind of gibberish. What is that in non-m-and-n? So what it is, is a function that you kind of pass back into itself two-thirds of the time, or I guess most of the time, and the other bit of the time you increment it. So it's a really simple function, really, if you look it up online.
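That three-case definition translates directly into code; a plain recursive sketch (no memoization, and it still blows up very fast once m reaches 4):

```python
import sys

# The naive recursion nests deeply even for small inputs.
sys.setrecursionlimit(20000)

def ackermann(m, n):
    """The Ackermann function, straight from its three-case definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Small inputs already grow quickly:
# ackermann(1, 1) == 3, ackermann(3, 2) == 29
# ackermann(4, 2) has about 19,000 digits -- don't try it this way.
```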
00:32:55
Speaker
So I'm thinking, you know, of a math torture chamber, if there was such a thing, would this sort of be like the ultimate torture? This is like mathematical waterboarding. Well, a little bit. Like, for example, the Ackermann function of one and one is three. The Ackermann function of three and two is twenty-nine.
00:33:18
Speaker
But the Ackermann function of four and two is a number with about 19,000 digits. So you could bring this into an interrogation room and probably get some information out of somebody. I'm just saying. It feels reminiscent of a chaotic system, of setting up, like, the Mandelbrot set. Any fractal is a fairly simple construct, but if you pick a number outside of the set, it goes poorly very quickly. A little bit like that, yeah. Yeah, with this. And the thing is, you can only calculate this thing one step at a time.
00:33:48
Speaker
Can I just say, I need to make a disclosure here for our listeners. We're really not trying to scare you away from algorithms. We're trying to make you love algorithms, so I want to make sure that I am meeting that end. We want you to appreciate them. Yes, yes. Algorithms are amazing, and algorithms are how our world operates, and I think they're awesome. I was only teasing you about the torture chamber thing. That is funny.
00:34:13
Speaker
Now one thing that might scare you a little bit about algorithms is the P versus NP problem.

Philosophical Implications of P vs NP

00:34:20
Speaker
So basically the thing is this: there are certain problems that we know of, for example, coloring a graph with three colors, like we mentioned before; the knapsack problem, which is, I give you $19 and 23 cents to spend at a store, find items that add up exactly to that; and integer linear programming, which is the thing with the jewel and the grid inside of the jewel. They can be transformed into each other.
00:34:47
Speaker
So they're equivalent problems. They're basically the same problem told in different ways. If somebody ever discovered a polynomial time algorithm to solve them, that would have huge ramifications for our universe. What it would mean is that any problem that could be verified in polynomial time could be solved in polynomial time, because these problems are NP-complete: any problem whose solution can be verified in polynomial time can be transformed into one of them.
00:35:15
Speaker
And so if you could verify any solution in polynomial time, then that would mean that math would be inherently easy. So quick question before we get too far down this rabbit hole. When we talk about polynomial time, would you say that that's the acceptable limit for running an algorithm? Like, if we discovered that an algorithm takes exponential time, or hyper, hyper exponential time, we wouldn't bother computing it? Yeah, basically. I mean, polynomial time we would.
00:35:45
Speaker
Yeah, like if you're doing, I mean, not if it's like n to the hundredth, but if it's like n cubed, n to the fourth, maybe even n to the fifth, yeah, you could kind of attempt it. If it's exponential, you're only going to do it on very small data sets. So Amy, as I understand it, you researched a little bit about the P versus NP problem. What were your impressions?
00:36:07
Speaker
The main description of P versus NP, as I recall, and correct me if I'm wrong, so we have polynomial and NP, do you remember what that stands for? I think non-deterministic polynomial. That is correct. I remembered it as you said it. The polynomial is the ability to determine if the problem is solvable, is that correct?
00:36:30
Speaker
Yeah, well, to determine if the solution is correct. So if I give you a map and it's three-colored, I could verify very quickly: I'd be like, yep, there's only three colors, and yep, no two adjacent regions share a color. You win the algorithm. Okay, so P is whether or not the solution can be found in polynomial time, whereas NP is whether or not the solution can be verified? Yeah, exactly. Break it down? Yeah, and if P equals NP, that means that you could solve any problem you can check in polynomial time.
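That quick map check is easy to make concrete; a sketch using a hypothetical graph (the names and colors here are illustrative, not from the episode):

```python
def is_valid_coloring(edges, coloring, num_colors=3):
    """Verify a graph coloring in linear time: this is the 'easy to
    check' half of the P vs. NP story. Finding such a coloring is the
    hard half."""
    if any(c >= num_colors for c in coloring.values()):
        return False
    # No two endpoints of an edge may share a color.
    return all(coloring[u] != coloring[v] for u, v in edges)

# Three regions that all touch each other (a triangle):
triangle = [("a", "b"), ("b", "c"), ("a", "c")]
print(is_valid_coloring(triangle, {"a": 0, "b": 1, "c": 2}))  # True
print(is_valid_coloring(triangle, {"a": 0, "b": 0, "c": 2}))  # False
```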
00:36:59
Speaker
Boom, that's what's scary. And the reason why that's scary is it means that there'd be no such thing as privacy anymore. It would mean that there'd be no such thing as insight, no such thing as creativity. Interesting. Because as soon as we could create the problem, we could also solve it.
00:37:16
Speaker
Yeah. I actually have a quote from Scott Aaronson at MIT. He said, if P equals NP, then the world would be a profoundly different place than we usually assume it to be. There would be no special value in creative leaps, no fundamental gap between solving a problem and recognizing the solution once it's found. So now, that has not been solved, obviously. No, it's actually, I think, one of the Millennium Prize Problems. So if you can solve it, you win a million dollars. If it's solvable. And if you can solve it, everything's meaningless.
00:37:45
Speaker
Basically. Actually, I do wonder if the solution even existed, if that would mean that P equals NP. There might be a way of solving it indirectly in a really creepy way, but yeah.
00:37:59
Speaker
For the most part, I think the consensus in the scientific community is that P does not equal NP. There's still a small population of computer scientists, specifically, who I think are just hoping that it's true, because it is very elegant. And it means that computers could actually do the work of mathematicians, which I think all computer scientists secretly hope.
00:38:23
Speaker
I mean, I think that they will be able to do the work of mathematicians once they have the same intelligence as humans, but not before then and not in any predictable way. So you say that nothing would be safe anymore. So could you go into a little more detail about what you mean by that?
00:38:38
Speaker
Yeah, that one I'm stretching a little bit on, but what it means is that if you can factor... You're talking about cryptography essentially, right? Yeah, and you can verify very easily that something is signed correctly if you're given a certain set of data.
00:38:58
Speaker
So if you could reverse that, if P equals NP essentially, that would mean that we'd be able to, there'd be no function that's harder to go one way than the other way, basically. Right. So just as quickly as we create the security, we can break it.
00:39:12
Speaker
Yeah, which would mean that there's no fundamental security in the universe. Google's algorithm is a pretty simple algorithm, but it's one of those algorithms that you would think is a lot more complicated than it has to be.

Google's PageRank and Algorithmic Insights

00:39:28
Speaker
So let's break it down.
00:39:30
Speaker
It's called PageRank at its base. And what it is, is you have, let me define the problem. You have a bunch of websites or nodes. They're all connected to other websites using links. You have to rate these links by how important they are. And then you have to determine how important a webpage is by its links. Now it's a little bit of a circular problem.
00:39:55
Speaker
Because if you have a link from a very important website, then you have a very high rank on the PageRank scale, even higher than somebody with a lot more links from meaningless things like, you know, viruses. And that's the problem. And it's one of those problems that's actually solvable in polynomial time. And the way that it works is very simple.
00:40:20
Speaker
So what we're talking about here is what happens when you type something into a Google search bar. So when you go to Google and type something in and search something, it performs this algorithm that then brings up all these results. And so what Jonathan is describing to you right now is what's happening after you press enter.
00:40:37
Speaker
Yeah, and one thing about a matrix is that it does what's called a linear transformation on a vector. Don't run away. What it means is you just take a bunch of numbers, you do a certain transformation that's very predictable with a matrix, and you get a new vector out. Vector meaning list of numbers.
00:40:56
Speaker
You know, I want to say something really quickly. This is a slightly off topic, but very justified. There are some phenomenal YouTube videos on linear algebra that gave me an appreciation for it and an understanding for it while I was working my way through my master's in electrical engineering. So all of our students out there, there's so many phenomenal videos on YouTube that show you what all this means and the relevancy. And I felt like I had to say that for some of our listeners. Oh yeah, YouTube is a fascinating and
00:41:25
Speaker
thing when it comes to math, especially linear algebra, there's a lot of good things and we do urge you to check that out after the episode. Yeah, just Google it. I was Googling something while we were talking about that and it's really meta and mind-blowing and I would recommend it for our listeners.
00:41:42
Speaker
Now, what's cool about this matrix is the entries. So let's say the entry m rows down, n columns in, is one if the mth webpage links to the nth webpage. Now, you do a little finagling with this, and if a link is near the top of your page, it's probably more important.
00:42:04
Speaker
So you change the numbers a little bit based on that, and then you just multiply this matrix again and again and again against your vector of ranks, and then it stabilizes after a while. It just kind of magically does that, and you get the PageRank of every website.
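That repeated multiplication can be sketched with a toy three-page web (the page names, the damping factor, and the iteration count are illustrative assumptions, not Google's actual settings):

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank: repeatedly redistribute rank along links until
    the ranks stabilize, as described above."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        # Everyone gets a small baseline, then rank flows along links.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Tiny web: A and B both link to C, and C links back to A,
# so C ends up the most important page.
web = {"A": ["C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
```

After a few dozen iterations the ranks stop changing, which is the "it stabilizes after a while" behavior Jonathan describes.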
00:42:22
Speaker
Somehow it sounds very simple and yet incomprehensible all at the same time. And a lot of good algorithms are kind of like that. There's backpropagation, which is as simple as taking the derivative and changing a neural network based on derivatives.
00:42:39
Speaker
but using that we can solve highly nonlinear problems. There's calculating the stresses on bridges using matrices and that just comes out magically too. There's so many magic things about algorithms. Can I ask real quick just to help round out my understanding of search algorithms. You just explained the Google search algorithm. Can we explain some other internet search algorithms and talk about how they differ?
00:43:08
Speaker
Well, there's the naive version of it where you just look for web pages based on how many terms are the same in your search versus the thing that you find. The problem with that is that it doesn't take relevancy into account, so you don't get the awesome Google effect of getting exactly what you want. We're not sponsored by Google. We're just in awe of them. Almost every search algorithm now is based on PageRank.
00:43:36
Speaker
OK, OK, yeah, you know what's interesting? I love thinking about the evolution of algorithms, including libraries like the Dewey Decimal System, and just different ways of sorting information and accessing it, because that's a very relevant question in today's world. So that's, I don't know, it's very interesting to think about.
00:43:52
Speaker
Oh, it definitely is. There's also facial recognition algorithms, which run in polynomial time. The way that these work is you have these things called eigenfaces, which are basis images, matrices that can be added up in different amounts to make anybody's face.
00:44:10
Speaker
Saying, oh my goodness, eigenface. So the face, obviously, is just this topological shape, but that's how we identify each other. That's how, you know, we know you from me. So I know there's face swap on some of these apps, what is it, Snapchat, something like that? So that uses creepy facial recognition software. Well, that's insane.
00:44:31
Speaker
I feel like Eigenfaces would be the name of a dystopian movie or novel or something. Like Face Off, but, like, Eigenfaces. Right, the mathematical dystopia that is Eigenfaces. They see a face and they don't know who it belongs to. That sounds so good. Has there been an eigenfaces joke about the Snapchat thing yet? There should be. Or the movie Face Off, like Face Swap and Face Off.
00:44:57
Speaker
Can't there be a Nicolas Cage and John Travolta face swap? Sorry, tangent. Hashtag, math puns.
00:45:06
Speaker
I feel like for eigenfaces you'd have to have, like... because the only keyword I'm coming up with is eigenvalue, which I know pertains to matrices in some regard, but I don't for the life of me remember what it is or how it's used. An eigenvalue is like, if you apply a matrix to a vector and the vector doesn't change direction, that vector is an eigenvector, and the amount it gets scaled by is the eigenvalue. So I was gonna say eigenvalues are the unchanging ones, consistent through time. They're like the sacred...
00:45:30
Speaker
So maybe eigenfaces would be those that are unchanging through time. Yes. Well, we found the perfect eigenface and we have to leave it there. There's no more development in humanity.
00:45:41
Speaker
Well, I mean, you could use eigenfaces to make like a quote unquote perfect face if you take these faces and you link them to each other using PageRank. This is like your big double-decker sandwich with all of our topics. We're just putting them together, guys. Just putting them together. And then you use genetic algorithms. Go back and watch episode
00:46:01
Speaker
Episode eight. Episode eight. And then you use the genetic algorithm to further optimize the perfect face. And then you could use a traveling salesman problem to find the best route to display this face in front of crowds. This is your triple decker special sandwich. And what would be your sauce? You know, I have some kind of special sauce here now.
00:46:21
Speaker
the master theorem. No, we're not gonna talk about the master theorem. I was gonna say Shannon's information theory. There you go, that's right. Entropy, dude. Entropy. How could we miss that? Dude, work in entropy here. Come on, you're on a roll. And then we introduce a random component. It falls into chaos. It decays over time. You know, the whole sandwich, the whole mathematical sandwich decays over time. That's entropy, right?

Random vs Deterministic Algorithms

00:46:44
Speaker
Yeah, and actually, let's talk a little bit about randomization and what it does to algorithms real quick, and just some general design philosophies when it comes to algorithms. Random algorithms are actually a lot more robust than deterministic algorithms. Yeah, so I think of random as, like, not thought out, you know what I mean? Like, you wouldn't say just hand me a random tool, would you? But in this case, you do.
00:47:08
Speaker
You can almost think of random as being in some sense, and we're not going to go very deep into this because it's terrifying, but free will. Randomness is one of the justifications for the idea of free will. Or almost imbuing algorithms with their own little degree of free will by giving them randomness. In the beginning, God created algorithms and gave them randomness.
00:47:32
Speaker
Something like that. We know very little about this because our general rule of thumb is that if it works more than a third of the time, then it's kind of a good algorithm. It's such a rough estimate. But some other things about algorithms is that, in general, you could trade space and time.
00:47:52
Speaker
So if an algorithm takes a lot of space, which means it takes a lot of computer memory to solve, then you could often do it faster. So let's think about a sorting algorithm. If I don't have a big, huge table to lay out a bunch of cards on, I'm kind of limited to picking out the smallest card every single time and putting it in a new deck. So you can kind of trade space for time.
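The card-picking procedure described here is essentially selection sort; a minimal sketch (an illustration, not the hosts' own code):

```python
def selection_sort(cards):
    """Repeatedly pick the smallest remaining card into a new pile.

    Uses almost no extra working space, but costs roughly n**2
    comparisons -- the space-for-time trade-off described above."""
    deck = list(cards)
    sorted_pile = []
    while deck:
        smallest = min(deck)   # scan the whole remaining deck
        deck.remove(smallest)  # pull that card out
        sorted_pile.append(smallest)
    return sorted_pile
```

With more memory (a "big, huge table"), faster approaches like merge sort become possible.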
00:48:21
Speaker
Also, bigger problems are solvable relatively more quickly.
00:48:41
Speaker
So, about bigger problems being solvable relatively more quickly: an example of that is multiplication. When you do multiplication, you have to multiply every numeral by every other numeral,
00:48:56
Speaker
when you're multiplying two numbers together in the way that you're taught in school. So that takes n squared time. However, there's an algorithm that can do it in n to the log base two of three time, about n to the 1.585, which is asymptotically much faster for very large numbers. I'm sorry, I have to interrupt. I have a challenge that I always want to do, the Breaking Math challenge, where I say: explain what you just said to, like, an eight-year-old. Like, how would you explain n to the log two of three to an eight-year-old? What would you say?
00:49:23
Speaker
I would say when you fall down, you go faster and faster every time. That's different than running. So it's somewhere between running and falling in how fast it is. Okay. Okay. Interesting. Interesting. It's like falling with a parachute.
00:49:38
Speaker
Okay. A little bit. So maybe, or falling through tree branches that slow you down a little bit. It's like falling through tree branches where the branches get thinner and thinner and thinner very slowly. That's a good math problem. It's insane, man. But yeah. I don't feel like this is going to give the eight-year-old a very positive experience with math. Are we going to fail at this? Are we going to fail? Oh, no. Oh, no. I'm just trying it out. You're going to associate math with falling through trees. Oh, no. Well, then... oh, okay. That's horrible.
00:50:08
Speaker
So yeah, this is called Karatsuba's algorithm for multiplication, and he actually developed it in one week after he saw a lecture where the professor said you can't solve it faster than n squared, and he said, I'll show you. But you have to have numbers that are about 600 digits long to make it worthwhile.
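The trick is replacing four half-size sub-multiplications with three; here is the textbook form of Karatsuba's recursion (a sketch, not a production big-number routine):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with three recursive
    half-size products instead of four, giving about n**1.585
    digit operations instead of n**2."""
    if x < 10 or y < 10:
        return x * y  # single-digit base case: just multiply
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # The third product recovers the cross terms without two more calls.
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```

In practice the recursion's overhead means it only beats schoolbook multiplication on quite large numbers, which is the "600 digits" point made above.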
00:50:26
Speaker
I'm sorry, I heard that and I thought of an algorithm for a challenge, a confident-mathematician algorithm. You know what I mean? Like, if you want to find innovation, here's your algorithm: just, you know, find a bunch of experts and challenge them. No, I'm just kidding. For students in college: apply disappointing lecture.
00:50:44
Speaker
A polite, challenging lecture. Yeah, I know. I mean, that's pretty standard reverse psychology: tell someone they can't do something, and it's the only thing they want to do. Yeah, and of course, obviously, that's to be taken with a grain of salt, clearly, because there are times where you can't do something, but I was just trying to think of it. That sounds like a challenge. Says you. I could do anything. Okay. Oh, yeah? Can you be a better Gabriel Hesch than me?
00:51:11
Speaker
I don't know that I'd ask Jonathan if he could do something better than... you know, I don't want to face my own horror movie, like, in my mind. Funny story, funny story: we already talked about Face Off and eigenfaces. I don't think I want to challenge Jonathan to take over your face, Gabriel.

Episode Conclusion and Future Plans

00:51:32
Speaker
Algorithms are a way of interacting with a mathematical landscape, and they give us an appreciation for space and time. We've explored everything from coloring graphs, to Google's search algorithm, to the philosophical ramifications of the famous P equals NP problem. It's clear, at this point, that the study of algorithms is a nascent science, but a rich one indeed. I'm Jonathan. And I'm Gabriel. And this has been Breaking Math. On Breaking Math today we had... Amy Lynn. And Amy, is there anything you'd like to plug?
00:52:01
Speaker
I love my school. I work at the Public Academy for Performing Arts, an APS charter school that serves 6th through 12th grade. If you're someone in Albuquerque who is interested in performing arts, or your child is, presumably, you should send them on over to PAPA. Awesome. And we are brought to you in conjunction with KUNM Generation Listen. Leela, is there anything you'd like to plug about that?
00:52:24
Speaker
We are wrapping up, it has been our great pleasure to help you gentlemen bust out what, 12 episodes of Breaking Math? Super cool, look forward to having you on my live Freeform show soon and again in the future. And we'll be back on campus quite active and with a vengeance next semester. Look for us at Welcome Back Days, thank you. Now in the meantime, we're still gonna be bringing you Breaking Math episodes.
00:52:51
Speaker
Yes, we are really excited. We will be recording over the summer on a variety of topics, and have a variety of guests, expanding our topics quite a bit. And we'll keep bringing you the same great content you've come to expect. Thanks for listening.