
39: Syntax Matters: Syntax... Matters? (Formal Grammar)

Breaking Math Podcast

We communicate every day through languages; not only human languages, but other things that could be classified as languages such as internet protocols, or even the structure of business transactions. The structure of words or sentences, or their metaphorical equivalents, in that language is known as their syntax. There is a way to describe certain syntaxes mathematically through what are known as formal grammars. So how is a grammar defined mathematically? What model of language is often used in math? And what are the fundamental limits of grammar?

Transcript

Episode Introduction and Technical Notes

00:00:00
Speaker
Before we start the episode, I wanted to say that there is some clicking in the episode, some kind of interference. We're still looking into what happened to make sure it doesn't happen again. It happens for probably about 20 seconds total, at various parts of the episode. I cut out as many parts of it as I could, but I could only do so much. So with that in mind, please enjoy episode 39 of Breaking Math.

Introduction to Episode Theme and Guest

00:00:28
Speaker
We communicate every day through languages, not only human languages but other things that could be classified as languages, such as internet protocols or even the structure of business transactions. The structure of words or sentences, or their metaphorical equivalents, in that language is known as their syntax.
00:00:45
Speaker
There is a way to describe certain syntaxes mathematically through what are known as formal grammars. So how is a grammar defined mathematically? What model of language is often used in math? And what are the fundamental limits of grammar? All of this and more on this episode of Breaking Math. Episode 39, syntax matters. Syntax matters?
00:01:11
Speaker
I'm Sofia, and this is Breaking Math. And with us, we have on Alex Alanis, who works at the Nuclear Weapons Center with Gabriel. Thank you for being on the show. You're welcome.

Support and Promotions

00:01:22
Speaker
So if you want to support us, you can go to patreon.com slash breakingmath. For one dollar, we will give you a shout out on the podcast with your favorite math concept. So if your name is Jerry and you like tensors, you can say Jerry tensors.
00:01:33
Speaker
Five dollars gets you gold access to Breaking Math: episodes early, when we manage to get them out early, and some outlines. And if you want to donate $22.46 a month, you can get a tensor poster. Or if you just want to buy that outright, you can go to facebook.com slash BreakingMathPodcast and buy our tensor poster there. Our Twitter can be found at twitter.com slash BreakingMathPod. And of course we have our website at breakingmathpodcast.com.

Guest's Background in Formal Grammars

00:02:00
Speaker
Alex, what is your familiarity with parsing and formal grammars and that sort of thing? My familiarity with formal grammars is limited. I have a master's degree in mathematics and a doctorate in physics. I've done a lot of computer science. I hear the term parsing; it's something that computer scientists are familiar with. In addition to that, I speak three Romance languages, and English, of course.
00:02:26
Speaker
I'm always listening to differences in languages. I don't have a formal background in parsing, per se.
00:02:32
Speaker
In some of the research that you, Gabriel, and I have been doing, we've just been talking back and forth about various ideas. And to me, what parsing is, I mean, there's a formal definition really, but the best definition I can think of is constructing structures out of sequences: taking a sequence and making a structure out of it. And I remember you were talking about some ideas that you had about artificial intelligence and how it could be used with graphs. Do you want to talk at all about that?
00:03:02
Speaker
Yeah, correct. In fact, I'm looking at a paper here. It's called "Towards Automatically Extracting Story Graphs from Natural Language Stories," and it's work being done at, I see here, Drexel University. It is a tool which uses natural language processing to extract narratives from Russian kids' stories, an attempt to try to understand what the stories are being written about,
00:03:29
Speaker
who the characters are, how the dynamics are playing out. At the same time, I have been working on my own time on artificial intelligence, and on the application of algebraic topology to reading novels and condensing avatars from fictional works. At some point, I will meet the problem of natural language processing. Yeah, and natural language processing is obviously part of all that. That's the first thing we're going to be talking about, really.

Complex Sentence Structures and Mathematical Grammar

00:03:59
Speaker
So "buffalo" obviously means the animal, the bison, and it also means Buffalo as in, of course, New York. But to buffalo can also mean to harass somebody. So you could say, for example, "Buffalo buffalo buffalo."
00:04:20
Speaker
Which means: bison from Buffalo, New York, harass. So "Buffalo buffalo," meaning bison from Buffalo, New York, and then "buffalo," they harass. That's interesting enough. That's new to me.
00:04:35
Speaker
And let's see if you can parse this next one without looking at the answer key. It has more buffaloes: "Buffalo buffalo buffalo buffalo buffalo buffalo." It's basically, you have to sit down with paper and pencil, right? Yeah, I see that, yeah. I see a tree expanding and a lot of ambiguities forming.
00:04:56
Speaker
Yeah, that's actually an important point that I forgot to mention. At least the way that Noam Chomsky defines it, and his models have been used for grammar all over the place, grammar is recursive in its definition, at least for natural languages.
00:05:15
Speaker
So you could even go on. That last sentence means: bison from Buffalo, New York, that bison from Buffalo, New York harass, themselves harass bison from Buffalo, New York. And then you could say "Buffalo Buffalo Buffalo Buffalo Buffalo Buffalo Buffalo," etc. And we'll post a meme with all these buffaloes on the Facebook page.
00:05:39
Speaker
An excellent source for that would be Gödel, Escher, Bach, the book by Hofstadter. Oh, recursive language. Yeah, we've talked a little bit about that book. So to continue our thread of natural language processing, we're going to be breaking down the sentence "Mary had a little lamb." So you can identify that as a sentence, right? Yes.
00:06:03
Speaker
But a sentence has a few parts: noun, verb, etc. I'll just do the first one. You can start with "Mary" being the noun, and then "had a little lamb" being a verb phrase, the state of having a little lamb.
00:06:21
Speaker
So you could break that down into "Mary" on one side of the tree, and the other branch being "had a little lamb." So how would you break that down further? "Had a little lamb." Well, there's an object, right? Had a lamb. So there's a verb structure in the middle of "Mary had a little lamb." And in "a little lamb," "little" would be descriptive of the lamb.
00:06:44
Speaker
Yeah, I think the way that I've seen it broken down usually: "little lamb" together is a noun phrase, because it's an adjective and a noun. And then the determiner "a" before it makes it still a noun phrase. And the verb before that makes "had a little lamb" a verb phrase. So that's kind of what we mean by breaking a sentence down in natural language processing.
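To make that tree concrete, here is a minimal sketch in Python, representing each constituent as a nested tuple of (label, children). The labels (S, NP, VP, Det, Adj, N, V) are standard linguistics abbreviations; the exact bracketing is one reasonable reading of the sentence, not the output of any particular parser.

```python
# "Mary had a little lamb" as a constituency tree: (label, children...).
# Leaves are the words themselves.
tree = (
    "S",
    ("NP", ("N", "Mary")),
    ("VP",
        ("V", "had"),
        ("NP",
            ("Det", "a"),
            ("NP", ("Adj", "little"), ("N", "lamb")))),
)

def show(node, depth=0):
    """Print one constituent per line, indented by its depth in the tree."""
    label, *children = node
    print("  " * depth + label)
    for child in children:
        if isinstance(child, str):
            print("  " * (depth + 1) + child)   # a leaf: an actual word
        else:
            show(child, depth + 1)

show(tree)
```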
00:07:07
Speaker
So one of the things that's very interesting is I've been playing with Stanford's CoreNLP, which is their natural language processing tool. You can type "Mary had a little lamb" into it, and it will parse it for you in many different and strange ways. And it is a tool, from my reading, that powers many of the natural language processing tools out there, including Google's engine and Siri, etc. It's fun to type in sentences like "Mary had a little lamb" and see how they get parsed.
00:07:35
Speaker
So how does it parse? I have not played around with this tool. How does it parse ambiguous sentences, like really ambiguous ones? So first of all, it's an incredibly large tool. There's probably a hundred-page manual for it, and there are so many different approaches to parsing in it that I've just begun to explore it. I just use the generic default settings right now. So I can't answer that question, other than that it has lots of different ways.
00:08:00
Speaker
Well, cool. And what's the name of that tool one more time? It's Stanford's NLP Core, or CoreNLP. I believe it's CoreNLP, all as one word, and you can play with it on their website, or you can download it and run it as long as you have, I believe, Java running on your machine.
00:08:21
Speaker
All right, so now we're going to talk about how a language is defined mathematically. So first of all, we just have to define a few terms. A symbol is like a, b, or c, or the numeral 1, or whatever. An alphabet is a set of symbols, like a through z, whatever. A string over an alphabet is a finite sequence of symbols from the alphabet, so like words, like, I don't know, "soup" or "chair."
00:08:45
Speaker
And that would be C-H-A-I-R, which would be a sequence of symbols. And then finally, a language is defined as a set of strings, which are the words of the language, or sentences. Let me just give a really simple example of a language. Imagine a language where the only words are "cat" and

Generating Language with Symbols and Strings

00:09:06
Speaker
"mat." That would still be a valid language because it has symbols, right? A, M, C, T.
00:09:13
Speaker
Yeah, and so we have two words in that language, and the words are "cat" and "mat." So it's not a very interesting language. Another language could be the set of all strings with three a's and then three b's, or 20 a's and then 20 b's: the same number of a's followed by the same number of b's. That's a language. And that's actually a language that's very simple to generate, versus something that's a little bit more complicated, like a run of a's, a run of b's, and a run of c's that are all the same length.
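Here is a quick sketch in Python of what membership in that first language looks like; the alphabet {a, b} and the function name are ours, just for illustration:

```python
def in_anbn(s: str) -> bool:
    """True iff s is n a's followed by n b's, for some n >= 0."""
    n, remainder = divmod(len(s), 2)
    return remainder == 0 and s == "a" * n + "b" * n

assert in_anbn("") and in_anbn("aaabbb")
assert not in_anbn("aabbb") and not in_anbn("abab")
```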
00:09:42
Speaker
So do you have any questions about what a language is or how they're used? Your approach is interesting. One of the most historically important developments of a language similar to what you're talking about is due to Georg Cantor, who developed set theory in the late 1800s. He started with wanting to formalize mathematics using pure logic, pure symbols and grammar, and he said, okay, I'm going to start with the null set. Then I'm going to follow that with the set that contains the null set; that will be my one,
00:10:12
Speaker
and the null set being, for listeners who don't know, well, a set is like, you could think of it like a bucket with things in it; the null set is a bucket with nothing in it. So sorry, continue. So, right, the set that contains the null set would be the one, and it's recursive: the set that contains the set that contains the null set is two. And in this way, over several decades, and with several other mathematicians contributing,
00:10:36
Speaker
they were trying to formalize mathematics so that you could present it with a theorem and resolve it after a fixed number of steps. And this is similar to your PC, running ones and zeros and AND and OR logic, when it freezes on you: it's running binary logic, and you don't know if it's going to take an infinite amount of time to resolve or a finite amount of time. That's essentially the computer science version of the incompleteness theorem. So languages are very interesting in that sense.
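As an aside, the encoding of the naturals the guest describes (each number is the set containing its predecessor, sometimes called Zermelo numerals) is short enough to sketch, using Python frozensets as stand-ins for sets:

```python
def numeral(n: int) -> frozenset:
    """0 is the null set; each successor wraps the previous numeral in a set."""
    s = frozenset()             # {} -- the null set, our zero
    for _ in range(n):
        s = frozenset([s])      # { previous } -- the next numeral
    return s

print(numeral(2))  # frozenset({frozenset({frozenset()})}), i.e. {{{}}} = 2
```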
00:11:03
Speaker
Yeah, and there are actually equivalents in the formal language world that directly connect to the incompleteness theorem. There are certain languages that no grammar can generate, no formal grammar. We'll talk about formal grammars in a sec.
00:11:21
Speaker
First, we're going to talk about context-free grammars, and we're going to keep doing the terms thing. First up are non-terminal symbols. Those are symbols in their own separate alphabet that are never used in the final sentences produced by the grammar.
00:11:36
Speaker
So they're like intermediary symbols, right? But there's one privileged non-terminal symbol, and that's the start symbol. Usually that's denoted by S, and S is just what you start from: you start with the single letter S. And then there are terminal symbols, which are used in the actual output. So if we're generating a language, those are the alphabet of the language.
00:12:01
Speaker
And then there are rules, which are pairs of sequences: the first is a sequence of terminals and non-terminals that has to contain at least one non-terminal, and the second is any sequence of terminals and non-terminals. So to break that down, we're going to do a toy grammar. Let's say that I start you off with the string C-A-T, right? OK.
00:12:24
Speaker
And I give you the rule: A can go to A-N. What would C-A-T become? Yeah, it would become C-A-N-T. But if I changed our T to L-M, it would become what?
00:12:40
Speaker
So that's essentially how these grammars work; that is how they are used to manipulate symbols. So we start with the starting symbol, and the first thing we're going to talk about is a context-free grammar. In a context-free grammar, you always go from one symbol to many symbols. The reason why is because the context of any one symbol does not affect its role within the grammar. Does that make sense?
00:13:09
Speaker
I'd have to think about that. So before, we had the rule A goes to A-N, which turns "cat" into "cant," right? Yes. So there we have one symbol going to two symbols. If it were C-A-A-T, it could have become C-A-N-A-T, right? Correct.
00:13:28
Speaker
So the fact that the A was next to another A didn't matter at all. But if we change the rule to say, like, A-T becomes O-P, "cat" would become "cop," and the context of the A matters in that case, because it's next to a T. Okay, I think I'm following.
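As a toy sketch of the difference, here are those two rules applied in Python; str.replace with a count of 1 stands in for a single rule application:

```python
# Context-free style: the left-hand side is a single symbol.
print("CAT".replace("A", "AN", 1))   # -> CANT

# Context-sensitive style: the A rewrites only when it sits next to a T.
print("CAT".replace("AT", "OP", 1))  # -> COP
```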
00:13:48
Speaker
Yeah, so in a context-free grammar, we basically expand the symbols. So if you were doing this in a word-editing program, you would just take the symbol and replace it with one symbol, many symbols, or even no symbols. So let me give an example of a grammar that generates strings with equally many a's and b's. So you start with the start symbol S, right? Correct.
00:14:16
Speaker
And S can go to aSb, where a and b are terminal symbols, or S can just disappear, which is usually denoted by an arrow and then epsilon. So what happens if you replace the S in aSb with aSb itself? If you replace the S with aSb, then you get aaSbb. And if you did that one more time? It would be aaaSbbb, I'm guessing.
00:14:46
Speaker
Yeah, and then we can replace that S with the null string, and then it just becomes aaabbb. And that will always generate strings with equally many a's and b's. So that's a pretty simple grammar.
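Here is a minimal sketch of that derivation in Python; each replace is one application of a rule of the grammar S → aSb | ε:

```python
def derive(n: int) -> str:
    """Apply S -> aSb n times, then erase S with the rule S -> epsilon."""
    s = "S"
    for _ in range(n):
        s = s.replace("S", "aSb")
    return s.replace("S", "")

print(derive(3))  # aaabbb -- always the same number of a's as b's
```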
00:15:02
Speaker
All right, so now we're going to talk about a little bit more of a complicated context-free grammar, and that is the one that defines the normal order of operations, not including exponentiation: just addition, subtraction, multiplication, and division. So you start with an expression, right? And an expression could be, like, 2 plus 3 times, in parentheses, 4 minus 5. All right.
00:15:28
Speaker
But an expression can either be a term, or a term adding or subtracting another term. So term plus term is an expression, or

Context-Free Grammar and Symbol Manipulation

00:15:37
Speaker
term minus term is an expression. So an expression can always be expanded into term plus term or term minus term. Does that make sense? It does. And a term can either be transformed into a factor,
00:15:51
Speaker
or a factor multiplying or dividing another factor. But a factor is either an expression in parentheses or a number. Using that, you could break down any order-of-operations expression. So for example, let's say, all right, we're just going to do 2 plus 2 as a quick example for this notation.
00:16:11
Speaker
So 2 is a number, right? And the plus, it's used in the grammar: an expression can be a term plus a term. So we're going to refer back to it later, but right now it's just a symbol. And then we have another 2, which is a number, right? Yes. Let's say we're going to parse this from the bottom up. A number is also a factor, correct? I believe so. So if we replace the numbers here with factors, we have factor, then a plus symbol, then factor, right? So far, yes.
00:16:38
Speaker
Now a factor can also be a term, right? I believe so, yes. So then we have a term, then a plus symbol, then a term, and that can be an expression. The reason why we brought it all the way back up to expression like that is because expression is what we start with. So that is a little bit more of a complex context-free grammar.
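A grammar like this can be parsed top-down almost mechanically. Here is a sketch of a recursive-descent evaluator in Python, with one function per non-terminal; the tokenizer and names are ours, not from the episode, and the repetition loops bake in left-to-right evaluation, which anticipates the ambiguity correction that follows.

```python
import re

def tokenize(src: str):
    """Split an arithmetic expression into numbers, operators, and parens."""
    return re.findall(r"\d+|[()+\-*/]", src)

def evaluate(src: str) -> float:
    tokens = tokenize(src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def expression():        # expression -> term (('+' | '-') term)*
        value = term()
        while peek() in ("+", "-"):
            value = value + term() if eat() == "+" else value - term()
        return value

    def term():               # term -> factor (('*' | '/') factor)*
        value = factor()
        while peek() in ("*", "/"):
            value = value * factor() if eat() == "*" else value / factor()
        return value

    def factor():             # factor -> '(' expression ')' | number
        if peek() == "(":
            eat()                     # consume '('
            value = expression()
            eat()                     # consume ')'
            return value
        return float(eat())

    return expression()

print(evaluate("2+2"))        # 4.0
print(evaluate("2+3*(4-5)"))  # -1.0
```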
00:17:02
Speaker
Just a quick correction here. The grammar I gave is ambiguous. For example, 1 minus 2 minus 3 minus 4: is that 0? Because 1 minus 2 is negative 1, and 3 minus 4 is negative 1. So is 1 minus 2 minus 3 minus 4 negative 1 minus negative 1?
00:17:19
Speaker
Or do you do it in a row: 1 minus 2, minus 3, minus 4, so you get negative 8? It's not something that is clear from the grammar. So a grammar that would be unambiguous would be: an expression is an expression plus or minus a term, or just a term; a term is a term times or divided by a factor, or just a factor; and a factor is an expression in parentheses, or a number,
00:17:49
Speaker
or a variable, in some versions, but that's not what we talked about in the episode. Here's the rest of the episode. And a computer might use something like what's called the shunting-yard algorithm to convert it to reverse Polish notation. And I remember you were mentioning earlier that you have a calculator that uses this?
00:18:07
Speaker
Yes, I love it. It's a Hewlett-Packard 41CX from the early '80s, I believe. It's a reverse Polish machine, and its claim to fame is that it could fly the Space Shuttle if all the computers went down. Well, that's pretty cool. So how would you explain the process of... So let's say you wanted to do 2 plus 2 on a reverse Polish notation calculator. How would you do that? It would go: 2, enter, 2, plus.
00:18:36
Speaker
Yeah, and the reason why is because you can imagine you have a stack of numbers. So when you say 2 and press enter, you have a stack, and you can think of this like a block with the number 2 on it. And then you press another 2. And then when you press plus, it's kind of like pressing enter and plus at the same time,
00:18:55
Speaker
because it puts a second 2 on the stack, then takes both of them off with the plus and generates a 4. So to parse the language that's described by the grammar we were just talking about, the order of operations, you can use something like the shunting-yard algorithm. And it's pretty simple. So let's say you have
00:19:20
Speaker
an output queue, or an output series, right? So this is the stuff that will go into the Hewlett-Packard calculator. So let's say we have our 2 plus 2 expression from earlier. So these are the rules: minus, plus, divide, and times are the operators, in order of increasing precedence.
00:19:43
Speaker
Parens are special operators. And when you put an operator on the operator stack, because there are two things, there's a stack and there's an output queue. And a stack is just a thing where you can put numbers or symbols on top of, and take them off. So if I push a 1, a 2, and a 3, the 3 would be at the top of the stack. If I popped the 3 off, the 2 would be at the top of the stack. If I pushed a 4, the 4 would be at the top. If I popped that 4, the 2 would be back at the top, and so on.
00:20:10
Speaker
So let's actually do 2 plus 2 times 3. So I would do: 3, enter, 2, times, 2, plus.
00:20:18
Speaker
Yeah, 3, enter, 2, times would get a 6 at the top of the stack. And then 2 would leave a 6 and a 2 at the top of the stack. And then plus would pop both of them off, add them together, and give you... wait, 8. You're right, because it was, I think you said, 2 plus 2 times 3.
00:20:43
Speaker
Hey, everybody. I have a math podcast. So let's say 2 plus 2 times 3. So we put 2 on the output queue because it's a number. So what's going into the calculator already is the number 2.
00:20:58
Speaker
And then the next symbol we encounter is a plus. Now, that doesn't go into the calculator; that goes on our operator stack. Then we have another 2, so that goes into the calculator. So you could do 2, enter, 2, enter, which is a different order than you would have put it in, but it's still a valid order. And then... Yes, it is. Yeah. Because you get 2, enter, 2, enter, 3, times, plus.
00:21:22
Speaker
Yeah, and if you didn't catch exactly why that would work: 2, enter, 2, enter, so we have two 2s on the stack, and then 3, so we have a 2 and a 3 at the top of the stack with another 2 below them. If we press times, the two at the top, the 2 and the 3,
00:21:39
Speaker
get multiplied together and re-pushed, so that it's a 6 and a 2. Then when you press plus, it takes the 6 and the 2, adds them together, and produces 8. And that's what's going to be generated by the algorithm. So the rule with the operator stack is that when you're putting another operator on it, it sinks; it basically shoves operators off of it until it is the highest-precedence operator on the stack.
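That stack discipline is short enough to sketch directly. Here is a minimal reverse-Polish evaluator in Python, assuming a space-separated token string:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def eval_rpn(expr: str) -> float:
    """Evaluate a space-separated reverse Polish expression with a stack."""
    stack = []
    for tok in expr.split():
        if tok in OPS:
            b = stack.pop()              # right operand sits on top
            a = stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("2 2 3 * +"))  # 8.0 -- the 2*3 happens first, then 2+6
print(eval_rpn("3 2 * 2 +"))  # 8.0 -- the HP keystroke order from above
```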
00:22:06
Speaker
So for example, times is higher precedence than plus, right? If I put a times on the stack above a plus, it'll be the first one to come off, because it didn't have to displace the plus. But if they had come in the opposite order, the incoming operator would have displaced the times. So then, at the end of the shunting-yard algorithm, we just take all the remaining operators off the stack one by one. So then you get 2, 2, 3, times, plus. Yes. Or 2, 3, 2, whichever order it was. We get a certain number. It works.
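Here is a compact sketch of the shunting-yard conversion in Python; it uses the standard two precedence tiers (plus and minus below times and divide) with left-associative operators, a slight simplification of the four increasing levels described above.

```python
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}    # higher number binds tighter

def shunting_yard(tokens):
    """Convert a list of infix tokens to reverse Polish (postfix) order."""
    output, ops = [], []                   # output queue, operator stack
    for tok in tokens:
        if tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":          # pop back to the matching paren
                output.append(ops.pop())
            ops.pop()                      # discard the '(' itself
        elif tok in PREC:
            # An incoming operator shoves off anything that binds as tight
            # or tighter, then takes its place on the stack.
            while ops and ops[-1] != "(" and PREC[ops[-1]] >= PREC[tok]:
                output.append(ops.pop())
            ops.append(tok)
        else:                              # a number goes straight to output
            output.append(tok)
    while ops:                             # flush whatever operators remain
        output.append(ops.pop())
    return output

print(shunting_yard(["2", "+", "2", "*", "3"]))  # ['2', '2', '3', '*', '+']
```

Feeding that output to the reverse-Polish evaluator above gives 8, matching the calculator.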
00:22:37
Speaker
All right, so we talked about context-free grammars. Now we've got to talk really quickly about context-sensitive grammars. We talked earlier about a sequence of three a's, three b's, and three c's, right? Yes. Well, it's been proven that that language cannot be generated by any context-free grammar at all. No matter how clever it is, it cannot generate that. You need a context-sensitive grammar.
00:23:03
Speaker
And I actually came up with one over the course of doing this episode. Mine has 30 rules, and then I found an example on Wikipedia with 10 rules. So if you want a good example of something that generates this language, just go to Wikipedia and search for formal grammars.
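For contrast with how hard the grammar is, the language itself is trivial to recognize procedurally; here's a sketch of a membership check in Python (a recognizer, not a grammar):

```python
def in_anbncn(s: str) -> bool:
    """True iff s is n a's, then n b's, then n c's, for some n >= 0."""
    n, remainder = divmod(len(s), 3)
    return remainder == 0 and s == "a" * n + "b" * n + "c" * n

assert in_anbncn("aaabbbccc") and in_anbncn("")
assert not in_anbncn("aabbbcc") and not in_anbncn("abcabc")
```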

Limitations of Formal Grammars

00:23:25
Speaker
All right, so as we talked about before, there are some limitations to formal grammars, one of them being that they cannot produce all formal languages. And so after this discussion on grammar, do you have any more insight about how that relates to the halting problem or anything like that?
00:23:44
Speaker
It's interesting, and as I was looking at your toy example here, I'm reminded very much of the rules of biology: how DNA is turned into proteins, with four letters, A, T, C, G, read in codons three at a time. So it's very interesting to see that that language has produced what we see on this planet, lots of different species.
00:24:06
Speaker
Oh yeah, and there are languages within languages there. And then there's hyper-compact DNA, where reading it offset by one still produces viable proteins, which is one of the most insane things in nature.
00:24:24
Speaker
But yeah, another thing we're going to quickly talk about is Post's correspondence problem. It's a problem that seems really simple at first, but you'll see why it's maddening. So let's say we have two grammars, right? And just to make this easy, let's say that the non-terminals are numbers, like 1, 2, 3, 4, 5, 6, 7, 8, 9.
00:24:51
Speaker
And the terminals, which are going to be in the final productions, are alphabetical letters. So for example, you might have 1 go to A-B and 2 go to B-C. And so if you did 1, 2, it would go to A-B-B-C, because you expanded the 1 and you expanded the 2. So does that make sense so far? So far.
00:25:10
Speaker
So let's say we had two different grammars defined, both from numbers to letters, right? Okay. And my question is: given these two context-free grammars, is there some sequence of non-terminals that results in the same sequence of terminals in both grammars? It seems like this should be an algorithmically easy thing to do on the surface, right? Seems decidable in a finite number of steps.
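Here is a sketch of what a naive search for such a sequence looks like in Python; the two maps play the role of the two grammars, and the instance is a standard textbook example, not one from the episode. The search is bounded, because an unbounded version need not ever halt on a "no" instance, which is exactly the point.

```python
from collections import deque

def pcp_search(top, bottom, max_len=8):
    """Breadth-first search for indices i1..ik with
    top[i1]+...+top[ik] == bottom[i1]+...+bottom[ik]."""
    queue = deque([()])
    while queue:
        seq = queue.popleft()
        t = "".join(top[i] for i in seq)
        b = "".join(bottom[i] for i in seq)
        if seq and t == b:
            return seq
        # Only extend when one string is still a prefix of the other.
        if len(seq) < max_len and (t.startswith(b) or b.startswith(t)):
            for i in range(len(top)):
                queue.append(seq + (i,))
    return None   # nothing found up to max_len -- which proves nothing

top    = ["a",   "ab", "bba"]   # rule i rewrites to top[i] in grammar 1
bottom = ["baa", "aa", "bb"]    # ...and to bottom[i] in grammar 2
print(pcp_search(top, bottom))  # (2, 1, 2, 0): both sides spell 'bbaabbbaa'
```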
00:25:35
Speaker
But it's undecidable. The Wikipedia page does a very thorough job explaining this, well, a semi-thorough job: it's possible to set up two grammars in this way such that the answer to this question would answer the halting problem. And are you familiar with the halting problem? From what I understand, it's the Gödel, Escher... I mean, sorry, it's the Gödel-type problem of: does the theorem stop? Does the computer algorithm stop in a finite number of steps?
00:26:03
Speaker
Yeah, and it is undecidable. It's undecidable, and it's linked really closely to all that, like Gödel and Hilbert and all those people and all their madness. But yeah, basically, you could set up, in a really clever way, a computer that would answer a question that is already known to be undecidable.
00:26:30
Speaker
Right, that's typically the way I understand how the proof works. And it's really interesting when you run into these limitations, especially in computer science. It's easy to think of engineering as something that takes a certain amount of mathematical understanding to master, but it seems like usually you just take the mathematical understanding and find limitations within the physical world with it.
00:26:56
Speaker
Gabriel said you might have a slight theory about blind spots in physics. I mean, I put together probably a 40-page document on limitations of physics, and some of those limitations are inherently built into the mathematics, like we just described. I mean, we've just talked about the
00:27:17
Speaker
halting theorem, and Gödel's problem of the incompleteness of mathematics; that's one of the problems that all mathematical systems rich enough to contain arithmetic are beset with. There are more famous problems than that. I've heard of the Banach-Tarski paradox, which you should look up on the internet. It's very interesting,

Mathematical Paradoxes and Language

00:27:34
Speaker
but it seems to violate the logic of volume and density.
00:27:40
Speaker
Yeah, the basic argument behind the way that this is constructed is: it creates like a binary path to every point on the sphere, and then it addresses them, classifies them into different types of addresses, separates them into different sets, and builds them into two new spheres. That's correct: you can disassemble a sphere and reassemble the pieces into two spheres equivalent to the original.
00:28:02
Speaker
Yeah, which is kind of nuts. And of course, there are undecidable problems in physics, such as if you built a machine that posited an undecidable problem. No, I'm not talking about that. And just to go back to the mathematical case, there is an author out there, Robert Gilmour, probably a mathematician. He was probably not a friend to mathematics, in the sense that he had a distaste for pure mathematics.
00:28:31
Speaker
In his approach, he would think of mathematics as something that should be useful and inspired by physics. And the gentleman, in his books, goes through all the limitations of mathematics that accumulated, after Gödel, through the 1940s and '50s and '60s. And in fact, now we don't know what to do: there are four or five different schools within mathematics on how to deal with these things.
00:28:59
Speaker
Yeah, it's really interesting. We covered a little bit about disagreements in mathematics on our paradoxes episode; I can't remember the name of it right now. But I remember we talked about these rifts in mathematics happening. Usually when there's a big breakthrough ready to happen, it seems like that's when mathematicians really disagree, right before they're going to agree in a kind of new way. But that's maybe me being an optimist, always.
00:29:25
Speaker
My understanding is that people are practical; the centrality of mathematical foundations has faded in importance, and people are just solving problems.
00:29:37
Speaker
Yeah, which is not a bad way to go about it. I mean, that was the structure of mathematics until pretty much, what, the 1600s through the 1800s, or through the 1900s, with all the different schools of thought in India and China and all that. But of course now it's on a global scale, and they're not localized by geography anymore. Something interesting on grammar that we haven't talked about yet:
00:30:00
Speaker
So, going back to Gödel, Escher, Bach, the whole book is dedicated to showing that you need full human intelligence in order to do natural language translation, which Google has shown is exactly not true. I mean, they're doing a pretty good job of natural language translation. And it was very interesting how Douglas Hofstadter looked at that problem. In writing his book, I think he had a poem by a French writer,
00:30:24
Speaker
Marot, I believe. He had it in French, and he would pass it around to many different people who spoke a different first language, and would spin it around the world, and it would come back, and he would see how much it deviated from the original poem. And there are many interesting examples of that in his book.
00:30:45
Speaker
Yeah, it is really interesting. He talks also about decidability quite a bit in that book. We actually did an episode on Gödel, Escher, Bach if you want to go listen to it; I think it's episode seven or so, "G-E-B." We covered how he thinks about the proven theorems as like a white tree, the unproven theorems as like a black tree, and the way they kind of interact and intersect is the land of all the
00:31:10
Speaker
theorems that might be true but cannot be proven within, like, you know, Peano arithmetic, which is a formalization of a certain type of integer and rational arithmetic, right? Is Cantor and all that stuff part of Peano, do you know? Cantor preceded; he was a set theorist. Who am I thinking of, then? Peano axioms, you're correct; Peano axioms are the axioms of arithmetic.
00:31:34
Speaker
What is it, the perpetually epsilon-distant, or something like that, the Cauchy sequences? Are Cauchy sequences part of the Peano formalism? They derive from it.
00:31:48
Speaker
Okay, and Cauchy sequences, just because we're getting into this for a second: they're sequences that get closer and closer to some irrational number, and they're sequences of technically rational numbers, and that's how you define an irrational number using rational numbers, very cleverly. So the axioms of mathematics as set up by Peano, the axioms that are sufficient to encompass arithmetic, are foundational to mathematics, and they're the ones that suffer the Banach-Tarski paradox.
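For reference, the "perpetually epsilon-distant" condition alluded to above is, in standard notation, the definition of a Cauchy sequence of rationals:

```latex
% A sequence (a_n) of rationals is Cauchy when, beyond some point,
% all of its terms stay within any chosen epsilon of one another:
\forall \varepsilon > 0 \;\, \exists N \;\, \forall m, n > N : \; |a_m - a_n| < \varepsilon
```

For example, 3, 3.1, 3.14, 3.141, ... is a Cauchy sequence of rationals that defines the irrational number pi.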
00:32:19
Speaker
Languages are a way of organizing sequences of symbols, and grammars are a way of describing systems that can be used to generate the sequences of symbols of a given language.

Conclusion on Grammar's Role in Language and Technology

00:32:29
Speaker
As we have seen, grammars cannot generate all languages and are a difficult subject of study in their own right. But from natural language to calculators, grammars help shape our world. I'm Sofia, and this has been Breaking Math. And with us, we had on Alex. And Alex, is there anything you want to plug? Not at this moment, but thank you for having me on.
00:32:48
Speaker
Awesome. And I know a few times the name Gödel was pronounced "Godel," and I know people have complained about that on this episode before, but you all can die mad about it.