
14: Artificial Thought (Neural Networks)

Breaking Math Podcast

Go to www.brilliant.org/breakingmathpodcast to learn neural networks, everyday physics, computer science fundamentals, the joy of problem solving, and many related topics in science, technology, engineering, and math. 


Mathematics takes inspiration from all forms with which life interacts. Perhaps that is why, recently, mathematics has taken inspiration from that which itself perceives the world around it; the brain itself. What we’re talking about are neural networks. Neural networks have their origins around the time of automated computing, and with advances in hardware, have advanced in turn. So what is a neuron? How do multitudes of them contribute to structured thought? And what is in their future?


--- 


This episode is sponsored by 

· Anchor: The easiest way to make a podcast.  https://anchor.fm/app


Support this podcast: https://anchor.fm/breakingmathpodcast/support

Transcript

Introduction to Neural Networks

00:00:00
Speaker
Mathematics takes inspiration from all forms with which life interacts. Perhaps that is why, recently, mathematics has taken inspiration from that which itself perceives the world around it, the brain itself. What we're talking about are artificial neural networks.
00:00:16
Speaker
Artificial neural networks have their origins around the time of automated computing and, with advances in hardware, have advanced in turn. So what is a neuron? How do multitudes of them contribute to structured thought? How can they be simulated by computers? And what is in their future?

Meet John Gabriel Baca

00:00:32
Speaker
All this and more on this episode of Breaking Math.
00:00:45
Speaker
I'm Jonathan. And I'm Gabriel. And today we have a new guest, and his name is... John Gabriel Baca. Yes, believe it or not, his name is a mashup of the host's names. So Gabriel, what's your deal?
00:01:03
Speaker
Me? Yes. I'm not Gabriel, and I'm not Jonathan, but I'm very proud to be the guest carrying both their names. Right, so for clarity's sake, I figure I'll go by John, which is what I go by on a daily basis. So we have Jonathan, John, and Gabriel. Yes, absolutely. Does that work?
00:01:18
Speaker
That works just fine, I think. Alright. Well, thanks so much for having me. I love the show and I'm super excited to be on. We are thrilled to have you. Now, tell us, John, you are currently a student at the University of New Mexico. Is that right? That is right. I'm studying journalism and I work at the Daily Lobo, UNM's student-run newspaper. Excellent.
00:01:35
Speaker
Now, you do have prior experience with podcasting, isn't that right? Yeah, just a little bit. I've been dabbling for a couple years now, but I've not had quite the success that you guys have had, so I'm super pumped to be on such a prestigious podcast.
00:01:50
Speaker
Oh gosh, we are honored to have you. We are honored to have a student studying journalism. And also, I'm really excited because we've had some chance to talk about some, not to go off on a tangent here, but some future involvement with the university and podcasts in general. So we are very, very excited about that.
00:02:07
Speaker
We're certainly glad to have you here,

Complexity of Neural Networks

00:02:09
Speaker
John. So essentially, on today's episode, we wanted to bring John specifically as a non-expert. Jonathan, the host of Breaking Math, knows a lot about neural networks. He's in fact written a few. I know very, very little, aside from the fact that I've spoken to Jonathan about it. And John knows nothing, and that was very much intentional because we wanted to have somebody who could speak for an audience member who might not know a whole lot about it and can tell us, whoa, whoa, whoa, can you guys slow down there? Yes.
00:02:36
Speaker
Now the thing is, you might say that I know a lot about neural networks because I've written a lot of them. But the truth is, nobody really knows much about neural networks. They just kind of work magically. Like honestly, let's say it has 100,000 neurons, which is common.
00:02:55
Speaker
That's like having a formula with 100,000 variables. How do you even process that? You have to look at metadata, things like that to see what's going on. You can't really see what's going on directly.
00:03:08
Speaker
Yeah, that's fascinating. So even though there's a lot of mathematics that we understand, this is a field where it's used very, very frequently in this day and age. In fact, Jonathan, how many things can you think of that specifically use neural networks? Does Google? Google uses it, right? Yeah, especially in their self-driving cars. We can't have self-driving cars without RNNs, recurrent neural networks, which means neural networks are connected to themselves.
00:03:35
Speaker
the cat versus dog challenge. I'm not dealing with that. It's a challenge where you could try to tell the difference between pictures of cats and pictures of dogs. And that uses neural networks. There's pretty much no other way to program that. And back in the 50s, I can't recall his name right now, but he thought that the problem of identifying objects from pictures would take his students a month.
00:04:01
Speaker
In this episode, we will be discussing neural networks, what they are, how they work, their history, and some applications for them. To do this, we are going to discuss what a neuron is in nature, as well as what an artificial neuron is in the world of computer science. We will discuss both types of neurons, how they work alone, as well as how they work together, and how they are both similar and different.
00:04:24
Speaker
Also, why are we doing this on a math episode? Because using different types of neural networks, any mathematical function or process can be modeled. This will be explored. And finally, we will go into what place neural networks have in society, which includes security applications, reverse photo recognition, and everything you may have heard about deep learning. Which here means finding patterns deep in information.

History and Philosophy of Neurons

00:04:49
Speaker
All right, so what we're going to be talking about right now is the history of real neural networks and what they could teach us about artificial neurons. All right, so for the next part here, we're going to jump into the history of neural networks. This is actually pretty exciting. This is less than 100 years old, right? Okay, so neurons were first discovered in the late 19th century by Santiago Ramon y Cajal, which I guess he was some kind of scientist.
00:05:13
Speaker
Yeah, he actually did a lot of research, in fact. Did you know that he did research on how the eye processes visual information? Basically, he had an experiment that involved spinning glowing coals. And he noticed that even though you've got a single coal, as you spin it at a certain rate, it begins to look like a complete circle. Kind of like a sparkler. We all do that with sparklers, right? I've noticed that.
00:05:34
Speaker
Awesome, I didn't know that about Santiago Ramón y Cajal. I do know that Goethe, in his theory of color, talks a lot about how dark things look bigger than light things. And that's a lot of the history of neuroscience, that kind of thing: making weird little observations about the way that we see things. I mean, that's in some ways a history of philosophy itself.
00:05:57
Speaker
So part of this is due to the neural networks of our own brain, but then part of it is the fact that our brain takes in discrete information. I mean, there's so many reasons why we see things the way we do, and so many reasons why what we see may not necessarily be the way nature is itself.
00:06:16
Speaker
So, let's talk about Horace B. Barlow. Basically, in 1953, he attached a speaker to a frog's eye, technically to a frog's retinal ganglion cell, and he exposed it to different stimuli, and he noticed it made different sounds. That's fascinating. Just from hearing that story, I'm confused. What was he initially testing?
00:06:43
Speaker
I'm not sure what his goal was. I think that very often science, the goal and the discovery are very tenuously related. What possible hypothesis could he have been testing by doing this? I wonder.
00:06:58
Speaker
I think he just wanted to know what happened. I mean... And why use sound on a frog's eye instead of light? Actually, that's an interesting thing. Visual information is, at its core, three-dimensional: a two-dimensional manifold extruded over time. I think I'm getting that right. I'm sure somebody will correct me. I'm lost on that myself. We can explore that later, obviously, but it'll need some unpacking.
00:07:24
Speaker
Yeah, basically what it is is we have all these... You can think of the back of the eye as a theater and you have the projection on the back of the eye, which is what you see. And if you extrude that over time, so just imagine a ball moving... A movie of a ball moving in a circle.
00:07:46
Speaker
It's a white ball moving in a black background. Now imagine that extruding and turning into a spiral behind the movie theater. That's exactly what I mean by three-dimensional visual information. Whereas auditory information is two-dimensional. You have time and you have volume. In some ways it is three-dimensional because you have
00:08:15
Speaker
frequencies, but that's processed. And that's what we're going to be talking about, too, with the visual cortex. So then back to this guy here who was doing experiments on frogs. So he had a frog eye that was hooked up to his speaker. And then he didn't intentionally do this, but didn't he get some kind of a feedback when he paced in front of the eye?
00:08:33
Speaker
I can't remember if he was pacing... I'm pretty sure he was pacing. That's the story that I heard. It might be apocryphal, but then he heard a buzz or something. I don't know what the sound would actually be. OK, now, of course, that sound would be produced electrically. You know, obviously that's how a speaker works. So basically he found out from that information that the frog eye could decipher his movement as he passed in front of it.
00:08:59
Speaker
Yeah, and he noticed that there was a lot of stuff about frog eyes that was specific to the things that frogs actually do. And it's important to note here that eyes in different species do different amounts of processing. In humans, we do almost no processing in the eyes.
00:09:17
Speaker
OK. And then in a frog, they actually do processing in the eye itself. Yeah. So this eye was like somehow disconnected from the brain of the frog. How did he know that it was being processed in the eye and not the brain? Because he connected it to the retinal cell. It was established at this point that the neural signals travel through neural bundles.
00:09:42
Speaker
You know, now that I think about it, so I think about frog eyes; now, full disclosure here, I can only think of Kermit the Frog's eyes, and how he has those two little shapes poking up on top. The reason why is because frogs tend to be prey. If you look at, for example, goats, they have horizontal slits in their eyes. Yeah, goats have some weird eyes.
00:10:02
Speaker
Whereas crocodiles, which are predators to the max, have vertical slits. Vertical slits are better for identifying very accurately where something is, whereas horizontal slits are better for accurately representing
00:10:18
Speaker
whether something is there. I have this sad image in my head right now. I'm thinking of Kermit the Frog on an operating table. I want to get this out of my head. Anyways, there's a lot of important information that we found out from this experiment. So the point is that he was kind of getting in between the eye and the brain by going straight to the cells at the back of the eye and seeing that they were actually processing information.
00:10:44
Speaker
Okay, so then, right, what he found out that we didn't know before was... what? Just that eyes have signals?

Function and Communication of Neurons

00:10:51
Speaker
Well, the eyes have signals: they could process things like stripes, they could process on versus off, they could detect objects at certain distances, and they work together. So much was learned, and it's all in the paper, if you would like to read it, called
00:11:09
Speaker
"What the Frog's Eye Tells the Frog's Brain." So this is interesting. So basically the frog's eye was acting like a small brain. I mean, that's way simplified, but essentially it's doing some processing, removed from the brain itself.
00:11:29
Speaker
Well, why don't we talk about the visual cortex? The visual cortex is divided into layers. And the first one basically detects edges and contrast. That's all it does. So you look at, like, a box. And actually, do this right now: look at something in the room that you're in, or if you're outside, wherever.
00:11:53
Speaker
Keep your eye on the road, but you know, look at the license plate in the car. Yeah. So you're going to see dark and light pieces, but the contrast, you know, where the edges are that's detected by the first layer of the visual cortex. Then you have the second layer of the visual cortex, which takes that information.
00:12:13
Speaker
and processes it even further. So it does things like find patterns, sees the shapes of things, how they're oriented, things like that. So that's layer two. And then after layer two, it gets really fuzzy about what does what we're still not very sure. Fascinating. Okay. And then, and then bringing this back to neural network, sorry, bringing this back to both neurons and neural networks. How did this begin our understanding of neurons?
00:12:42
Speaker
Well, what this did was it told us a few things. For example, I mean, before his experiments, we knew that the frequency at which a cell fires is directly related to the amount of light going into the cell. So we knew that cells fired. We knew that it had a maximum firing rate, which is related to what's called the refractory period. We know that
00:13:11
Speaker
we know that it's possible for neurons together to, I mean, we knew this, we knew this essentially forever ago, but this is how we started learning how this is done, how they work together to create more thought. Because I can tell you how a neuron works, and we're gonna actually go over that, but the question isn't,
00:13:35
Speaker
What does a neuron do? It's what do neurons do and how do they do that? Very, very important distinction. So again, one more time, you said it's the difference between how one neuron works and how many neurons work together in a neural network. And that's what this episode's about.
00:13:54
Speaker
So I figure a good thing to talk about now would be, you know, how neurons actually work, like how an individual neuron works. Yes, okay, so individual neurons, and this is both biological neurons as well as artificial neurons, and then we're going to talk about how the two are different.
00:14:10
Speaker
Yeah, I think we should start with biological neurons. Sure. Now, John, what do you know about how biological neurons work? Well, from what I recall from high school biology or college biology, they have synapses. Well, they have, what do they call them, ganglia? I think, and then synapses, which are fancy terms to
00:14:36
Speaker
be thrown in there. So, for our listeners who are not quite so educated: ganglia, is that right? I don't even know. Ganglia sounds like a fancy word. Ganglia sounds like a handful of noodles, like all stringy. Is that what it is? I'm seeing ganglia; I'm not sure if it's what I thought it was. But anyway, they have synapses
00:14:56
Speaker
And they connect to other synapses from other neurons. The synapse is the space between the little tentacally parts of the neurons. And they send electric signals between each other, which is how brains work. Okay, so real quickly then, so when we talk about a single neuron, essentially, it either sends something or it doesn't. Is that right?
00:15:17
Speaker
Yeah, okay. Because of the way that they send it, I wouldn't say it's either on or off. Because it's on for such a short quantum of time, I'd say the information is in when it's on. Okay. So it's almost like, you know, your neuron fires, and this would have to be a very slow neuron, but like at 230, 235, and 237. So it's not like a light switch. It's more like a pulse. It's like a light switch operated by a toddler.
00:15:45
Speaker
Oh, wow, fascinating, fascinating. So one other thing now, when I was younger, I used to play a game where everybody would get in these big circles. No, no, no, I'm sorry. When I was younger, I used to play a game where there'd be two lines of people and in these two separate lines, everybody would be holding hands. And this was a game where everyone had their eyes closed, except for the person at the front of the lines.
00:16:08
Speaker
And one of the two was given a green light, and whoever was given a green light, they'd squeeze the hands and they'd send a signal all the way down. And I think part of the game was the tension from, you know, you lose if you sent a false signal. Now there's some obvious analogies to this game and to neurons themselves, isn't that right?
00:16:28
Speaker
Um, you're saying that there's an analogy between this game and the way that neurons work. Yeah. Yeah. In other words, like, so one neuron fires and that gives a signal to the next neuron. Kind of like one person can, if you see a green light and you're doing the game correctly, you could squeeze the hand of whoever's next in line, then they'd squeeze the hand of whoever's next in line, et cetera, et cetera.
00:16:45
Speaker
A little bit. I think a really good analogy might be to extend that analogy a little bit. So let's say that I'm a neuron and I have a bunch of other neurons and they're all people in the sense of the analogy and they're all connected to me. Now they keep squeezing my hand and it's annoying me and it keeps squeezing it and squeezing it more and more and more until I get so annoyed that I squeeze somebody else's hand.
00:17:12
Speaker
that's kind of how neurons work. They have a threshold. Okay, okay. So maybe a better analogy could even be like knocking on someone's door until they answer. Yeah, but then there's actually inhibitory neurons. So there might be, let's say, I don't know, somebody has a crush on somebody and one of the neurons has a crush on the other neuron. So when they squeeze their hand, it makes them less likely to get mad and squeeze the other hand.
00:17:38
Speaker
Interesting. Interesting. Okay, cool. So my analogy wasn't too far off. It was close, but not too far off. No, it was very close, especially in the fact that neurons cause other neurons to fire, except when they are sensory neurons and things like heat differences or light causes them to fire.
00:17:57
Speaker
So basically, we could even break this down into fundamental, fundamental information that is to say, you know, as you said earlier, a difference in heat or a difference in light and that that picks up things. Okay. And those are, as you said earlier, that's different types of neurons. Do we know how many types of neurons we have identified?

Simulating Biological Networks

00:18:15
Speaker
I am not sure, but I know it's in the dozens. Dozens. Okay. Okay. Interesting.
00:18:20
Speaker
So neurons are not only located in the brain, is that right? Oh yeah, neurons are nerve cells. Right, so they're through the whole nervous system. Yeah, whole nervous system. And our eyes as we discussed earlier. Yeah, I mean most of them are in the brain and the brain takes something like 10 or 20% of our calories. That's how much we dedicate to our brain and that's how expensive these neurons are.
00:18:45
Speaker
So I have a question. So are neurons basically like, is it like a binary system where it's either zero or one, or can they send more complex information between each other? I'm almost positive that it's either on or off, but the complexity of the information is when it's sent. So for example, if a neuron sounds like,
00:19:13
Speaker
is a different signal than... So then what we just did there is that was a bunch of nerve firing and it was in a different pattern both times? Yeah. Okay. And in the experiment with the frog eye, the firing of the neuron corresponded to more complex things than just on or off.
00:19:35
Speaker
He said that it is hard not to come to the conclusion that it was about the distance at which a frog would try to catch a fly. Okay. Interesting. So essentially, and actually I have it right here on this paper, it says we can describe neurons as being both binary, in a biological system, with a smooth time component, and non-binary, as Jonathan said, in the fact that a signal can be delayed or canceled out.
00:20:03
Speaker
Yeah, and Alan Turing actually, when he talked about whether or not neurons could be simulated using computers, this is one of the things that troubled him: the fact that... well, also, the signal itself. There's no such thing in nature as a jump.
00:20:23
Speaker
Like, okay, at a certain level there is, but normally there's not something just turning on and then turning off. Okay. The neuron, instead of going straight up and then straight back down, will kind of ramp; it's kind of like, if you can imagine splashing in the water, it goes up and then down and then up again.
00:20:43
Speaker
Okay, and then it has a period where it has to rest. Like, if you go back to this hand-squeezing analogy, there's a certain maximum rate you can squeeze hands at. Interesting, because it's all chemical, right? And you need to replenish the chemicals before you can send another signal.
00:20:58
Speaker
Yeah, and there's like over 100 neurotransmitters that have been identified at this point. I do realize that we very, very much glazed over neurotransmitters. But then again, the purpose of this episode, of course, is to discuss artificial neural networks. That's in part why.
00:21:14
Speaker
Yeah, which is interesting because neurotransmitters usually say whether or not something is inhibitory, like dopamine, or excitatory, like norepinephrine. And in artificial neural networks, these are just numbers. So in a certain sense, they are simulated. But we could talk about that in just a moment.
00:21:39
Speaker
Yeah, and actually, the crux of the episode, of course, is how it is that the artificial neurons are different than real neurons. Not only that, but obviously how artificial neural networks are used in current technologies today. Do you think, or do we know? It sounds to me, I mean, I'm assuming that natural neural networks are much more complex, can do much more complex things, and can learn a lot faster than any artificial network that we have. Oh, by orders of magnitude, yeah.
00:22:09
Speaker
Would it be these chemicals, and the differences between them, that make them more complex than just, like, a binary system with numbers? That's the thing: we're not sure. That is an amazing question, John. I hope you don't underestimate how awesome that question you asked is, because essentially what you're asking about is a chemical, or material, component. And ordinarily I would say that a digital component can always outperform
00:22:35
Speaker
a material component. But of course here we're asking: can we ever have an entirely artificial, simulated neural network, written in Python or another programming language, outperform one? So is it safe to say that we are not at a stage where we can properly say whether it can or it can't? Well, one thing we look at is metrics: how fast, for example, in the human brain can things turn on and off? We can look at things like seizures to see that kind of information. Yeah,
00:23:05
Speaker
in a computer, for example, there's these things in the brain called cortical columns. If I recall correctly, there's about 80,000 of them, and it would take a supercomputer a few hours to simulate what one cortical column does in a few seconds. At least a while ago. So just in terms of simulating what neurons do,
00:23:33
Speaker
computers are nowhere close. But there's also the question, what do we actually need to simulate about neurons? Are neurotransmitters actually important to simulate? Or are the weights, which we're going to talk about, fine. For example, when we model a ball that's bouncing,
00:23:51
Speaker
We don't have to care about the color of the ball if we want to know the time that it takes to fall. That's a good point. Yeah, yeah. You know, I do want to ask you one thing really quickly. So oftentimes a major theme right now with artificial intelligence and the advent of Watson is, you know, we often talk about when computers and when artificial intelligence will surpass us. And the way you just described it, it sounded like, if I'm not mistaken, it's still a very, very long ways off.

Progression and Potential of AI

00:24:14
Speaker
but is it still an inevitability? I believe that it is and it's not as far off as you might think, especially if computers keep doubling every 18 months like they have for the past 40 years. As you said, despite the fact that even supercomputers have a really hard time simulating even one, and I'm sorry, what was the name of it again?
00:24:37
Speaker
A cortical column, I think. A cortical column, even though a supercomputer right now is not very efficient at modeling a cortical column, in the future that very much may not be the case.
00:24:47
Speaker
Yeah, and we don't have dedicated, well, we have some and actually some of the original dedicated hardware for simulating neurons goes back to the 50s and 40s. But we, because of the different configurations that we have with artificial neurons, we don't have many like neuron chips, for example, yet, but we will.
00:25:08
Speaker
I just want to ask you, so with respect to the differences between an artificial neuron and a biological neuron, I understand that an artificial neuron can have a smooth, basically the signal is smooth, but it has a stepwise component. Can you explain that a little bit?
00:25:25
Speaker
Uh, sure. So basically when you simulate neural networks, you simulate them in steps. So you have time zero, time one, which might be, like, you know, three quarters of a second, time two, which might be one and a half seconds; I'm just making up seconds. So you simulate it in steps, if you're doing that kind of neural network. There's also the kind of neural network that doesn't use steps at all, that just assumes instantaneous input and output, which is the kind that identifies differences between cats and dogs, for example, most of the time.
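To make the step idea concrete, here is a minimal Python sketch (our own illustration, with made-up numbers, not anything from the show) of simulating a single neuron at discrete time steps:

```python
# A single simulated neuron updated at discrete time steps. All numbers
# (decay, threshold, inputs) are made up purely for illustration.
def simulate_steps(inputs, threshold=1.0, decay=0.5):
    """Accumulate input each step; emit 1 ("fire") when the level crosses the threshold."""
    level = 0.0
    outputs = []
    for x in inputs:               # one entry per time step
        level = level * decay + x  # old activity leaks away, new input arrives
        if level >= threshold:
            outputs.append(1)      # the neuron fires this step
            level = 0.0            # reset, a crude stand-in for the refractory period
        else:
            outputs.append(0)
    return outputs

print(simulate_steps([0.4, 0.4, 0.4, 0.0, 0.9, 0.9]))  # [0, 0, 0, 0, 1, 0]
```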
00:25:57
Speaker
But the way that an artificial neuron works is let's say that you two are artificial neurons and you're giving me money. Okay, so John and I are both artificial neurons and we are giving you money. I'm sorry. How is this analogy set up? Why am I giving you money? Give me the money, I'll tell you.
00:26:16
Speaker
Okay. All right. All right. All right. So, for the sake of the argument, John and I are, we are artificial neurons. John, can you do an impression of a neuron? It'll help me enjoy this process. I am an artificial neuron. That is very convincing. Okay. It really was. But yeah. So you guys are giving me money, but then somebody whispers into your ear a number; it could be positive or negative. So now you're either taking away money or giving me money.
00:26:41
Speaker
Oh, wait a minute. Okay. Okay. Sorry. I was trying to put two and two together. Okay. So let's say, Gabriel, let's say you were going to give me $2, but then somebody whispers in your ear "three." Okay. So now you have to give me $6. And John, you were going to give me $4, but somebody whispered in your ear "negative one."
00:27:01
Speaker
So now I'm going to take four dollars from you. No... okay. I like this game. Okay. Okay. Okay. Got it. Got it. So that's what you refer to as the weight? Yes. Yes, that's the weight. Okay, so the weight: now, is that a characteristic of an artificial neural network that's not really biological? As far as we know, it's similar to biological neural networks, because there can be more or fewer synapses, based on neurotransmitters, right? Yeah, like more or less neurotransmitters.
00:27:26
Speaker
Like hunger, for instance, you know: it could make certain signals go off that don't normally. Right. Well, there's a principle in neural networks, in neural theory, called Hebbian theory, that says that neurons that fire together wire together. Mm-hmm. So if, let's say, neuron A and neuron B fire simultaneously, then they're going to get stronger connections.
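As a rough illustration of that "fire together, wire together" rule, here is a tiny Python sketch (ours, with an arbitrary learning rate) in which a connection is strengthened only when both neurons are active at the same moment:

```python
# "Neurons that fire together wire together": the weight between two units grows
# a little every time both are active at once. The learning rate is arbitrary.
def hebbian_update(weight, pre_active, post_active, learning_rate=0.1):
    """Strengthen the connection only when both neurons fire together."""
    if pre_active and post_active:
        weight += learning_rate
    return weight

w = 0.2
for pre, post in [(1, 1), (1, 0), (1, 1)]:  # three moments in time
    w = hebbian_update(w, pre, post)
print(round(w, 2))  # 0.4, strengthened on the two co-firing moments only
```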
00:27:48
Speaker
I have a feeling that that has probably been brought up in a lot of self-help books. Is that right? You know, the whole 21 days for a habit, you know. Right. And also in addiction stuff, they talk about, you know, they use the analogy of water flowing will eventually create a canyon. And they use that as an example of if you keep doing certain behaviors and patterns, it creates a sort of canyon in your brain where signals will continue to travel through a certain
00:28:16
Speaker
path through the neurons in your brain. So for our listeners, the next time you have to give a motivational speech, you can... what's the principle's name? Hebbian theory. Hebbian theory. And say neurons that fire together... or sorry, is it wire together? Fire together? Wire together. Yeah. Birds of a feather flock together. A penny saved is a penny earned. You know, do some motivational speech about teamwork or something. I don't know, you guys go figure it out.
00:28:42
Speaker
So, okay. So you have all this money. So now I have all this money. It's in a pot. So just John, John took $4 from me. Yes. Um, and then you gave me $6. So on average I earned $2. Okay. And I win.
00:28:57
Speaker
Well, yes, but let's continue with the analogy. So, one question: in your analogy, the people that whispered to us, are they other neurons? No, these are the weights, and they're something intrinsic to you. You can imagine yourself as someone with, say, schizoaffective disorder, so somebody's always whispering "negative four" to me.
00:29:22
Speaker
Yeah, and the thing is the number that you said is based on something that you got and we'll go into that in a second. So I got $2 on average and then somebody whispers to me another number like three. So now I add two and three together and I have five. And now I put this through a sigmoid function. Wait, are you adding? Because we're multiplying.
00:29:45
Speaker
Yeah, I add. You multiply after, but add before, kind of. These are complicated things. So each pass here has a different function. OK, go ahead. So anyway, now I have five. And now I put that through what's called a sigmoid function. A function basically takes a number, spits out another number. A sigmoid function, if you put in negative infinity,
00:30:14
Speaker
It gives you... there are two kinds; we'll talk about the first kind. It gives you zero. If you put in zero, it gives you one half. And if you put in positive infinity, it gives you one. So it's like an S-shaped curve.

Training Neural Networks

00:30:28
Speaker
Okay, got it. So if you're looking at, so you're looking down at the sigmoid function, the x goes from negative infinity to positive infinity, you've got a fancy little S going on, like a hill. Okay. Yeah, it looks like a hill. I like pictures and I like analogies. So that's good. I like hills.
00:30:41
Speaker
Yeah, so now I put this through the sigmoid function, and now whoever I'm giving money to... let's say the total is five, and let's say the output is... I mean, realistically, you guys wouldn't have given me more than one dollar unless your multiplier was more than one, but I neglected to think about that.
00:31:07
Speaker
So now let's say I have, like, 80 cents, and then my multiplier is four: I'm giving the next person three dollars and twenty cents, and so on. And that's how artificial neural networks work. Hmm. Okay, so the weight: what is the analogy, what is the biological side of that concept? What would the biological version of a weight be? How does it translate?
00:31:31
Speaker
Yeah, a neurotransmitter: how strongly a neuron connects, how much more likely it is to make the next one fire, and whether or not it's going to make it more or less likely to fire. So, like pleasure. Like a release of the pleasure-creating chemicals after you win something or something like that. That helps to create those habits, I think. Yeah, exactly. Okay.
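Putting the money analogy together, here is a small Python sketch (our own illustration, using the made-up dollar amounts from the conversation) of one artificial neuron: multiply each input by its weight, add the neuron's own bias, and squash the total with a sigmoid:

```python
import math

# Each incoming "dollar amount" is multiplied by its whispered weight, the neuron's
# own whispered number (the bias) is added, and the total goes through a sigmoid
# before being passed along. The numbers match the conversation above.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))  # squashes any number into (0, 1)

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Gabriel "gives" 2 with a whispered 3, John "gives" 4 with a whispered -1,
# and the receiving neuron's own whispered bias is 3, so the total is 5:
output = neuron(inputs=[2, 4], weights=[3, -1], bias=3)
print(round(output, 3))  # about 0.993, the amount passed on to the next neuron
```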
00:31:54
Speaker
For you, the listeners of Breaking Math Podcast, Audible is offering a free audiobook download with a free 30-day trial to give you the opportunity to check out their service. Gabriel, you got any recommendations? Absolutely. Today, I'd like to recommend the New Scientist Instant Expert book, How Your Brain Works. This book is put together by editor-in-chief Alison George, who is an instant expert editor for the New Scientist magazine.
00:32:18
Speaker
Also, the editor, Carolyn Williams, who is a UK-based science journalist and editor, who is also the author of Override. This book is a book that covers many topics about how the brain works, including memory, intelligence, emotion, sensation and perception, consciousness itself, also ages and sexes of the brain, and sleep, and many, many more. It is a phenomenal and fascinating read.
00:32:45
Speaker
And if you want to know what happens when your neurons don't work as well as they maybe should, check out Driven to Distraction, Recognizing and Coping with Attention Deficit Disorder. This book helped me deal with my attention deficit disorder, and maybe it can help you too, and either way it's a fascinating read.
00:33:02
Speaker
The chapters are organized in such a way that it doesn't matter what order that you read them in, so that if you have ADD, you could still get through the book no matter how bad it is. And to download your free audiobook today, go to audibletrial.com slash breakingmath. Again, that's audibletrial.com slash breakingmath for your free audiobook. Now, back to the show.
00:33:30
Speaker
Very good. So then for this next segment, I believe we were talking about just general principles of neural networks in general, right? Yeah. Given the description that we told you about neural networks, you might wonder, okay, well, what can you actually do with a neural network? Like, what kind of function could it simulate? Okay.
00:33:48
Speaker
to our listeners, we're gonna sell you. You are gonna be so impressed by neural networks that you're gonna be wondering, you're gonna be breaking out your credit cards saying, where can I buy one, right, right? That's our goal, right?
00:34:01
Speaker
Yes. You could buy neural networks from, I don't know. You're going to sell neural networks? No, I know. No, no, no. You're going to be so impressed with neural networks. This is going to be the topic you'll bring up at your next date. Watch us. Okay. Just, just, just watch. They're really, okay. So without me selling it up, I will let you take the floor, Jonathan.
00:34:20
Speaker
Well, the way that we wire these neural networks together is you can think of them as in layers. So you have your first layer and that's like, I don't know, like, let's say that's like a hundred, a hundred pins on one of those, uh, one of those boards that they have during, you know, the police shows when they have the pins on the board. Oh, yeah. Yeah. Evidence boards. Yeah. Yeah. All the pictures. You've got your hypothesis and your theory, right?
00:34:44
Speaker
Yeah. So imagine a hundred pins. Okay. Now, a fully connected layer means that, let's say you have a second row of pins, every pin in the first row is connected to every pin in the second. And then you have a third row. If you have two hidden layers... actually, no, if you have one hidden layer, then you have three rows, because then you have an output layer, a middle layer, and the first layer, and so on. So that's what a hidden layer means. Given just three layers
00:35:13
Speaker
and enough neurons, the more neurons the better, you could approximate any function that exists. Say that one more time, because I think that what you just said is the crux of this whole segment, right? Yeah: any function. Using a neural network that just connects forward, just connects forward (which is not even to talk about neural networks that connect back to themselves), you can simulate any function possible. Wow. And get it exactly right?
00:35:43
Speaker
You get it approximated to any level of... arbitrary precision, right? Yeah, so the more neurons we add, the better it'll approximate it. Any function at all. You can do that with just three layers. Three layers right there. Top layer, middle layer, bottom layer.
00:35:59
Speaker
Yeah, and not only that, is if you connect, if the neurons connect back to themselves again, you could simulate any program that you could possibly write with neurons. Okay, interesting. So, should we talk math-y here real quick? There's a nice sentence here I'm looking at that has a lot of math-y terms.
00:36:15
Speaker
Sure, let's talk about some math. Given a function f, a tolerance d, and a neural net N defined using a sigmoid function s, where A is a matrix of weights and b is a vector of biases (basically, that means that A times the input, plus b, put through s, is the output),
00:36:38
Speaker
and if you have two of these layers, so you have A1 and A2, b1 and b2, then f of x minus N of x, where N of x is the output of the neural network: the absolute value of that is less than d, with only one hidden layer. And Gabriel, was that really necessary?
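For anyone who wants to see that spoken sentence written out, here is roughly what it says in symbols (a paraphrase of the claim as stated on the show, not a formal statement of the universal approximation theorem):

```latex
% One hidden layer, sigmoid s, weight matrices A_1, A_2 and bias vectors b_1, b_2:
N(x) = A_2 \, s(A_1 x + b_1) + b_2,
\qquad \text{and for any } f \text{ and tolerance } d:\quad |f(x) - N(x)| < d .
```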
00:36:53
Speaker
You know what I want to add to that? I want to add... what's the sound? That "woo wee." Yes. That is some serious math right there. You know, I almost feel like if our listeners are listening in their car, they probably pulled over just to write all that down, right? But here's the cool thing: you don't even need a sigmoid function. Some neural networks use what's called a ReLU function, which means that if it's less than zero, then it's zero, and if it's greater than zero, it stays the same. That's kind of like the math version of kind of eyeballing it, right?
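In code, the ReLU just described is about as simple as it sounds; here is a one-function Python sketch (our own illustration):

```python
# The ReLU activation: below zero becomes zero, above zero stays the same.
def relu(x):
    return x if x > 0 else 0.0

print([relu(v) for v in (-2.0, -0.5, 0.0, 1.5)])  # [0.0, 0.0, 0.0, 1.5]
```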
00:37:21
Speaker
Kind of, but it works for identifying things. Yeah, I find in life eyeballing it kind of works more than you think it would. You kind of eyeball it, you know? Okay, so now what about things like, you know, with all this transmission of signals, I'm sure a lot of elements of signal processing go into this, including Fourier analysis.
00:37:43
Speaker
Oh, yeah. Using Fourier analysis, you can actually prove the fact that any one layer neural network can simulate any function. And the proof is actually a little bit too difficult for us to go into right now. But if you're mathematically inclined, I highly suggest that you look it up. Nice.
00:38:09
Speaker
Now, and again, I know with Fourier analysis, we often talk about, especially with signal processing... and I don't mind bringing it up because this is a really, really cool topic.

Mathematical Foundations of Neural Networks

00:38:17
Speaker
So there's the Fourier transform, which will take something that's, like, in the time domain, or something that happens over time, and then it represents it happening in a frequency domain. So what that means is that
00:38:30
Speaker
It's like changing what you hear into the notes of the music, kind of. Interesting, yeah. And then also with respect to Fourier analysis, I believe that also includes just the Fourier series too, right?
00:38:50
Speaker
Yeah, it is used a lot in neurology, as is the Dirac delta function, which is a function that's zero everywhere except at zero, where it's infinite, and it has an area of one. Yeah, it's like the rules to some weird game that you're making up. But that's part of what math is, right? You know, basically making up the rules as you go, kind of, to explain things. And nowhere is this more typified than in neurology. I mean, we've had
00:39:19
Speaker
completely different versions of how neurons work, different models; they're called perceptrons. Yeah, so there's so many different ways. That's a fun buzzword, perceptron. And actually, the first time I heard it was during the research for this episode. So how do you explain to our audience what a perceptron is? Sounds like a Marvel movie.
00:39:41
Speaker
They've got vision and perceptron. Perceptron is just a fake neuron. Yeah, as used in a neural network, of course. Yeah. Okay, nice, nice. So then why it's cool? Let's talk about why it's cool, shall we?
00:39:57
Speaker
So let's talk about how these things are actually trained. So you have a good neural network. What good is it if it has random weights? If it's random weights, you're gonna have a mess, I think. If they're randomly assigned, I think that you want, am I speaking the truth?
00:40:15
Speaker
There's actually a story about randomly wired neural networks and it's from the jargon file, which we talked about on which episode was that? Oh, jargon file. That was a recent one. I believe we brought that up a few times. The culture of hacking episode 11. Yeah. And it goes like this.
00:40:31
Speaker
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?" asked Minsky. "I am training a randomly wired neural net to play tic-tac-toe," Sussman replied. "Why is the net wired randomly?" asked Minsky. "I do not want it to have any preconceptions of how to play," Sussman said. Minsky then shut his eyes. "Why do you close your eyes?" Sussman asked his teacher. "So that the room will be empty." At that moment, Sussman was enlightened.
00:41:06
Speaker
So yeah, obviously that's the story about like
00:41:11
Speaker
A neural network, even if it has random weights, is doing something. It might be doing the wrong thing, but it's doing something. Well, dude, it's about not having prejudgments from society. It's like a mathematical Buddha right there. Or a mathematical hipster. It's like a sound of one hand clapping.
00:41:32
Speaker
Actually, there are things called AI koans, which are part of the Jargon File. Yeah, I love the Jargon File. It's deep, man. It's deep. So the question is, how do we train these neural networks? So the thing is, there's three types
00:41:52
Speaker
of learning in machine learning. There's supervised where we know the question and the answers and we teach it a bunch of questions and answers. There's unsupervised where we just have a bunch of examples and we ask it to please try to find a solution. And then there's halfway between where you could figure out what that means for yourself. It's halfway between knowing and not knowing. It's where you have some elements that are unknown.
00:42:20
Speaker
Interesting. But we're going to talk about the case where you know the question and answers. So let's say we're trying to teach a neural network how to differentiate between cats and dogs. I showed a picture of a cat. It does its thing and it says, dog. And then we say, nope, it's a cat. Now what happens?
00:42:38
Speaker
Oh, now let me ask you real quick, just to specify: when you say "do its thing," you have a picture of a dog and a cat, and the neural network has some awareness of this. What information does the picture have? Like, has the network taken in the image and qualified it somehow? You know what I mean? Like, by shape or things like that?
00:42:56
Speaker
Oh, well, the layers of the neurons actually do do that. Like with just one layer, you could identify something like edges. Then with two layers, you could identify if you're doing video, I don't know, things like acceleration and so on. Each layer adds
00:43:16
Speaker
a ton of complexity in practice. Very good. Yeah, that's what I was trying to get. So again, when you give an artificial neural network, the picture of the cat and the dog, it has an imprint of all the information. And basically, it's categorizing the information as you say yes and as you say no. Yes? Yeah. And actually, we're going to play for you real quick a
00:43:40
Speaker
a version of Eine Kleine Nachtmusik that was run through a visual neural network after being transposed from sound into a picture.
00:44:21
Speaker
So what is that exactly? That's from Google's DeepDream. And that's kind of, more or less, how a neural network perceives what's going into it. So that's what the neural network is hearing when it's looking at that piece of music. Yeah, that's what it might as well be hearing. Wow. Okay. Trippy. It's very trippy. But so let's say we have this picture of the cat, and then we say, oh, you're wrong, it's a dog.
00:44:50
Speaker
What we do is we go back, and so like we had the people connected together, everybody knows exactly how wrong they were. And everybody changes how—this is a gross oversimplification, but everybody changes.
00:45:06
Speaker
their weights based on how wrong they were. Okay, and you say based on how wrong. So, how wrong is only established after several rounds, right? Because, you know, in one round you just know that that round you were wrong. Well, actually, each round you know exactly how wrong you were, because, for example, in the cat-versus-dog thing, you might use what's called a binary logistic regression function, which means that you basically say, nope, you're exactly this wrong,
00:45:35
Speaker
because it gives you a probability: it's, like, 60% positive it's a cat. So that means that if it's actually a cat, then it's 40% wrong. And then you take the derivative of that, if you're mathematically inclined, and go back using the chain rule. Interesting. Okay. And then you change the weights. That's how you do it, and this was discovered in 1975. Before that, you kind of had to evolve random neural networks.
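Here is a toy Python sketch (ours, not the hosts' code) of that "know how wrong you were, then go back with the chain rule" step for a single sigmoid neuron answering "cat or not": the derivative of the logistic loss works out to prediction minus label, and each weight is nudged by that error:

```python
import math

# One sigmoid neuron trained on "cat or not". For the logistic loss, the chain rule
# gives a gradient of (prediction - label), which is exactly the "how wrong was I"
# number pushed back to adjust each weight. All numbers are made up.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(weights, bias, inputs, label, learning_rate=0.1):
    prediction = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    error = prediction - label  # e.g. 0.6 - 1.0 = -0.4 when it really is a cat
    weights = [w - learning_rate * error * x for w, x in zip(weights, inputs)]
    bias = bias - learning_rate * error
    return weights, bias, prediction

w, b = [0.0, 0.0], 0.0
for _ in range(100):  # show the same "cat" example over and over
    w, b, p = train_step(w, b, inputs=[1.0, 0.5], label=1.0)
print(round(p, 3))  # the prediction creeps toward 1.0 as the weights adjust
```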
00:46:06
Speaker
So this is how a network learns or gets better and better at doing its job. The more examples it sees, the more times it gets it wrong, the more it tweaks what it's doing to get a better answer, to get closer to the correct answer. Exactly. And for example, in my handwriting program, I used the MNIST data set, which had 60,000 digits. So you need a lot of examples.
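As one concrete way to set up that kind of experiment, here is a minimal sketch of training a small fully connected network on the MNIST digits using the Keras library; this is our own illustration, not the handwriting program Jonathan describes, and the layer sizes and epoch count are arbitrary:

```python
# Train a small fully connected network on the 60,000 labeled MNIST digits.
# Layer sizes and epochs are arbitrary choices, purely for illustration.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to 0..1

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 784 input "pins"
    tf.keras.layers.Dense(128, activation="relu"),    # one hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))  # loss and accuracy on unseen digits
```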
00:46:37
Speaker
And I can just see how telling the difference between a cat and a dog, it already sounds really complicated, but that's actually a pretty simple thing. Like, and then you could take it to, I'm going to show you a picture of an animal and you tell me whether it's a cat or not, which would become, it seems to me, much more complicated.
00:46:57
Speaker
Yes, but what's weird about that is that if you have a network that's already trained to recognize the difference between cats and dogs, you can actually take that network and use it to just tweak it a little bit to let it tell the difference between cats and ants or something like that.
00:47:20
Speaker
If it gets really good at telling what a cat looks like, then you could broaden it out and it will still be good at being able to tell what a cat looks like. Somewhat. There's a lot of research in this area. Complicated stuff. So how hard would it be to be able to?
00:47:42
Speaker
How hard would it be to have a program where you just show it a picture of an animal and ask it to tell you what kind of animal that is? Is that even, like, a reasonable thing to ask at this point? At this point... well, that's one of the bottlenecks right now: we don't have enough labeled data, because basically you have all this data generated, basically, by grad students
00:48:08
Speaker
that spend a bunch of... Actually, sometimes people are clever and use things like YouTube comments to try to identify moments in video and things like that. But that's the thing, even, is that if somebody's holding your hand, it's a lot more powerful and you learn faster, but you have to have somebody holding your hand.
00:48:33
Speaker
Right. So is that what those CAPTCHA, those Google CAPTCHA things are when they ask you click on the sign, click on the storefront, you know what I mean, before they let you get to the website?

AI Learning and Image Recognition

00:48:45
Speaker
Are they gathering examples of what a random image, what's in a random image? Actually, that's what they do half the time. Half the time,
00:48:54
Speaker
they are actually genuinely asking, and half the time they already know. Which is very clever on the part of Google. Especially when they do, like... you could actually mess with Google if you wanted to. I wouldn't suggest doing it, because I don't like delaying the progress of humanity. But if you
00:49:16
Speaker
go on to... if it asks you for two random words, like, a lot of the time you can just type in the first word that it asks for, and then a random word for the second one, and it'll let you in anyway. So I think that's just really a great idea. So they basically took something that's kind of a pointless thing, which is just like a security thing, but they actually made it do something useful for society by helping to grow computer learning.
00:49:43
Speaker
Yeah, and neural networks can do really complicated things now. Like, for example, in a picture of breakfast, they could identify slices of banana. They can identify where the table is, where the forks are, draw boxes around all of it, and then describe the image as a scene of breakfast. Like, they could do all of that already.
00:50:06
Speaker
And that's what we would expect: a walking robot helper around the house would have to be doing that constantly, everywhere it was looking. Yeah, and actually there's an area of research right now where people tidy up rooms and robots watch them and then try to copy what they do.
00:50:29
Speaker
So to start out with, with artificial intelligence, we basically have to teach them everything. And like you said, hold their hands. But eventually it seems like it would get to the point where like, like a little child, they would be able to, you know, start taking what they, what we had taught them and then go from there. Yeah. Which is why I don't think that the first.
00:50:49
Speaker
artificial intelligent, artificially intelligent machine will, it probably won't be that big of a surprise because we probably will spend 20 years teaching it just like a human.
00:50:59
Speaker
how to be a human. That's fascinating. Unless it already exists. Unless it's already, you know, what MySpace evolved into. I'm just kidding. Hiding out on the internet. No, that's what I mean, because, like, I always talk about how the internet, and especially the deep web and all the unaccounted-for junk that's still floating around there, is basically an ideal digital primordial ooze. You know what I mean? Yeah. Yeah. Yeah. Great sci-fi novel right there. Great sci-fi novel. You heard it here first.
00:51:24
Speaker
I like it, I like it. So I think in which we sparked these topics, we could go into any one of these topics very, very deeply. However, I was curious about a few more things that are unique to say artificial neural networks that you may not see in biological neural networks, things like backpropagation and convolution. Can we talk about those for a little bit?
00:51:43
Speaker
Sure convolution is used in what's called convolutional neural networks. That's basically where the weights are shared and they're shared kind of in a grid. So when you say shared, the thing I hear obviously is you start with two, at least two or rather multiple neural networks, right?
00:52:04
Speaker
Well, yeah, yeah, basically you have a layer, but a lot of the layers, like when that person's whispering the number into everybody's ear, it's one person whispering it into like every fifth ear or something like that. That's more or less what the convolutional neural network is. And that's actually based on the visual cortex, especially the first layer of it.
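Here is a tiny Python sketch (ours) of that weight sharing: one small set of weights, a simple edge-detecting kernel, is slid across the whole input instead of every position getting its own weights:

```python
# The same three weights (a simple edge-detecting kernel) are applied at every
# position of the input, instead of each position having its own weights.
def convolve_1d(signal, kernel):
    half = len(kernel) // 2
    out = []
    for i in range(half, len(signal) - half):
        window = signal[i - half : i + half + 1]
        out.append(sum(w * x for w, x in zip(kernel, window)))
    return out

brightness = [0, 0, 0, 1, 1, 1]  # a hard edge in the middle of the "image"
edge_kernel = [-1, 0, 1]         # shared weights, used everywhere
print(convolve_1d(brightness, edge_kernel))  # [0, 1, 1, 0], spikes at the edge
```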
00:52:26
Speaker
Interesting. So then help me to understand with respect to information, how is a convolution neural network different or better or actually rather just different than traditional neural networks? You mean biological ones? What can they do that the traditional ones cannot do?
00:52:47
Speaker
Well, because they both can simulate any function, and they're both, technically, Turing complete, neural networks can do anything neurons can. So it's not... Oh, great. I'm specifically, in this case, referring to a convolutional one. I'm trying to basically say, functionally speaking, when would you want to use a convolutional neural network? Oh, that's when you have, for example...
00:53:10
Speaker
If I want to detect edges, I don't care where the edges are. So that's like, I wouldn't want a different type of weight in the corner versus the center. That's why we use a convolutional neural network. I would use a fully connected neural network when I want to get very like a specific kind of intrinsic data about where
00:53:32
Speaker
things are, what things are, how they are, etc. Wow. Okay. Interesting. So I'm seeing something here about convolutional networks: they're feed-forward networks. What does that mean? Uh, what that means is, um, going back to the
00:53:48
Speaker
police board: they go forward. They don't feed back into themselves. They don't loop back around to the start. Okay, I got you. That's basically it, although you can make them loop back around by adding a loopy component to them, but that's hard to do. Very interesting.
00:54:07
Speaker
And all these are trained in essentially the same way. There are, of course, so many variations. There's things called LSTM units, Long Short-Term Memory, which are related to recurrent neural networks. I mean, this is a field that has become extremely rich in the last 10 years.
00:54:33
Speaker
We're just talking about convolutional neural networks. What about recurrent neural networks?
00:54:41
Speaker
Now what a recurrent neural network is basically. Can I guess? Sure. It recurs? Sorry. It connects back to itself. So it's something that uses its own output as its input, which is similar to the prefrontal cortex. Because most of the connections in the prefrontal cortex are to other cells in the prefrontal cortex.
00:55:10
Speaker
There's so many loops in that part of the brain. Interesting. What good is that? What good is that? It can simulate any program. For example, if I tell you to count every odd number,
00:55:31
Speaker
that's a program and you could do that using your prefrontal cortex. I could tell you to count every, to tell me every odd number that doesn't start with a T. And that's another thing in the prefrontal cortex. It's just a program. Sorry, every odd number that does not begin with a T. Yeah. Okay. Oh, got it. Got it. Three. Yeah. Seven. Nine.
00:55:55
Speaker
Yeah, I get it. So our brain does it pretty quickly. Interesting. So a recurrent one... that feeds into itself. Kind of like that Irish snake thing. It's like eating its own tail. An ouroboros. Or you're putting your foot in your mouth. I don't know. Okay. Now, I imagine these recurrent neural networks can do some pretty amazing feats. You know, like they can simulate handwriting.
00:56:18
Speaker
Yeah, like if you give it enough examples of different handwriting, you can give it an example of your own handwriting and just give it text and it'll write something that looks just like you wrote it.
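To show the loop itself, here is a rough Python sketch (our own, with single made-up weights instead of matrices) of the recurrence at the heart of a recurrent neural network, where each step's output feeds back in as part of the next step's input:

```python
import math

# Each step's hidden state feeds back in as part of the next step's input,
# so the network carries a little memory forward. Weights are single made-up
# numbers here rather than matrices, purely to show the loop.
def rnn_step(x, h, w_input=0.5, w_recurrent=0.9, bias=0.0):
    return math.tanh(w_input * x + w_recurrent * h + bias)

hidden = 0.0
for x in [1.0, 0.0, 0.0, 0.0]:  # one burst of input, then silence
    hidden = rnn_step(x, hidden)
    print(round(hidden, 3))     # the state fades slowly: a trace of the burst
```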
00:56:30
Speaker
Wow, that's amazing. So, you know, for like forgery, then. Other examples, and full disclosure, I'm just reading off of a fancy list that may have been prepared by a guy whose name rhymes with Bonathan: this list says it can do amazing things, including, you know, data compression, arbitrary image recognition (now, we talked about that earlier), also short-term memory mechanisms, and creating far-reaching temporal conclusions. Yes, so for example, like...
00:56:58
Speaker
Let's say I have something within a parenthetical, and of course the thing within the parenthetical, if you saw it... if you didn't know about it, it would be cool, but if you didn't... okay, so we're talking about a hypothetical parenthetical. As you can tell by my extremely convoluted sentences back there, I was trying to do a very convoluted parenthetical, and as you can see, it gets very complicated very quickly.
00:57:23
Speaker
Yep. Absolutely. Absolutely. It may take a neural network to understand what we're talking about here. But that's what they can do. If anybody's curious, there's a structure called XML where basically the parentheses have names, and a recurrent neural network, without being told how to do it, can learn XML.
00:57:43
Speaker
Hmm, wow. Yeah, that's actually quite amazing. Quite amazing. In fact, it's kind of scary, honestly. Now, isn't it true that actually every single biological neural network is in fact recurrent? Every one that we know of, except for little things, like when we were talking about
00:58:01
Speaker
the prefrontal cortex, and just little segments of it. Now, that's fascinating. I'm sorry, I totally cut you off there. I was about to say: so even insect brains, those also have an element of being recurrent in them as well?
00:58:15
Speaker
Yeah, and because insects need so much automation, their neurons are much more complicated than ours. It's insane. That's crazy. So their neurons do more thinking by themselves, in other words. Pretty much. So we're more general. I mean, we're born with almost no information. So really, what we're all wondering here... I'm sorry to interrupt. It's like instinct instead of having to learn everything.
00:58:39
Speaker
Yeah, basically. And humans have to learn. We don't have much instinct. So how long will it be until we have a neural network that is asking, why are we even here? Why do we exist? What's the meaning of all this?
00:58:55
Speaker
Well, it depends. Did you teach it to say that, or did it decide to say that on its own? And how can you tell the difference? Well, we decided to say that on our own. At least I'd like to think so. Let's just say the latter. Let's just say it's on its own. Or it suddenly turns around and says, why am I doing this? Like, you tell it to do something and it says, I don't know, I didn't ask to be made. Okay? What's the point of this thing that you want me to do?
00:59:22
Speaker
It's really hard to say, because we know so little. I mean, we've just started to figure out that if you give neural networks attention, like humans have attention, they can have useful memories. That's something that we just figured out in the last two or three years. Okay. So basically, we are not yet at a point where they'll start asking self-reflective existential questions.
00:59:49
Speaker
No, but the amount of progress that is being made every single year with neural networks is astounding. In the last 10 years, we've gone from basically knowing that neural networks can do some simple tasks to having neural networks that can do things that only humans could do previously. That essentially is the history of artificial intelligence, but it's a history that is starting to rapidly catch up with
01:00:17
Speaker
the reality of what it means to be human.

Future Implications of AI

01:00:19
Speaker
And these existential questions are going to be something that we're going to have to wrestle with. And a lot of the functions that neural networks are being used for, and that we're going to be trying to use them for, range from data compression
01:00:32
Speaker
to image recognition, to sentiment analysis, where you look at a sentence and you say how sad it is or how happy it is, something like that. See, as a writer, that really disturbs me because I started getting into writing because I figured in 10 years, this is the one job that robots will not be able to do. And you're telling me that they're teaching robots how to learn to be good writers. Have you read the short story by Roald Dahl? Which one?
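(A minimal sketch of the sentiment-analysis idea just described, using a hypothetical handful of word weights as stand-ins for what a trained network would learn: the sentence gets a score near 1 if it reads as happy and near 0 if it reads as sad.)

```python
import math

# Hypothetical word weights; a real model would learn thousands of these.
word_weights = {"happy": 2.0, "great": 1.5, "sad": -2.0, "terrible": -1.8}

def sentiment(sentence):
    score = sum(word_weights.get(word, 0.0) for word in sentence.lower().split())
    return 1.0 / (1.0 + math.exp(-score))   # squash to 0 (sad) .. 1 (happy)

print(sentiment("what a great and happy day"))         # close to 1
print(sentiment("that was a sad and terrible movie"))  # close to 0
```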
01:00:59
Speaker
The one where he gets put out of a job by a computer? No, I have not read that. Oh. But it's a great fear of mine. I know that they already have programs that can write simple sports recaps and also financial stories of just, you know, "the stocks did this today." They have computer programs that can just spit those out.
01:01:21
Speaker
But see, then the thing is, we can interface with these machines, because we are learning how to interface with them almost as quickly as we're learning how to develop them. So can we use them? Like, you as a writer, could you
01:01:37
Speaker
have a neural network that works along with you, to give you story ideas, to get you out of creative slumps? What's the difference between that and using this bizarre technology that allows you to replicate speech using squiggles on paper?
01:01:53
Speaker
No, you're right. And just having Google, being able to look up any fact at any moment... people from 50 years ago would look at that and say, oh, half of your brain is a computer already.
01:02:09
Speaker
And I do remember things by Google queries sometimes. Like, sometimes I don't know a fact, but I know how to get to the fact. So part of my brain is Google already. Yeah. But, you know, my concern is, if you have a computer that's like a writing assistant or something, at what point does it get to the point where you have to put on the cover of the book your name and the name of the artificial neural network that helped you write the book?
01:02:37
Speaker
I think at the point where it feels right. I think humans have a sense of justice that's very accurate. Well, it's going to be pretty complicated to figure out where those lines are, but it will definitely be interesting. I'm Jonathan. And I'm Gabriel. And today on the show we had on... Jonathan Baca, also known as John. And also Jonathan Gabriel Baca, which is just trippy. That's true. That's my middle name.
01:03:07
Speaker
And we're going to talk to you a little bit about a new part of something that we're a part of. Vaguely. Vaguely because it's blank for non-blank. That's exactly right. Yeah. Yeah. So blank for non-blank. Now you can fill in the blank. Yeah. And what's that website?
01:03:28
Speaker
Actually, it's www.blankfornonblank.com, and it is a podcast collective that we are thrilled to be a member of. The idea behind blank for non-blank is that it brings subject matter experts in to talk about their subject matter for non-experts. There is a chemistry podcast. There are a couple of linguistics podcasts. There's your favorite math podcast, the Breaking Math podcast. There are even a few others that
01:03:55
Speaker
are as yet to be named, but will be part of it as well. There are cultural and history podcasts. So next time you're at a computer, be sure to check out www.blankfornonblankpodcast... oh, actually, let me check if it's blankfornonblank.com or blankfornonblankpodcast. Just .com. Okay, blankfornonblank.com. And we're still, of course, part of Santa Fe Trial Media. And we're going to be bringing you the same content as we always have.
01:04:25
Speaker
We alluded to this change last week and now it has come to fruition.
01:04:30
Speaker
Only now, Jonathan Gabriel Baca, did you want to plug anything? If you live in New Mexico, or if you are a student at UNM, just keep your eyes out for a bunch of new podcasts that we have in the works. Gabriel and Jonathan might be helping us with some of those, and we're really excited about it, so just watch that space. Those should be found at the UNM Daily Lobo at dailylobo.com.
01:05:13
Speaker
You freestyle?