
83. Intelligence in Nature v. Machine Learning-An Interview with Brit Cruise - Part 1 of 2

E83 · Breaking Math Podcast

In this episode (part 1 of 2), I interview Brit Cruise, creator of the YouTube channel 'Art of the Problem.' On his channel, he recently released the video "ChatGPT: 30 Year History | How AI learned to talk." We discuss examples of intelligence in nature and what is required in order for a brain to evolve at the most basic level. We use these concepts to discuss what artificial intelligence - such as Chat GPT - both is and is not.

Transcript

Impact of AI on Life

00:00:09
Speaker
What is intelligence? That is the question that a lot of us have been asking ourselves as AI shows up in more and more places. Some of us are using it now at our jobs. Some of us may fear that someday AI will make our jobs obsolete. Some of us only know AI on our phones and on our other devices. A lot of us have a lot of questions about what AI is and how it works. We wonder specifically, what exactly does AI know?
00:00:37
Speaker
How does AI know what it knows, and what does AI not know? These questions are critically important because they relate directly to our safety. They relate to the reliability of the products we're using that have AI as part of them. They relate directly to our ability to do what we do as humans, in our hobbies and in other pursuits.

Exploring Intelligence Across Species

00:00:59
Speaker
These are the questions that we're going to be talking about right here on this show.
00:01:02
Speaker
Real quick: for the last several months, I've been reading a book. It's called A Brief History of Intelligence by Max S. Bennett. Essentially, what this book does is talk about brains and brain-like structures throughout nature. There are chapters that talk about brains in insects or in fish or in reptiles or in mammals and humans, and it talks about how they know
00:01:28
Speaker
what they know, and essentially what hardware, you know, between their ears is required for that. It is fabulous. A quick example from the book, my favorite example: it talks about reptiles, and it says that reptiles, unlike humans, are not able to internally visualize their own body. So if an alligator is walking and it steps over an obstacle, it is unable
00:01:50
Speaker
to move its back legs to account for that obstacle. It just runs right into it. Its back legs, I don't want to say, are on autopilot, but it certainly doesn't have the ability to visualize. That's amazing. That fascinates me. So I suggest to any of our viewers or listeners: check out this book. You can follow along. We also have a discussion happening on Slack at Breaking Math Podcast or Breaking Math Pod, as well as on Discord, or even over email.
00:02:14
Speaker
…day.
00:02:44
Speaker
…you, sir.

Guest's Journey and YouTube Channel

00:02:46
Speaker
Hi Gabe, thrilled to be here. I am thrilled to have you here. So I'll just tell our audience again: you've been in this field for at least the last five years, but really, you've been doing your channel on computer science for the last 11 years, is that right? You have a lot to say on this topic. Can you real quick tell our audience a little bit about the history of your YouTube channel, as well as your work at Khan Academy?
00:03:13
Speaker
Sure. Actually, it's funny, because it connects to memories. When I think back, there's one strong memory, which is: I was, I think, maybe 20 or something, and while I was in university, or right after I finished, I went and did the classic working-on-a-farm summer. I just went way up north in Quebec, and
00:03:35
Speaker
again, just thinking back now as a parent so busy with kids, having those sort of weeks or months of total clarity, just literally digging holes and doing farm work, and then at night having no internet. Towards the end of that trip, I remember being like, all right, I've got to go back to school or the working world, I'm 20, what am I gonna do? And in one notebook I was like,

Teaching Computer Science: Methods and Mindsets

00:03:59
Speaker
Well, what are you good at? List some things. The two things I knew I was good at were also what I was excited about: video making in all forms, and explaining things. And again, that's an interesting skill. I don't know if it's learned or innate.
00:04:15
Speaker
And so I just wrote, like, I'll just do... I also liked some old TV shows like Connections, and I'm like, I bet if I did my own show, that would be good. And I wrote, literally, that I would do a show on cryptography, and then I would do information theory, computer science, physics, and one day I'd get to AI, and it was at the end of the notebook.
00:04:34
Speaker
And I know because, it's one of those things, I've kept one or two notebooks in my life and thrown them in a box in the attic, and that one's in there. I actually climbed up and looked over a decade later. I'm like, wow, that little insight on that farm was a thread that continues to this day.
00:04:49
Speaker
Wow, very cool, very cool. And then, I know there's a lot more to go from that... you worked as a computer science teacher, but I saw that you have a very unique approach. And I'm sorry, I don't mean to skip around, I'll go back to the farm story in a second, but... No, skip around. Oh yeah, sure, why not.
00:05:07
Speaker
You have a unique approach to computer science. I've heard that traditionally, a lot of folks don't like a degree program that focuses on "here are all of your tools that you may or may not use." And I know that on your channel specifically, you don't start with the tools. You start with looking at the problem. Can you tell us a little bit about that?

AI Learning: Connectionist vs. Symbolic

00:05:26
Speaker
Yeah, and I can put it in one line, which is: you've got to teach forwards, not backwards. And just as you're saying it, I actually got, like, chills at literally the torture of going through, let's call it, a computer science program circa whatever year, not this year, I don't know what it is this year, but for all of history it was
00:05:48
Speaker
a lot of pain and struggle that didn't need to be there. I think it's just kind of imposed on purpose to filter people out. That's an argument about universities. But in the context of computer science, teaching backwards is like trying to keep up with a bunch of modern things, a lot, which won't even matter in the future. And then sometimes in a course being like, oh, and 200 years ago, so-and-so said that.
00:06:13
Speaker
And I remember in school there were one or two moments where I'm like, what? There was something before we even had electricity that matters in computer science. I want to know more. And so the takeaway there, when I was thinking about how I would teach, is kind of simple. I would just teach forwards. Teaching forwards means you have to unlearn what you know and go back to a blank slate.
00:06:34
Speaker
Oh man, that's important. And if you can go to a blank slate... which any good teacher can do, just naturally: jump to a blank slate and rebuild an explanation in that moment. That is what you have to do. And so that's how I approached it. Wow. So here's the amazing thing as you're talking about this: it relates directly to the philosophy of machine learning itself. And I'm sure you're probably aware of this as well, where with machine learning... Big, big mess.
00:07:02
Speaker
Like, you don't tell machine learning how to solve the problem. You literally just give it the parameters and say, teach yourself how to solve the problem. And then, through trial and error and playing things out, it discovers it itself. Machine learning can teach itself how to play chess, or the game Go, or any other game, or a wide swath of other problems. And so, you know, the optimal learning for humans is similar to
00:07:29
Speaker
AI, where we figure out the tools and just have the freedom to explore. I love that. You said earlier that you had a natural... Oh, can I interject? Sorry, go ahead. Yeah, so you just gave me a thought, which is, on the philosophy of machine learning, you could look at it on the one hand as a very simple thing:
00:07:48
Speaker
Can humans take their hands off the controls? And the divide is, from the beginning, you have humans who want to have their hands on the controls, because we're smart, and we are going to have good ideas, and we feel good when we have good ideas. And that's what we call good old-fashioned AI: literally writing the code, step by step, for how to be smart.
00:08:10
Speaker
Um, which is, again, the classic example: it hits a wall with images, because they're too complex, so a human algorithm won't work. The other camp, going back to the beginning, said: we're going to model biology, and we mean it. We're not going to fake it, kind of modeling biology and then writing all this human code around the edges. And so that's connectionist theory, which is: we're going to build...
00:08:31
Speaker
And this is important, because it's going to connect to future questions. We're going to build a net, a mesh of connections, and we don't even care about those connections. They could be random, or we could have all of them, and we're going to learn what we need to do to perform, based on some reward.
00:08:46
Speaker
And that thread has always been there, and people are usually on one side or the other, very rarely both. And so, just like a political divide, it's been fun to watch the history of this: the people who need to have their hands on the controls versus those who don't, and that leads to the divide today. And so it's so simple, but I can't overstate how important that divide is. Oh my god, it's just... in the research today, you could put the papers in two different piles.

Philosophical Musings on AI Learning

00:09:15
Speaker
How about I do this? How about I ask an AI if, in the style of Jordan Peterson, it can summarize the divide in philosophy between chaos and order, and getting order for your life by using chaos appropriately or something like that, you know? As long as it doesn't have a Canadian accent, I'll take it seriously.
00:09:32
Speaker
Oh, for sure, for sure. Yeah, I love this. Okay, so many things I was going to say before. So, you mentioned a penchant for education. I'm actually a former educator. It has been said that I'm very, very good at explaining things. But I had a miserable time in education. I think I had, like, anxiety and panic attacks, and it crippled my classroom management. I had a rough go at it, but I salute teachers who are good at it. They are an inspiration.
00:09:54
Speaker
But still, I've been told at least I have an ability to explain things with stories and analogies, which is also a part of machine learning and how knowledge is stored distributively. We'll get to that here in a bit. So, it's interesting. I want to mention my late co-host, Sophia Baca. She would love this conversation. She would love talking to you, and I miss her very much.
00:10:16
Speaker
We've done an episode already about how Sophia passed away this last year, and part of why I want this show to go on is in honor of her and the way she liked to do things. She was very, very creative, but she was also my math tutor, so she was able to somehow code switch
00:10:33
Speaker
between creative ideas and analytical ideas. And how often do you meet people who are good at doing both? Sophia was definitely one of those people, and she left her imprint on this show for

Tribute to Sophia Baca

00:10:44
Speaker
sure. Amazing individual, so yeah. Happy to be part of that effort.
00:10:48
Speaker
Ah, thank you, thank you. Also, I admire... if you don't mind, I'll say, you and I are not too dissimilar. We both started off having a show that came out of our enjoyment of math, science, and creativity. In your case, your show is Art of the Problem on YouTube, and our show is the Breaking Math Podcast. We are totally riding the coattails of Breaking Math... or sorry, Breaking Bad. Somebody once said, what if you called your show Crystal Math? And I said, no, I don't think we're gonna go that route.
00:11:17
Speaker
So yeah, I love the idea of creative storytellers who are sharing science knowledge and analytical knowledge. So it's kind of a cool way of reflecting on the history of knowledge. Now, in this show, to give our audience a quick preview, we are going to talk about some of Brit's previous videos that follow the same format as this book, where they break down knowledge in nature, in brains in nature, and then they get to the history. A happy accident. Yeah.
00:11:46
Speaker
Very cool, very cool, yeah. So, I watched every one of your videos these last three weeks, and I tried to summarize them. Essentially, whenever I watch a good sermon or a good talk, they usually have, like, a big five takeaways or a big three takeaways. I had a big three takeaways
00:12:06
Speaker
from your videos, but then I made it a big five takeaways: three takeaways about layers of learning, and then two additional takeaways about what neural networks are and how they learn. Let me know how I

Neural Networks: Capabilities and Challenges

00:12:19
Speaker
do on the big takeaways and let me know if you can elaborate on this.
00:12:22
Speaker
What I wrote is: from watching your videos, the three takeaways about learning are that in nature, you have examples of trial and error, which involves randomly trying something and then reinforcing it. That is used all throughout nature, including when toddlers learn, or when bacteria spread, or just about anywhere we look.
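To make that trial-and-error layer concrete, here is a minimal sketch in Python (my own illustration; the action set and reward rule are invented, not from the episode): an agent samples actions at random and reinforces whichever one pays off, so the successful action gradually dominates.

```python
import random

# Sketch of trial-and-error learning: try actions at random, and
# reinforce (weight more heavily) the ones that yield a reward.
# The action set and reward rule are invented for illustration.

actions = ["left", "right", "forward"]
weights = {a: 1.0 for a in actions}   # all actions equally likely at first

def reward(action):
    # Hypothetical environment: only going "forward" finds food.
    return 1.0 if action == "forward" else 0.0

random.seed(0)
for trial in range(200):
    # Sample an action in proportion to its learned weight.
    r = random.uniform(0, sum(weights.values()))
    cumulative, choice = 0.0, actions[-1]
    for a in actions:
        cumulative += weights[a]
        if r <= cumulative:
            choice = a
            break
    # Reinforce: successful actions get stronger and are tried more often.
    weights[choice] += reward(choice)

print(weights)  # "forward" ends up dominating the other actions
```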
00:12:44
Speaker
There's another layer that's a little more complex, and that layer is what we know as classical conditioning, for those who have studied psychology. That's when, was it Pavlov, I believe, who trained his dog to salivate whenever he rang a bell, because his dog began to associate the bell with food, you know, from
00:13:06
Speaker
he would always feed his dog whenever he rang the bell and eventually just the bell itself would cause salivating in his dog. So that's associating a sense and an experience. And then the third form of learning is the most impressive. I think it's abstract imagining and according to this book, it's how humans and mammals have the ability to imagine scenarios
00:13:33
Speaker
where you can play out a scenario in your head. You can imagine what happens if I walk over that pothole and I fall, but you don't actually do it. It's a step beyond associative learning and I've heard it called internal modeling and simulation. Would you say that's a pretty good summary on the layers of learning?
00:13:51
Speaker
Yeah, that's really good. And so I'll just compress your summary now, which is that in the three layers, the first one is genetic learning and the reward is life or death. And the way it manifests is our genes spill out
00:14:09
Speaker
into pre-wired connections that stay fixed our whole life. And that's why insects, and the alligator you were talking about earlier, are on autopilot in that context. But more specifically, when I start the video, it's that insect brain: fixed connections. That second layer
00:14:30
Speaker
is when we spill out into connections that can change. Our DNA spills out into the ability for connections to change during life, so: changeable connections. That's the key. And then, what's neat about the third layer is that it has nothing to do with connections changing. It's thought patterns in our brain, kind of at a higher level of abstraction, making connections.
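As a toy illustration of that second layer, changeable connections (again my own sketch, with arbitrary numbers), a Pavlov-style association can be modeled as a single connection weight that strengthens each time the bell is paired with food:

```python
# Toy model of classical conditioning: the bell -> salivate connection is
# a changeable weight, nudged toward the observed outcome on each pairing
# (a Rescorla-Wagner-style update). All numbers are arbitrary.

learning_rate = 0.2
bell_to_salivate = 0.0   # learned connection, starts at zero
food_to_salivate = 1.0   # innate, pre-wired connection

def salivation(bell, food):
    return bell * bell_to_salivate + food * food_to_salivate

for pairing in range(20):
    # Prediction error: how far the bell alone is from predicting the food.
    error = food_to_salivate - bell_to_salivate
    bell_to_salivate += learning_rate * error

# After repeated pairings, the bell alone triggers (nearly) full salivation.
print(round(salivation(bell=1, food=0), 3))
```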
00:14:50
Speaker
Right, nice, nice. And right now, I'd say, if we're talking about neural networks: I've been trying to find the list of activities that neural networks can't yet do quite right. This is where there's some ambiguity with our current technology and what neural nets can do. Can you name some activities that right now a neural network, whether it's ChatGPT or anything else, cannot do?
00:15:16
Speaker
All right, sure. Yeah, two quick points. Point one is: be very careful when you hear someone say what neural networks can't do. Yes. 99.94% of the time, they're wrong. And I'll give an example where I even found myself wrong, because there's a human instinct to draw a line way out and do the meta thing on the machines and be like, you'll never catch me up here. And the one practical one I had is, and this is true:
00:15:44
Speaker
anytime it has a strongly learned pattern, something it knows very well, and in the human analogy it would be some over-habituated behavior, like looking at your phone, it's very hard to do the opposite. This also connects to the no free lunch theorem: you learn something, there's a cost. And so I was using, even in some talks... my favorite was, you can't play tic-tac-toe backwards.
00:16:12
Speaker
And I tried all the models, and they all were failing, and I was like, I get to be up on my high horse, my human horse, and make fun of it, and everyone laughs in the crowd. And I remember in one talk, what was this, four months ago? It feels like 10 years. I'm like, once it does this and other similar examples, then I'll be kind of scared. And guess what? Now it can easily do that, once we got to the GPT-4 model.
00:16:39
Speaker
And so, A, be careful. B, where I'm still kind of sure it actually fails is self-awareness. And so the practical thing there is runaway errors: not being aware of errors as it is making them, which leads to this explosion of errors. By the way, I want to talk about hallucinations, because people got that wrong.
00:17:03
Speaker
But even as I say that, that it's not self-aware and it's going to have runaway errors, I already know of research where they're trying to, again, just add another layer of a neural network, in this case a large language model looking at itself, and you can get out of that error. So what I'm trying to impart is that it's not a list of what it can do and not do. It's very blurry; everyone's walking in the dark with their hands out right now.
00:17:30
Speaker
Oh, that's so philosophical. That's so existential. And that's the world that we're in right now, isn't it? I love it. I love it. Oh, man. You've got to stay that high. Yeah. In fact, that's literally what we're talking about at work right now: when ChatGPT hallucinates. I've got a guy at work who says, well, let's just create an app that does a quick fact check on it, that splices out the factual information, which is a patch. But even humans continually make mistakes. Humans, the way our brains work, we always have our own runaway
00:17:58
Speaker
errors. So I think it will be an ongoing process, and every patch is going to be somewhere between 0 and 100% effective, maybe 90%. With Gödel's incompleteness theorem, I think it's always an evolving game. Fascinating topic. Just back to the divide of hands on the controls, or no hands on the controls:
00:18:21
Speaker
are those people interesting to you? Yeah, yeah, absolutely. And the patches, where would you put them in? Yeah, so, okay. So for those who are following, real quick: you said there are three modes of learning, or three layers. There's trial and error, randomly trying something; there's classical conditioning; and then there's imagination and simulation. I want to talk about
00:18:45
Speaker
points four and five real quick. These are points about the philosophy of machine learning in a neural network.

Understanding Brain Function for AI Design

00:18:51
Speaker
The first thing that we're going to talk about later in this episode is that in a neural network, a concept, or concepts plural, like dogs and cats, are stored distributively, like a constellation of stars,
00:19:04
Speaker
spread throughout the layers of a neural net. And multiple concepts, like a dog and a cat, are going to share a bunch of the same things: they have a lot of similar architecture and similar attributes, they've both got two eyes, you know, and a mouth, but they're different as well. So in a neural network, like our own brain or machine learning, it's stored distributively, and they're also connected. Is there a better way to word that that you can think of, Brit?
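One way to picture that constellation idea in code (a sketch of my own; the feature values are invented purely for illustration) is as concepts stored as overlapping patterns of activation, where related concepts share most of their units:

```python
# Sketch of distributed representation: each concept is a pattern of
# activation across many units, and related concepts share most of it.
# Feature values are invented for illustration.

dog = [0.9, 0.8, 0.9, 0.7, 0.1]   # eyes, mouth, fur, four legs, barks
cat = [0.9, 0.8, 0.9, 0.7, 0.0]   # shares most features; no barking
car = [0.1, 0.0, 0.0, 0.9, 0.0]   # little overlap with either animal

def similarity(a, b):
    # Cosine similarity: how much two "constellations" of units overlap.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

print(round(similarity(dog, cat), 2))  # high: shared architecture and attributes
print(round(similarity(dog, car), 2))  # much lower: few shared features
```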
00:19:30
Speaker
You're doing a great job, by the way. Thank you. I like to go back to feelings. Let's teach forwards. We have feelings, and so let's just use... we can use any example, but even if I use one of a tree: when we both think about a tree, we simulate a tree, and that means a repeatable, unique set of neurons activated in our brain. And there's technology today that can actually know what you're thinking just by looking at your neurons.
00:20:00
Speaker
But why that's important is: a unique group of neurons, what is that from our perspective? That's a feeling. When we are feeling different things... and this is the thing, there are two layers to feeling. I think about concepts as a mental feeling, so, like, the feeling of apple-ness versus a sponge. Those are unique neuron sets, and we just feel them as different feelings. And so neural networks are storing the things we perceive
00:20:29
Speaker
as just unique. I like how you said a constellation of stars. That's a good one. Yeah, thank you. I used that on your YouTube. In fact, quick story for our audience: I probably put 20-plus comments on Brit's YouTube asking him a million questions about AI and consciousness. And you thankfully responded, which brings us to our conversation today. So yeah, go to the YouTube
00:20:51
Speaker
channel, Art of the Problem, and Brit is very responsive, and so are the other folks. So, yay, thank you so much. I appreciate that. Now, that concept of distributed concepts, you know, in a constellation, that is a recurrent theme as we talk about what machine learning knows, because
00:21:09
Speaker
There's a push to understand all those connections. It's very random at first, but as we understand it, we can say, okay, so at this layer, we're putting together textures. At this layer, we're assembling the textures. That helps us to know what machine learning knows, and it's helpful. Even that itself is knowledge, which is helpful for, you know,
00:21:31
Speaker
other things. There are examples of AI... one of my favorite sections here, after our background stuff, is examples of AI. There's an AI that identifies bread in a Japanese bakery that has now been repurposed... I shouldn't say repurposed; a similar architecture has been used to identify cancerous or precancerous images on, I believe, MRIs. I'll have to check that. But it's important to know how it does it,
00:21:58
Speaker
because if we're able to peel back those layers, then we say, oh, okay, here it's identifying pixels that are in this one pattern. Then we can say, oh, is that something that humans didn't know? And we can now have that knowledge and use that same emergent pixel pattern for other cancers, or maybe for skin textures that aren't as successfully read by this AI. What I'm trying to say is, we need to know how AI knows what it knows, or rather,
00:22:28
Speaker
If we do that, it'll help us to better design our AIs. Wouldn't you say that's a good goal?
00:22:33
Speaker
Right? Yeah, interpretability, it's called. And if you look at where it is now, there's a really close boundary in terms of what people can understand. Like, when I dug into this, the first two layers, but you always hit this point, I call it the magic zone, where our explainability just goes to zero. It's interesting. And so, the more we know, the more we know.
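As a small hedged sketch of what that inspection looks like in practice (assuming PyTorch is available; the network below is untrained and purely illustrative), the first layer's weights can literally be pulled out and eyeballed as tiny filters, while the deeper layers are where the "magic zone" begins:

```python
# Sketch of interpretability at the first layers: with a small
# convolutional network, layer-1 weights can be printed and inspected
# as 3x3 patches (often readable as edge/texture detectors once trained).
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),    # layer 1: inspectable filters
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3),   # deeper layers: the "magic zone"
    nn.ReLU(),
)

first_layer = model[0].weight.detach()  # shape: (8 filters, 1 channel, 3, 3)
print(first_layer.shape)
print(first_layer[0, 0])                # one 3x3 filter, readable by eye
```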
00:22:58
Speaker
Yeah. Yeah. It's a little scary in one sense that there's knowledge that exists that we don't grasp yet. Yet still that knowledge is key to improving our own understanding and even like an overall theory of knowledge. Okay. So in this outline, I want to real quick shift to going over your first video. You have some fabulous diagrams here.

Foundations of Machine Learning

00:23:15
Speaker
where you have some diagrams of what's happening in basic brains. Then we have examples. Some of my favorite examples from your first video: there are things like bacteria that can only sense smell, and can only either go in a random direction or a fixed direction. We talk about Venus flytraps, as well as... oh, I didn't get the name of the leaf. There's a leaf that curls up, but it can learn to not curl up.
00:23:40
Speaker
Can we pull up the diagram? Allegra, by the way, my producer today is Allegra. She's in the back room. She's pulling up all the images. How are you doing, Allegra? Yeah. Allegra, you're awesome. Can you pull up the diagram, the sense action diagram? I think it's the second image in the folder.
00:24:01
Speaker
Okay. Well, there's a... oh, oh, not that one. Oh, that's a cool one, though. That's a cool one. Keep it up, though. Okay. Oh, that's a Chegg one. Yeah, we have a whole portfolio of them, which is fine. These are some great... that is... is that one of the AIs that is learning about itself, I believe?
00:24:20
Speaker
Sorry, not learning about itself, sorry. It's drawing a picture of itself. Oh, there it is. There it is. Okay. Simple, simple diagram. Very simple diagram. There's a blue circle with goals, then it has an action and a sense. Brit, would you mind explaining the simplicity of this diagram?
00:24:34
Speaker
Sure. So in the middle there, and I know we might be audio-only, is just a circle representing, let's just call it an entity, which has some goal. And the goal, again, just to be clear, could be survival. It could be something else. It could be a sub-goal of that. And it can sense things, perceptions, an arrow going in, and it can act, sometimes by changing its body in some way.
00:25:01
Speaker
And so what I've done here is actually looped the line, so the action loops around and becomes part of its sense. And that's a simple but important idea. I'm actually looking at it now, it's so simple, I lose sight of why I drew it.
00:25:23
Speaker
It's all good. All right, Allegra, thank you so much. You can go back to the main cameras now. All right, awesome. So what I love about that is the simplicity, because the question in your videos is: what is the simplest brain out there? And we were talking, and I forget if we were talking on Twitter or X, whatever it is, but we were talking about how when sense
00:25:40
Speaker
folds in on itself into action, and something is able to smell something and then choose an action from that, that is essentially the most basic type of brain. It's something that responds to the environment. One example I've brought up is a thermostat. You could look at a thermostat as a brain: a coil that, in response to hot or cold, expands or contracts. So that's one example. But I like your examples better.
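Here is a minimal sketch of that sense-act loop in code (my own framing of the diagram, using the thermostat example; all numbers invented): the entity senses, acts on a fixed rule, and its action loops back to change what it senses next.

```python
# Sketch of the sense -> act -> sense loop: the entity's action changes
# the world, which changes what it senses on the next step. A single
# temperature number stands in for the environment, thermostat-style.

temperature = 15.0   # state of the world
GOAL = 20.0          # the entity's goal: stay warm enough

def sense():
    return temperature            # perception: the arrow going in

def act(perception):
    # Fixed action rule: heat when cold, idle otherwise.
    return "heat" if perception < GOAL else "idle"

for step in range(8):
    action = act(sense())
    if action == "heat":
        temperature += 1.0        # the action loops back into the sense
    print(step, action, temperature)
```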
00:26:11
Speaker
We said earlier, the first one was: imagine a bacterium, a single-celled organism that only senses smell. And this baffles me, because at a philosophical level, what does it mean to smell something? It's a very hard sensation to pin down. But from an evolutionary standpoint, is that the first sense that ever evolved, to your knowledge?
00:26:32
Speaker
Uh, yes... well, specifically smell. I just want to say that all senses are the same: they're just energy levels that hit a sensory neuron. And I want to say it is, but I'm
00:26:47
Speaker
not 100% sure that's right. Cool. Yeah, I think that's a great sense: you pick up information about your environment and you make a choice based on it. And in a bacterium, which is one cell, it can just smell, and then it can either go randomly or in one direction. And from that, it can find a food source.
00:27:04
Speaker
Yeah, and what's cool about smell is it's kind of like predicting something that's about to happen, which is quite advanced. Yeah. And so the only other one is this more primitive sense of just physical contact, which, now that I think of it, likely came first, right? And we have examples: you know, you touch coral, you touch a flytrap, it knows when something hits it. Smell is sensing something before it happens, which is neat.
00:27:31
Speaker
Yeah, yeah. Big question that we have that we're going to explore further in this podcast is how do our senses work? A long-term project is if you were to make an AI that had many neural nets that integrated a bunch of different senses, how could we approximate mammal behavior in some sense to some degree? That's why I ask this question now.
00:27:53
Speaker
Now, real quick, I want to mention something very important here. This, without any more complexity, this very, very simple brain, this one diagram, one loop: it's a fixed action. You use the term fixed action in your video. That means it can't change. Something like, and I think the example used is a Venus flytrap, or any kind of a trap.
00:28:12
Speaker
You know, if you stimulate the sense on a Venus flytrap, it closes. And that's just about it. It'll always close, and it's always the exact same thing. There's a slight evolution here, and I think the example you gave is, let's say you've got a mutation, and you have some more
00:28:31
Speaker
connections pop up inside that brain, and you've got some more wiring. And there's a diagram as well. Allegra, if you could pull up the second diagram... or, I'm sorry, I think it's labeled the third diagram. It looks just like this last one, but it's got more red lines.
00:28:47
Speaker
Oh, look at this one, this one. Looking at this diagram, I'm thinking about a very, very early neural network. You may have a different diagram in your videos, but I was thinking of those red lines on the inside. Basically, if you have a neural network, there's a bunch of possible pathways, and not every sense will have the same results, and it can change over time. Thank you so much, Allegra. I appreciate that.
00:29:14
Speaker
And do you remember the example that you used, of a plant where if you touch it, it rolls up, but eventually, if you're not a threat, it'll learn to not roll up at certain sensations? Do you remember the plant, what it was? Yeah. And so it's good to repeat: what's machine learning really doing? It's learning how to act.
00:29:35
Speaker
How to act is: given an input, what's the output? I just like to repeat that, because things can get confusing really quickly for anyone new to this. How to act: in, out. And so I was looking for examples where the connection between input and output
00:29:51
Speaker
can change during life, which is a huge advantage. And the way it first changes is not through new connections growing or anything. It's actually just turning down a connection, inhibiting a connection gradually. And this is what's so cool about where habituation comes from: it's just not doing something as much. That's kind of like the baby step in in-life learning. And this is a leaf that,
00:30:18
Speaker
you know, after a while realizes that getting hit by water is not a bad thing. Yeah, it's funny, it's all in how you explain it, right? You know, I've had a whole lot of philosophical conversations with people about, you know, are plants conscious? No offense to hippies or anything, but, like, are plants conscious? With my background, I would think, no, no, no, a plant is not conscious. There's no brain. Yet, if we just break down the single task of learning through this
00:30:47
Speaker
slightly more complex internal wiring: yeah, it doesn't curl up as much when it doesn't die or isn't threatened. And it's not that it understands what you are specifically; it's just able to build an understanding by virtue of still existing and being healthy. The sensation to curl up weakens over time. That's all it is.
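A toy sketch of that mechanism (my own numbers, purely illustrative): habituation is just one connection weight being turned down each time the stimulus proves harmless, with no new wiring grown at all.

```python
# Toy model of habituation: the touch -> curl connection is inhibited a
# little each time the touch turns out to be harmless. Nothing is rewired;
# an existing connection is just gradually turned down.

touch_to_curl = 1.0   # starts strong: any touch triggers a full curl
inhibition = 0.8      # fraction of strength kept after a harmless touch

for raindrop in range(10):
    curl_response = touch_to_curl   # how strongly the leaf curls this time
    touch_to_curl *= inhibition     # harmless again: weaken, don't rewire
    print(raindrop, round(curl_response, 3))
```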
00:31:12
Speaker
Yet we can still say, if you choose to word it this way, that it gets used to you touching it, or it gets used to raindrops and then no longer curls up. So it's fascinating. It's really fascinating. So wow, wow. In the next section, I talk more about conditioned learning, but I think we touched on that pretty well, actually, where we talked about dogs and bells ringing and salivating. You do talk in the video about what makes you... Again, sorry, just pause.
00:31:38
Speaker
Oh, sorry, just to interject there, because this is an example where I get really confused, with, like, okay, you've got the one Pavlov experiment everyone knows about, but there are so many experiments, and you can think that you need to know them all. And if you're thinking from the top down, the human brain is so complicated, you get lost very quickly. And so I'm glad you moved on, because really what you have to know is: can a connection change in life?
00:32:03
Speaker
I don't care about the context. We'll get all confused thinking about context. It's just: can a connection change?
00:32:10
Speaker
Yeah, and that's basically it. And then other examples in this video... like, you move on to abstract thought in human brains and all that. And yeah, it all comes down to these basics. And just through mutations in hardware, we get human brains as we know them today. All right, so what I'd actually like to do is move on to your video series on machine learning.
00:32:39
Speaker
Now, this is one where, in this outline, I didn't have a whole lot of videos. There were so many videos to choose from, I didn't get a sampling here, and part of me wishes that I did. But I'd like to talk a little bit about the early research into machine learning and
00:32:58
Speaker
some of the clumsy early models of a neural network, ones where you had to manually change each dial. I was hoping that you could give us just a quick little preview of early research and the clumsy early models.
00:33:13
Speaker
Sure, I'm glad you brought up a dial. So the original dream was: let's make a mesh of wires, with just neurons and wires connected. And a neuron, in electrical terms, is like a transistor: if it gets enough energy, it turns on.
00:33:30
Speaker
So you need something to be your neuron, and that thing can basically be a transistor. And the only other thing you need is connections, but we need to be able to change connections. And so this is why I use, and I made this up, I don't think they actually used this, but I use the dimmer switch to hint at this idea: a dimmer switch as a variable resistor, which allows you to change the strength of that connection electrically.
00:34:00
Speaker
That's all you need. Then you have to give it, again, machine learning, all of learning: input, output, how to act. And so the first experiment was so great. Rosenblatt used, like, 50 neurons all connected together, thousands of wires, but he would draw, on a Lite-Brite-style screen, very low resolution, a circle or a square, and then have the machine learn circle versus square.
00:34:26
Speaker
And how do you learn? Well, you've got to give it some experience. So he'd draw a circle and put it into an initially random mesh of connections. And that I need to amplify super clearly, because there's not much to machine learning once you've got the core right: a random mesh of wires. Well, it doesn't work at all at first. What does "not work" mean? Well, the output is forced to be one of two things,
00:34:48
Speaker
and we can call them circle and square, and in this case, that's what he wanted to do. But it does nothing at first. So you put a circle into this machine, and what happens with the two light bulbs at the end? The machine doesn't know anything; they're both kind of lit up, just kind of randomly.
00:35:04
Speaker
And so it has to learn. What's learning? Well, in this case, the human kind of cheats a bit, where we show it a square, it does some random thing that doesn't work, and then, one by one, we go through and wiggle each dimmer switch. And sometimes going one way will help. Meaning: I, a human, am going to call the top light bulb "square" and the bottom light bulb "circle."
00:35:28
Speaker
Anytime I put a square in and I do a wiggle, if the right light bulb gets brighter, I'm going to keep that wiggle. If it doesn't help, I'm going to go the other way. And you literally just do that through all the neurons. Keep doing that, and you hit a point where you put circles and squares into this network and it doesn't need you to do any more wiggling. That's the point at which it has generalized, which means it'll work on what it was trained on.
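Here is a minimal sketch of that dial-wiggling procedure (my own reconstruction in Python, not Rosenblatt's actual hardware; the tiny "images" and step size are invented): perturb one weight at a time and keep the wiggle only when the score improves.

```python
import random

# Sketch of the "wiggle each dimmer switch" loop described above: a single
# linear neuron learns to light up more for a "square" than a "circle".
# The 2x2 patterns and step size are invented for illustration.

random.seed(1)
circle = [0, 1, 1, 0]   # stand-in low-resolution "circle"
square = [1, 1, 1, 1]   # stand-in low-resolution "square"
weights = [random.uniform(-1, 1) for _ in range(4)]  # random mesh at first

def output(image):
    # Brightness of the "square" light bulb for a given input image.
    return sum(w * x for w, x in zip(weights, image))

def score():
    # Higher is better: bright on squares, dim on circles.
    return output(square) - output(circle)

for sweep in range(50):
    for i in range(len(weights)):   # one dimmer switch at a time
        before = score()
        weights[i] += 0.1           # try a wiggle one way
        if score() < before:        # didn't help?
            weights[i] -= 0.2       # go the other way instead
            if score() < before:
                weights[i] += 0.1   # neither helped: undo the wiggle

print(output(square) > output(circle))  # True once it has learned
```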
00:35:54
Speaker
But most importantly, it will work on new circles and squares you draw: different people, different handwriting, different pixels, basically.

This was the very first half of my interview with Brit Cruise. This first half focused on what intelligence is and what examples of intelligence look like in the natural world, different kinds of brains and that sort of thing. The next half of the interview, which will be airing next week at the same time and place, is all about machine learning specifically, and the architectures
00:36:22
Speaker
in artificial intelligence that allow it to be so successful, things like attention networks and transformers and things like that. So next week is all about the artificial intelligence side. Stay tuned. We'll have a great interview next week.