Ant Behavior and Human Cognition Analogy
00:00:00
Speaker
What if I told you that the same simple rules that let ants find the shortest path around a rock might also explain your memories, your thoughts, and even the rise of artificial intelligence?
Introductions and Expert Backgrounds
00:00:11
Speaker
I'm Autumn Feneff, and this is Breaking Math. Today, we're diving into the question: is AI really that different from us? To help us untangle it, I'm here with two brilliant minds: Jay McClelland, a professor of psychology at Stanford, whose pioneering work on neural networks shaped the way today's AI models are built, and Gaurav Suri, his former student turned polymath.
00:00:35
Speaker
He's been a consultant, an entrepreneur, a novelist, and now runs his own research lab at San Francisco State University.
Exploration of 'The Emergent Mind'
00:00:44
Speaker
Together, they've written The Emergent Mind, a book that asks how intelligence arises, whether in people or in machines. And trust me, the answers might just change the way you see both your brain and the algorithms shaping our world.
AI vs Human Brain: Similarities and Differences
00:01:01
Speaker
The question that I have to ask today is: is AI that different from the human mind? From where you're standing right now, what's the single most important similarity and the single most important difference between human brains and AI? Okay, similarity and difference. So the similarity, in my book, is that it's useful to understand both the emergence of the human mind and the emergence of intelligence in artificial systems using the neural network framework.
00:01:30
Speaker
What is the neural network framework? That's a good question. It's a framework inspired by the brain, in which neurons that by themselves can't do mathematics or write poetry interact with each other
00:01:43
Speaker
as a system from which thought and math and poetry emerge. So it's literally a network of processing units that are themselves simple, but that in their aggregate are capable of producing thought and intelligence.
00:02:01
Speaker
That is a neural network. It's inspired by the brain, and in the brain, the computing components are neurons. In artificial systems, we call them units, which are inspired by neurons.
00:02:14
Speaker
And they activate just like neurons do. And they influence each other via weighted connections; weighted, meaning the influence can be strong or weak, positive or negative.
00:02:26
Speaker
So there's a network of things influencing each other, receiving input from the outside world, getting activated, activating each other. And this activation cascades through the network and produces some kind of output, maybe an action or a decision.
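To make that concrete, here's a minimal sketch of such a network. This is our illustration, not any specific model from the book; the weights and inputs are invented values:

```python
import numpy as np

def activation(x):
    # A squashing function: each unit's output is a graded value
    # between 0 and 1, loosely like a neuron's firing rate.
    return 1.0 / (1.0 + np.exp(-x))

# Input from the outside world: a pattern of activity over three units.
stimulus = np.array([0.9, 0.1, 0.4])

# Weighted connections: positive weights excite, negative weights
# inhibit, and the magnitude sets how strong the influence is.
W_hidden = np.array([[ 0.8, -0.3,  0.5],
                     [-0.6,  0.9,  0.2]])
W_output = np.array([[ 1.2, -0.7]])

# Activation cascades through the network, layer by layer,
# producing some kind of output, here a single decision signal.
hidden = activation(W_hidden @ stimulus)
output = activation(W_output @ hidden)
print(output)  # e.g. read > 0.5 as "act", < 0.5 as "don't act"
```

No single unit here can do anything interesting on its own; the behavior lives in the pattern of weights, which is exactly the point of the framework.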
AI's Lack of Inherent Goals
00:02:40
Speaker
So it turns out that the neural networks that Jay and others developed to understand the human mind are very useful in understanding and constructing artificial systems.
00:02:51
Speaker
So that's my similarity vote. And my difference vote is that by themselves, artificial systems are not goal-directed. So what is a goal, and where do our goals come from?
00:03:05
Speaker
And it turns out that there are many sources of our goals. Our earliest goals have to do with physiological needs like thirst and hunger, which push us back toward some sort of homeostatic state.
00:03:16
Speaker
But then there are other needs, not inherent bodily ones, that are reinforced by our environment and our culture: the need to belong to a group, for example, the need to act with autonomy, the need to be effective. These are not physiological needs, but they do emerge, and they drive our behavior, often over decades.
00:03:38
Speaker
And this machinery, which is integrally tied to molecules in our bodies and our brains, is where these representations we call goals come from.
00:03:49
Speaker
And there is no precise equivalent of this in AI systems. For me, that's the
Information Processing in AI and Humans
00:03:55
Speaker
biggest difference. The crucial unifying factor, from the point of view of the actual models we build, whether we're modeling the human mind or building an AI system for the sake of having a chat with somebody or recognizing photographs or whatever it might be, is that these systems use patterns of activity, represented by vectors of numbers, and patterns of connections between populations of neurons, represented by matrices of numbers, as the core elements of the neural network systems that
00:04:34
Speaker
we use to model our brains and build our AI systems. And in our actual brains, we have these neurons and we have these connections, so it's also thought of as a model of the actual physical structure, whereby we have about 100 billion neurons in our brain.
00:04:55
Speaker
The pattern over all of those neurons is a huge, long vector, and as for the connectivity, they're not all connected to each other, but they're densely connected, so that they speak to each other and communicate with each other as a collective. And understanding that our minds and our AI systems are both those kinds of systems is, I think, a fundamental part of knowing what the discourse about today's AI systems is really about.
00:05:22
Speaker
When I think about differences, Gaurav and I agree that goals are very important. Another difference, which is very, very striking, is that we as human beings learn far more efficiently than our artificial systems do today.
00:05:39
Speaker
For example, the big language models in use today are trained with 100,000 times more training data than a human being would ever experience in their lifetime.
00:05:53
Speaker
So how does our human brain achieve so much greater efficiency in its ability to learn? That is one of the key unanswered questions in neuroscience right now, I think.
00:06:05
Speaker
Neuroscientists do not understand how to build a model that would have the capabilities of these large language models, and they certainly don't know how to do it with that much greater
Learning Systems Transition in AI
00:06:17
Speaker
efficiency. So there's a huge mystery that remains to be solved about how our human minds get to be such capable learners. So, those are the similarities and differences. What I realized going through this book is that you both played a role in shifting AI away from rigid rules toward learning systems.
00:06:40
Speaker
Would you like to go into that history a little bit with me? Why did rule-based AI hit a wall, and how did your PDP connectionist work help break through?
00:06:55
Speaker
Well, I have a very strong point of view on this particular subject. The key point that I like to emphasize in this context is that our natural cognitive abilities always exist in a world where the rules that people try to write don't always work perfectly.
00:07:17
Speaker
So, for example, there's a very simple rule for describing the past tense of words in English: you add -ed to the present tense of a word to make its past tense. However, that isn't always true. Sometimes, instead of singed, you say sang. But often enough, you do something that's pretty similar to the standard thing, but you just tweak it a little bit. So the past tense of love is loved.
00:07:43
Speaker
You just add a d there. The past tense of have isn't haved; it's had. You kind of compressed it a little bit, made it a little easier to say. So there's a second thing happening there: a word that you say very frequently, you want to make more efficient, so you compress it a little bit.
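As a rough sketch of why a rule-based approach struggles here (our illustration; the exception table is obviously just a sample):

```python
# The "add -ed" rule covers most English verbs, but the exceptions
# have to be memorized item by item; no single rule captures them.
EXCEPTIONS = {"sing": "sang", "have": "had", "go": "went", "take": "took"}

def past_tense(verb: str) -> str:
    if verb in EXCEPTIONS:
        return EXCEPTIONS[verb]      # memorized, not derived
    if verb.endswith("e"):
        return verb + "d"            # love -> loved: just add d
    return verb + "ed"               # the regular rule

for v in ["walk", "love", "sing", "have"]:
    print(v, "->", past_tense(v))
```

A rule system needs an ever-growing exception table, while a neural network absorbs both the regularity and the exceptions into one set of graded connection weights.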
00:08:04
Speaker
That means the rules don't fit perfectly. And when we speak to each other in natural language, we use phrases that have idiosyncratic, idiomatic meanings that nobody could capture with rules. It's very hard for people learning rules from a second-language textbook to pick up on what those things mean. But neural networks, by virtue of using graded patterns of activation over populations of neurons, and connection weights that gradually learn in the context of the whole distribution of possible words and ways people say things, can work their way around all those little nuances while also being sensitive to the regularities in the system. And so they end up having the advantage of
00:08:48
Speaker
being able to mold themselves fluently around the idiosyncrasies of things while still somehow capturing a pattern of regularity, so that they successfully generalize to novel instances. And this was a key debate in the 1980s: could you actually build a system that would generalize properly without building in the rules per se? It's still a very central issue, and it's still part of the limits that neural networks have. Maybe some aspects of their inefficiency in learning relate to this issue.
00:09:24
Speaker
But by virtue of having all that data, they are able to pick up on the nuances and exploit them in ways that systems built around rules haven't been able to do, and probably never will, I
AI Decision-Making and Human Processes
00:09:38
Speaker
would say. So, Jay talked about this from the perspective of language, this idea where people keep trying to systematize the rules inherent in language.
00:09:50
Speaker
There's a parallel thing that goes on when we think about our decisions and why we do the things we do. If we try to make a rule about our decisions, probably a simple rule is that we do things that bring us value, that maximize our value. Why do we take chocolate over vanilla? Chocolate gives us more value.
00:10:11
Speaker
Why do we drive rather than walk? Because the benefit of the exercise was less than the convenience. So there's this trade-off between costs and benefits, and that certainly seems like a rule. And to this day, when people want to design systems that act in the world, they often base their systems on this rule of value maximization. It is a value calculus.
00:10:34
Speaker
And the neural network approach is sensitive to things that yield outcomes that are beneficial to the organism, but it isn't limited to a single rule, because decisions are an interaction.
00:10:47
Speaker
The word emergence is in our title, and I want to underline that word. So there are activations that are related to benefits and costs. But there are also activations related to attention: what you happen to be looking at, what you did the last time you were in that room, what the person before you just did. Or you're looking at a menu and you see a photograph of something at the top, closer to your field of vision. All of these are producing activations.
00:11:16
Speaker
All of these are entering your system, and they're interacting with, gee, how much do I like that salad whose picture I saw? But now that salad maybe has more activation because the picture was front and center.
00:11:27
Speaker
And what did the person in front of me order? All of these things are interacting to produce decisions. So the value is not missing from the neural network perspective, but the neural network perspective is not hard-coding, not limiting itself to, this single calculation.
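As a toy contrast between the two views (our sketch, not a model from the book; the options, values, and context activations are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)
options = ["salad", "burger", "pasta"]
value = np.array([2.0, 3.0, 2.5])       # learned benefit-minus-cost

# The pure value calculus: always pick the maximum-value option.
print("rule-based choice:", options[int(np.argmax(value))])  # always burger

# The network view: value is one activation among many that interact.
salience = np.array([1.5, 0.0, 0.0])    # salad photo front and center
social   = np.array([0.0, 0.0, 1.0])    # the person before you got pasta
net_input = value + salience + social

# A softmax turns net inputs into graded choice probabilities.
p = np.exp(net_input) / np.exp(net_input).sum()
for option, prob in zip(options, p):
    print(f"{option}: {prob:.2f}")
print("sampled choice:", rng.choice(options, p=p))
```

Value still matters, but the context activations can tip the outcome, which is the interaction-driven picture being described here.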
00:11:45
Speaker
To me, these issues of rules in language or rules in decisions have a parallelism to them, which is that the human longing to understand things often reaches for rules. That's just how we approach understanding.
00:12:00
Speaker
Working with neural networks suggests that it's useful instead to think about systems from which these
Scaling Neural Networks and Human Brain Evolution
00:12:08
Speaker
things emerge. So, looking at scale: how do more neurons, more data, or more layers quantitatively change what these systems can do?
00:12:21
Speaker
This is a very interesting question that has been a focus of study both in the real nervous system and in AI systems. And the parallels, again, are very, very striking to me.
00:12:35
Speaker
The argument has been made in some circles that what makes the human mind different from animal minds is some special bit of circuitry, maybe one that, for example, can do a recursive computational operation.
00:12:51
Speaker
The alternative perspective is that the human brain has essentially scaled up the architecture of simpler mammals' brains. And we tend to keep increasing the parts of the brain that sit between the parts that are processing the raw sensory input.
00:13:11
Speaker
Whether we're dealing with vision or audition or, let's say, touch or movement, we keep scaling up the parts between them so that we can do more and more computation on top of them. There are more separate brain areas that are more interconnected, just like more layers in a neural network. And the work with neural networks has, I think, been the place where these benefits of scale can be studied very palpably, by the simple process of running a whole bunch of different sizes of networks, numbers of layers, and amounts of training data.
00:13:55
Speaker
It also gives the opportunity to explore the trade-offs between these things, and the idea that, as the complexity of the data grows, the complexity of the network that's going to be able to exploit that data has to grow.
00:14:09
Speaker
And what exactly is that relationship? These are all things that can be studied almost from an empirical and descriptive point of view: run the model with different amounts of data, different numbers of layers, different sizes of those vectors and matrices that I mentioned, and you can get these scaling laws.
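One commonly reported form of these empirical laws, our gloss following published LLM scaling-law work rather than anything specific in the book, writes the loss as a power law in model and data size:

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Here N is the number of parameters, D the amount of training data, E an irreducible loss floor, and A, B, alpha, and beta are constants fit to the measured runs.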
00:14:29
Speaker
What's interesting, though, is that I think we don't understand exactly why these laws hold. And that is part of what remains: part of the mystery of emergence, and the mystery of why this scaling can actually succeed in the way that it does.
Emergence: Ant Colonies and Neuronal Interactions
00:14:49
Speaker
I personally feel that, since we know there's this hundred-thousandfold difference in the amount of training data that machines require, there's some factor, a multiplier somewhere, that's way off in these scaling laws as we understand them so far. And that's an important place for further investigation and further exploration.
00:15:11
Speaker
But it's definitely true that, both in the brain and in our artificial systems, scale changes what can be accomplished, and it has led to things that nobody expected they would see.
00:15:25
Speaker
Now, speaking of scale and patterns, you go into one example in the book referring to an ant colony. Essentially: why does emergence matter so much for understanding cognition and AI? And why is this ant colony analogy so intuitive?
00:15:50
Speaker
Yeah, thank you for identifying that. That's one of my favorite examples, and for me it really captures the spirit of the book. So what is the example? If you have a train of ants going from their nest to, let's say, a food source, imagine they're going back and forth.
00:16:07
Speaker
It's a straight line, let's say, just to simplify. And what you do is put an obstacle midway, and now the ants have to decide: are they going to go left or right? And let's also imagine that you arrange the obstacle so that one way is the short way and one way is the long way.
00:16:25
Speaker
Now, this experiment has been done both in the lab and out in the wild, including by me in the wild. And what you find is that within a few minutes, the ants will go the short way.
00:16:36
Speaker
Initially, about half the ants go one way and half go the other. But in a few minutes, most of the ants are going the short way. Now, the question is why. I remember when I saw this, I asked my son, do you have a theory about this? And his theory was that maybe the queen ant can suss out which is the short way, and maybe the ants have a way to communicate it. It turns out that ants have no cognitive machinery for distance; they can do none of that. Individual ants are not terribly smart, but colonies of ants are extremely smart. So the question is, how are they doing it?
00:17:09
Speaker
Well, the way they're doing it is that ants lay pheromone trails, and pheromone trails on the short path get more concentrated than pheromone trails on the long path; we explain in the book how this happens.
00:17:21
Speaker
And so when an individual ant has to make a decision, it simply follows the greater concentration of pheromones, and that reinforces the decision. And lo and behold, most of the ants end up taking the short way.
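A toy simulation of that feedback loop (our sketch, not the book's model; trip times, deposit amounts, and the evaporation rate are invented parameters):

```python
import random

random.seed(0)
TRIP_TICKS = {"short": 10, "long": 20}   # round-trip travel times
pheromone = {"short": 1.0, "long": 1.0}  # paths start undifferentiated
EVAPORATION = 0.99

def choose_path():
    # An ant follows the greater concentration of pheromone,
    # probabilistically: no ant knows which path is shorter.
    total = pheromone["short"] + pheromone["long"]
    return "short" if random.random() < pheromone["short"] / total else "long"

# 100 ants cycle between nest and food, re-choosing a path each trip.
ants = [{"path": choose_path(), "left": 1} for _ in range(100)]

for tick in range(5000):
    for ant in ants:
        ant["left"] -= 1
        if ant["left"] == 0:
            pheromone[ant["path"]] += 1.0         # deposit on return
            ant["path"] = choose_path()           # head out again
            ant["left"] = TRIP_TICKS[ant["path"]]
    for path in pheromone:
        pheromone[path] *= EVAPORATION            # old trails fade

share = pheromone["short"] / sum(pheromone.values())
print(f"pheromone share on the short path: {share:.2f}")  # well above 0.5
```

Short-path ants complete round trips more often, so their trail is reinforced faster; evaporation erodes the rest, and the colony converges on the short path without any ant ever comparing distances.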
00:17:34
Speaker
Now, the reason this is so interesting is the ant colony has the intelligence to figure out what's the short way, what's the long way, even though an individual ant does not, right?
00:17:46
Speaker
No individual ant can solve this problem. So this is an example of emergence. Emergence is where a property is found in the whole system but not in any of the individual parts.
00:18:00
Speaker
So this is emergence. Now, hopefully your listeners are saying: okay, so in the brain, none of our neurons can by themselves do mathematics, and yet, interacting with each other, they can. So this emergence is happening there too. Ants are laying pheromone and following pheromone.
00:18:17
Speaker
Neurons are activating with these things we call action potentials that are bolts of electricity, bing, bing, bing, at various speeds, and are influencing other neurons to do the same.
00:18:28
Speaker
And from those interactions, our intelligence emerges. So a useful metaphor for our brains is a colony of ants: individuals interacting with each other, doing amazing cognitive feats that are possible in the aggregate but not in the individual.
Distributed Representations and Learning Errors
00:18:44
Speaker
Let's get into how the brain actually stores information and how that compares with AI. You talk about a couple of topics in the book here, like hallucinations and distributed representations, right?
00:18:59
Speaker
Can you explain distributed representations, and how does this approach enable similarity, categories, and taxonomies? So, a distributed representation is a pattern of activation over a population of neurons. In the brain, very physically, that's exactly what we mean by it. For example, when I'm looking at somebody's face, there are neurons in many, many regions of my brain being activated by this. And two very, very similar faces will actually produce quite similar patterns of activation over those populations of neurons.
00:19:39
Speaker
What's deeply important and interesting is that even if the size of the image changes radically, so that the actual neurons on the retina activated by it are very, very different,
00:19:55
Speaker
higher levels in the brain have sorted this out, so that a very similar pattern is produced in a few patches of neurons that are specifically involved in representing faces. So the way the brain solves the problem of invariantly recognizing an object, in spite of changes in its position in an image or its scale, is by figuring out how to map these very different input patterns down to very similar, overlapping patterns at some higher level of representation.
00:20:30
Speaker
But it's not like there's a neuron for Jay McClelland or Autumn. There's a different pattern for each of us. And that's the key idea. So two views of the very same person will produce two very, very similar patterns.
00:20:44
Speaker
Two views of two very different people will produce very distinct patterns. And those patterns are what we mean by distributed representation. Where the interesting power of this, but also its potential fallibility, comes in is that if we see many examples, each of which is unique, but which, when averaged, produce a pattern that we've never actually seen, it can be the case that this never-seen pattern seems like the one we've seen the most.
00:21:19
Speaker
This is a natural consequence of the way these patterns work, because if the overlap is sufficiently high, they tend to average with each other and create a sense of: oh, the average is the most familiar and typical thing, and that's what I'm going to say. That's the one that I'm sure I've seen.
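A minimal sketch of that averaging effect (our illustration, not a model from the book): store noisy variants of a pattern in Hebbian-style connection weights, and the never-seen average ends up more "familiar" than anything actually seen.

```python
import numpy as np

rng = np.random.default_rng(0)
prototype = np.sign(rng.standard_normal(200))   # the never-seen average

def exemplar():
    # Each seen example is the prototype with 20% of elements flipped.
    flips = rng.random(200) < 0.2
    return np.where(flips, -prototype, prototype)

exemplars = [exemplar() for _ in range(50)]

# Knowledge is stored in the connections (a sum of outer products),
# not as copies of the individual patterns.
W = sum(np.outer(e, e) for e in exemplars) / len(exemplars)

def familiarity(x):
    return x @ W @ x / len(x)   # how strongly the weights "resonate"

print("unseen prototype:", round(familiarity(prototype), 1))
print("seen exemplars  :",
      round(float(np.mean([familiarity(e) for e in exemplars])), 1))
# The prototype scores far higher, despite never having been presented.
```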
00:21:40
Speaker
So that's one kind of hallucination that this kind of system can give rise to. In general, these models, because they don't store the individual patterns as such, but rather store changes in the connection weights between the units that participate in them, tend to have an averaging effect, so that eventually they're picking up on commonalities more strongly than differences. And those tendencies are what essentially correspond, on the one hand, to things that are rule-like,
00:22:15
Speaker
but on the other hand can sometimes be, in fact, errors. So the over-regularization tendency we all have, to sometimes think that the past tense of an exception word is the regular or more typical kind, is a consequence of this process.
00:22:32
Speaker
The key thing that I'm hoping your listeners take away on storage is that in a neural network, memory and knowledge are represented in the connections. So if I say green and you say grass, it's because neurons that respond to the sound of green are influencing neurons that respond to the word grass.
Reconstructive Memory: Pros and Cons
00:22:51
Speaker
And so that knowledge is in the connections.
00:22:53
Speaker
And a distributed representation is a representation of something that is produced by these connections. Here's a really lovely story that happened to me yesterday. There's a student in my class who sits in the front row and always asks questions. And I look at him, this happened yesterday, and I say, something's off.
00:23:12
Speaker
And I can't figure out what's off. I'm looking at him and he's smiling at me. And it turns out that it's his twin, not him. I'm reminded of that story by what Jay is saying. The distributed representation that my connections produced of his face was very similar to the one his twin produced, but not identical.
00:23:32
Speaker
And the discomfort I was feeling was related to that difference and that commonality. The contrast to distributed representation is localist representation, where you have one neuron for the twin and another neuron for the original student.
00:23:44
Speaker
But that means those neurons are as different from each other as my neuron for the twin would be from my neuron for you, Autumn. So it's much more efficient to share sub-patterns. I've never had a student swap with their twin in any of my classes, so that's a new one for me. But essentially, memory isn't like a video recording. What are some of the advantages and disadvantages of these reconstructive systems? Yeah. So, Jay likes to point to the playwright
00:24:15
Speaker
Harold Pinter, who talked about the mistiness of memory. I love that expression. And in one of our chapters, we talk about a TV show in which a story is told from two perspectives. It's amazing how different one perspective is from the other, even though neither person wants to be lying or intends to be lying.
00:24:37
Speaker
So the advantage of this malleability, this mistiness, is that it allows for the generalization that Jay is talking about. If we have seen many instances that are one way, then we can extend and say: oh, this new thing that I'm seeing is probably like that.
00:24:56
Speaker
And without this ability to form concepts, we wouldn't be able to make our way in the world. But it's not a recording. The fact that it's not a recording that's objectively the same for all people means that we all construct memories of the same events differently, because the construction depends on who we are, and who we are affects the connections that we make in our brains.
00:25:22
Speaker
That is a facet of our memory. But this ability to generalize from things that resemble each other but are not the same, to have knowledge about them, to be able to predict their properties, is a huge, useful property of our knowledge systems. So
Mathematics: Invented or Discovered?
00:25:41
Speaker
if we're thinking about higher-level thinking, whether that's logic, math, or how we build models of our world: is formal logic an emergent construction built on top of experience rather than a built-in module?
00:26:00
Speaker
That's what we like to think. The way I like to put it is that over hundreds of thousands of years, our ancestors began to diverge from other animals, other apes, and began to develop language, bigger brains, and modes of communication that allowed them to communicate at a distance with language and so on.
00:26:22
Speaker
And the languages of the world that we use today to speak with each other arose naturally in that context, as diverse groups broke off, spread around the world, and populated it. But it wasn't until human beings settled down, had leisure time, had goods and services that they wanted to document and keep records of, that people began to ask themselves questions about how we should, in fact, optimally record, keep track of, and make inferences about things like the quantity of objects that we have.
00:27:03
Speaker
Like, if one person has three sheep and another person has four sheep, how many sheep is that altogether? It turns out that this concept of number, just the basic idea of exact quantification of how many discrete items like sheep you have, is something that probably didn't come into existence until a few tens of thousands of years ago, and maybe didn't really become systematic until people actually had the ability to notate numbers
00:27:38
Speaker
and form records that they could keep, inspect, think about, and share with each other. So the invention of notation systems created externally constructed resources that we can then share with each other, teach each other, think about, refine, and develop further.
00:27:59
Speaker
And those culturally constructed things then become themselves the emergent objects that constrain the way we think and allow us to go way beyond what our ancestors could do without those systems; we feed them back into ourselves and structure the way we think with them. I've been thinking quite a lot about the basic notion of exact quantity, which, very strikingly, some cultures completely lack. They have no words for even exactly one object, and they apparently don't even
00:28:36
Speaker
think about whether there's exactly the same number of items in two sets. It's just not something that ever comes up in their culture; it has no meaning for them, and they have no system for it.
00:28:48
Speaker
So our human cognitive abilities, as we experience them today, depend on this substrate, this biological process that we've been talking about throughout this conversation, together with these invented systems that mathematicians and philosophers developed over time, which then become the tools that allow us to extend our abilities, keeping the flexibility of that parallel, distributed, approximative kind of processing,
00:29:22
Speaker
with its potentially-hallucinating-from-time-to-time kind of characteristics, while also being able to cross-check it against some system of ground truth that is, in fact, a completely constructed thing outside of our nature as
AI Learning Efficiency vs Brain Capabilities
00:29:36
Speaker
humans. So with that, the big question that I have is math.
00:29:40
Speaker
Was it invented, discovered, or both? And how does this framework resolve that classic debate? Yeah, so this is a question that we both think a lot about.
00:29:54
Speaker
I think what Jay just said can be summarized as: we invented the instruments of math when we were counting sheep. If we were living on Jupiter and there were no identifiable objects, everything melting into everything else,
00:30:10
Speaker
one can imagine there would be no need to invent the idea of number. But when we want to count sheep, we've got to invent number. Or when we want to divide a field, we've got to have some way of measuring equality of area.
00:30:23
Speaker
So our culture nudges us to develop the instruments of mathematics. And we humans are capable of carrying questions from one context into other contexts.
00:30:36
Speaker
A classic example of this: you have a set of five things, and it's equal to another set of five things. Five dogs and five cats, that's the same number. And Cantor, a mathematician, starts applying that question to infinite sets. So we do have the ability to start with these basic instruments and then extend them and ask questions about big patterns in the universe.
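To spell out Cantor's move in the usual notation (our gloss, not the book's):

```latex
|A| = |B| \iff \text{there is a bijection } f : A \to B,
\qquad \text{e.g. } f(n) = 2n \text{ shows } |\{2, 4, 6, \dots\}| = |\mathbb{N}|.
```

Pairing every natural number with its double shows the even numbers are "the same size" as all the naturals, even though they form a proper subset; the sheep-counting instrument, extended, yields a genuinely surprising discovery.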
00:31:03
Speaker
Now, the patterns in the universe are there, right? If we ask a question about the periodicity of the planets, we will derive that they move in ellipses. That's not invented by humans; that's a pattern that exists out there.
00:31:17
Speaker
So in that sense, it's discovered. The fact that planets go around in elliptical orbits is a discovery, but that discovery is made possible by the invention of these instruments of mathematics.
00:31:29
Speaker
So we invent things to solve problems in front of us. We extend them, because that's what we can do: we can apply instruments from one domain to another domain. In the process, we can examine regularities using mathematical language, because it seems that this language is extremely well suited to documenting regularities.
00:31:50
Speaker
And it gets to the point where we can just follow the regularity. We can just follow the beauty of an equation and use that to make a prediction. But ultimately, it starts with an invention to solve a concrete problem.
00:32:04
Speaker
And it discovers patterns in the universe. Now, if AI is going to discover these patterns too: what tricks does the brain use to learn more efficiently than AI, which is still struggling here?
00:32:19
Speaker
There's a clue I can perhaps offer, recently introduced by neuroscientists, that has made me think we'll be learning more about this soon. When we have thought about learning in the brain
00:32:37
Speaker
up until very recently, there's been this notion from Donald Hebb, a very early neural-network kind of thinker, like us, who wanted to ask: okay, how do our intelligent abilities arise from these neurons?
00:32:52
Speaker
He had this idea about learning. He said that when one neuron fires and participates in firing another one, that is, makes it have an action potential,
00:33:03
Speaker
then the connection between the two of them will change. This is something that neuroscientists have studied, and it is the fundamental idea in neural networks: little adjustments occur in connections based on the activation of the two neurons.
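A minimal sketch of Hebb's rule (our illustration; the learning rate and the two firing patterns are invented):

```python
import numpy as np

eta = 0.1                                   # learning rate
patterns = [np.array([1., 1., 0., 0.]),     # units 0 and 1 fire together
            np.array([0., 0., 1., 1.])]     # units 2 and 3 fire together
W = np.zeros((4, 4))

# Hebb's rule: strengthen the connection between any pair of units
# that are active at the same time ("fire together, wire together").
for _ in range(50):
    for x in patterns:
        W += eta * np.outer(x, x)
np.fill_diagonal(W, 0.0)                    # ignore self-connections

# The knowledge now lives in the connections: a partial cue recalls
# its partner, the way "green" drives "grass" earlier in this chat.
cue = np.array([1., 0., 0., 0.])            # activate unit 0 alone
print(W @ cue)                              # unit 1 gets strong input
```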
00:33:21
Speaker
The recent discovery is that sometimes neurons do something very different from just having an action potential. An action potential is a little event that lasts for a millisecond or so, then dies out and becomes possible again five or ten milliseconds later.
00:33:43
Speaker
But these other events last for a third to two-thirds of a second. They're meta-events within neurons, which make it possible for an individual neuron to
00:33:56
Speaker
completely remap its connections and adjust how it learns about other neurons that fired within a much larger time window than we previously imagined.
Goal-Directed AI and Evolutionary Processes
00:34:09
Speaker
So instead of events having to happen within 10 or so milliseconds of each other, one neuron firing and then the other firing within a very, very short time window,
00:34:19
Speaker
it now turns out that this time window can be on the order of two or three seconds, and you can still build an association between activities. This is something we haven't figured out how to integrate into models of how learning occurs, beyond the level of what goes on in the rodents in which it's been discovered. It's something that I think is going to turn out to
00:34:42
Speaker
teach us something very important that we will one day be able to use to help us understand both how brains learn more efficiently and how our AI systems might be able to do the same.
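One common way modelers capture a seconds-long association window is an eligibility trace: recent activity leaves a slowly decaying tag that a later event can still convert into a weight change. This is a hedged sketch of that general idea, not the specific mechanism described here, and the time constants are invented:

```python
import math

def remaining_trace(gap_seconds, tau):
    # A spike at time 0 leaves a trace that decays exponentially;
    # an event gap_seconds later can only learn from what's left.
    return math.exp(-gap_seconds / tau)

TAU_FAST = 0.02   # classic ~20 ms Hebbian coincidence window
TAU_SLOW = 1.0    # a seconds-long, plateau-like window (assumed)

for gap in (0.01, 0.5, 2.0):
    print(f"gap {gap:>4}s  "
          f"fast window: {remaining_trace(gap, TAU_FAST):.3f}  "
          f"slow window: {remaining_trace(gap, TAU_SLOW):.3f}")
# With the 20 ms trace, a 2 s gap leaves essentially nothing to
# associate; with the seconds-long trace, plenty remains.
```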
00:34:53
Speaker
So, as we look at making our AI systems run more effectively and efficiently: if embodiment is key, what is the minimum body that an AI would need to become goal-directed, essentially, without running amok?
00:35:12
Speaker
Well, the learning algorithm that Jay was describing is crucially dependent on neuromodulators. These potentials, these longer-term events, occur because of neuromodulators.
00:35:25
Speaker
Now, when are neuromodulators released? Well, it seems like evolution has really honed the signals for when an event is worth a neuromodulator release for the survival of the organism.
00:35:37
Speaker
So literally our entire evolutionary heritage has gone into when these neuromodulators are secreted and what their impact on our learning process is. Now, it's very possible that we'll be able to find shortcuts for AI systems, hacks.
00:35:52
Speaker
We don't have to replicate the entire evolutionary process. But I will say that there's a lot of complexity here. I don't have a good sense of whether the hacks are going to get 80% of the answer or 20% of the answer.
00:36:08
Speaker
To me, that's an empirical question. But I love that you used the word embodied, because I am increasingly thinking that the network is the entire body.
00:36:20
Speaker
Everything in the body is influencing everything else. There was an article I was reading about the roughly 40,000 neurons in the heart, which are impacted differently by heartbreak than neurons in the brain. This is
00:36:34
Speaker
amusing to think about. But my point is that the neural network is, in some sense, the network of the body. And we are evolutionarily formed creatures. Can we find useful shortcuts?
00:36:47
Speaker
Perhaps. To me, TBD. I've noticed that you take an agnostic stance on consciousness in the book. And if intelligence doesn't require consciousness, what functional role might consciousness actually serve?
Consciousness in AI: Possible or Not?
00:37:02
Speaker
Could AI ever become conscious, and does it matter? I know it's a loaded question. In my entire career, I have shied away from thinking about consciousness, because it doesn't lend itself naturally to mechanistic study. That is to say, the experience of exactly how that delicious chocolate ice cream tasted, or anything like that: that's called the hard problem of consciousness, and it's extremely difficult to make progress on.
00:37:35
Speaker
The reason that I end up feeling agnostic, though, is that there is this sense one has that the qualities of our experience are themselves things that motivate us. That deliciousness that I experience is something I work for.
00:37:54
Speaker
The sense of chagrin, of embarrassment, that I can have if I make a stupid mathematical mistake is something I work to avoid. I feel how embarrassing it was for me to have made that mistake, and that's a signal for me to pay attention and not let it happen again. So those experiential elements tend to co-occur with things that are extremely important in making the adjustments that allow us to work toward the goals that Gaurav was talking about before.
00:38:26
Speaker
And when those goals become really important emergent goals, like goals of social justice or equity, they tend to lose the concreteness of the more physiologically grounded ones they were built around. So I feel that, to the extent that these experiences accompany things that are evolutionarily built into us, they are certainly accompaniments of things that are functionally significant. So, I agree with Jay that the hard problem is
00:39:08
Speaker
maybe not the first point of attack, just because it is a really hard problem. But I think it is possible, as Jay started to allude, to ask functionally: what is the function of consciousness? And here one can start making some progress. So we were talking about, for example, drinking when thirsty.
00:39:25
Speaker
That satisfies some cells; it brings them back to homeostasis. But that process, we're increasingly finding out, takes a lot of time.
00:39:36
Speaker
The cells coming back to homeostasis is not instant. So maybe you need a feeling that can help guide that, oh, this thing will restore you: a feeling that accompanies the cells coming back to homeostasis.
00:39:49
Speaker
I'm not saying that's the purpose of consciousness. I'm giving that as an example of how one can think, from a functional perspective, about what roles conscious experience could play in guiding our system.
Understanding Humans and AI for Empathy and Morality
00:40:00
Speaker
To me, what's amazing in our book is how far we can get without relying on these things. Because you'd think that consciousness is the mind, right? Freud was wrong about a few things, but what he was right about, which for me was as earth-shattering as the Copernican discovery, was that the mind is not just conscious thought.
00:40:24
Speaker
He intuited this from talking to people. And what our framework, this general framework, shows is that you can talk about perception, action, language, memory, even mathematics to some extent, without relying on the existence of conscious thought.
00:40:45
Speaker
I think it's really important to take a functional perspective. And I think it's really important to be struck by how far we can get without consciousness in the system. One question that I have for you both: let's step into the here and now of AI
00:41:02
Speaker
risks. We know that there are bottlenecks, and we ask how to use technology responsibly. So what's more worrying for people right now: AI with its own goals, or humans misusing AI for deepfakes and persuasion?
00:41:20
Speaker
You're raising very important questions. And the perspective I have on this is that it's the humans behind the scenes that are the crucial element to worry about, at least in the immediate term.
00:41:36
Speaker
For two reasons. The first, related to what you just said: people can decide they want to use AI to do things that are nefarious, misleading, and abusive of people's credulity, and that can be extremely harmful.
00:41:53
Speaker
So, just as any technology could be used for good or for ill, people can be the bad actors in this setting, and deliberately so.
00:42:04
Speaker
This is something we need to figure out as a society: how to manage and regulate it. At the same time, the good uses of AI are also, in large measure, the consequences of the goals of their creators. Those goals, even among those who think of themselves as the most responsible kinds of actors in the world, could go wrong, could go in ways that are unintended. Many policies have been generated by governments that thought they were going to turn out to be great policies, and they turned out to have fatal flaws, to be devastatingly bad, and needed to be corrected, or still need to be corrected, in our societies. So what I think personally is that in both of these contexts, we need to figure out ways of organizing human oversight so as to
00:42:59
Speaker
ensure that, when actors are deliberately acting with nefarious purposes, or even when they're not trying to, we still have oversight and the opportunity to correct and redirect, and that we keep ourselves responsible for the outcomes.
00:43:26
Speaker
So what can everyday people take away from this work? Whether that's habits you may have changed because of understanding emergence, or whether seeing humans as processes has changed the way you treat others?
00:43:45
Speaker
Yeah. What I hope people take away from this book is a better understanding of who they are. And for me, that understanding by itself is an end in itself. It's an eternal question: who are we? What kinds of things are we?
00:43:59
Speaker
And the answer to that eternal question reveals some things that we should consider in living better lives. And Autumn, I take you to be asking: okay, great, what can I take away from this view of the mind for living a better life?
00:44:13
Speaker
And what this view of the mind really exposes is that there isn't a central entity, a directing controller inside us, that's either good or bad, wicked or idealistic.
00:44:28
Speaker
We are processes. We are processes. There's input from the world. It comes in. We have existing connections due to our biology. We have connections that change with our experience.
00:44:40
Speaker
These connections influence how input from the world cascades in our neural network and produces action. And we're a process, like water going down a hill is a process.
00:44:52
Speaker
We are a process of this earth, of this universe, no less than the trees and the stars, but also no more. And the fact that we are processes is a fundamentally different way of looking at ourselves and other people.
00:45:08
Speaker
Because when we disagree, there are two possible attributions. We could say: I'm disagreeing with this person because their inner controller, their inner machinery, is flawed, and I reject them and want nothing to do with them. Or one could say: well, they're a process, just like I'm a process, with slightly different connections due to accidents of experience and biology.
00:45:29
Speaker
And they're coming to this conclusion. So what is their context? Where are they coming from? If you remove this idea that people have some inherent goodness or wickedness in them, you open yourself to understanding them, and yourself, as a process.
00:45:45
Speaker
And for me, that's a breakthrough, because it gives you a superpower. And the superpower is understanding and patience. We are directed by our biology to act in certain ways.
00:45:56
Speaker
And sometimes we disappoint each other. And our biological reactions to that disappointment may be anger, may be frustration. But if we can give ourselves that time and that perspective to say this other person is doing this as an unfolding process and have that gap,
00:46:12
Speaker
between our understanding or our experience of what they're doing and our reaction to it, so that we can pause and say, they're a process just like I'm a process, that helps us be kinder and more patient.
00:46:24
Speaker
And for me, that's the source of morality, if we can get there. And then what we have to do is what Jay was talking about with respect to machines. We need safeguards for our machines, and we also need safeguards for each other, because sometimes processes go wrong in ways that would harm other people. So we need safeguards. This is not an invitation to accept everything; no, we need safeguards in order to be a functioning society.
Optimizing Social Systems and Education
00:46:48
Speaker
And the neural network view actually underlines the need for those safeguards, because those safeguards are context. That's what I hope your listeners take away. I myself struggle with a part of what I think the ultimate goal could be, beyond what Gaurav has said to a certain extent, which is not only to have safeguards so that we can keep people from falling off the edge of the cliff, but to have ways of optimizing our education systems and our ways of thinking about each other in daily life that
00:47:23
Speaker
lead us to find the more constructive path to begin with, and not be in a position where we need to bring in the guardrails all the time. So it's more like trying to figure out the best way to ensure that a child is raised with the right
00:47:45
Speaker
attitudes, perspectives, and inclinations to live together with all the other children of the world who will be the leaders of the future, as opposed to just building the guardrails themselves. So I feel that with understanding could ultimately come better social organizations, better institutions, and better processes for orchestrating our interactions and our learning with each other. But that goes beyond mere understanding, to social organization, in ways that are beyond
Emergence of Intelligence in Humans and AI
00:48:22
Speaker
my control. Thank you both for coming on the show. What originally started with ants dodging obstacles brings us right back to us: our brains, our memories, our sense of self,
00:48:34
Speaker
and the strange parallel story of machines learning to think. Now, maybe the real lesson is that intelligence, whether in carbon or silicon, isn't magic at all. It's patterns, connections, and emergence.
00:48:49
Speaker
And yet somehow knowing that doesn't make it any less extraordinary. Until next time, keep finding wonder in the rules beneath the surface.
Curiosity Box Promotion
00:49:03
Speaker
And now a message from our sponsors. What is the Curiosity Box? It's the world's first subscription for thinkers created by Vsauce, the science network with over 23 million fans.
00:49:15
Speaker
Each season brings a new adventure in science, puzzles, and exploration, packed into a box designed to ignite your curiosity. The new box?
00:49:25
Speaker
A showcase of imagination and discovery. Try the 4D tape measure, part ruler, part pendulum, that measures both space and time. Play with Mr. Wiggles' Giggle Jiggle siphons, silly-looking straws that secretly reveal the physics of fluids, and snap-on slap bracelets that double as rulers.
00:49:46
Speaker
cylinder measuring tools, and even tongue-in-cheek time-traveling credentials. Every collection is unique, filled with limited items you won't see again. Subscriptions start at $75 quarterly, or annually for even more savings.
00:50:05
Speaker
Available across the US, Canada, UK, Europe, and beyond. Check out curiositybox.com/breakingmath and get 25% off your first box with the code BREAKMATH25. Curiosity Box.
00:50:22
Speaker
Because discovery should never stop.