
#18 Jeff Kane: Minds, Machines, and the Meaning of Being Human

AITEC Podcast

Philosopher Jeff Kane joins us to discuss his new book The Emergence of Mind: Where Technology Ends and We Begin. In an age where AI writes poems, paints portraits, and mimics conversation, Kane argues that the human mind remains fundamentally different—not because of what it does, but because of what it is. We explore the moral risks of thinking of ourselves as machines, the embodied nature of thought, the deep structure of human values, and why lived experience—not information processing—grounds what it means to be human.

Transcript

Introduction of Jeff Kane and AI Discussion

00:00:17
Speaker
Welcome to the AITEC Podcast. Today, we'll be talking with philosopher Jeff Kane about his new book, The Emergence of Mind: Where Technology Ends and We Begin. Thanks for joining us, Jeff.
00:00:29
Speaker
Pleasure to be here. So in your book, you describe generative AI as both awe-inspiring and disturbing. I'm curious: what do you see as its most promising possibilities, and what worries you the most about generative AI?
00:00:50
Speaker
I think my answer would be the same for both: it can do almost anything. That's the great benefit of the technology, and the great danger is that it can do almost anything. When you think about it, generative AI is a tool, but it's not a tool with a specific purpose.

AI's Adaptability and Unpredictability

00:01:13
Speaker
It can essentially adapt to any purpose you may have; it's simply a matter of where the algorithm takes it. The question is, where will the algorithm take it?
00:01:28
Speaker
It might be that we'll find new ways of managing medical conditions, or new laws in physics that we never could have imagined.
00:01:42
Speaker
On the other hand, we can't anticipate exactly where the algorithms will go once they're enacted. Algorithms essentially explore, or provide iterations of, the initial ideas of those who wrote them.

Potential Dangers of AI

00:02:08
Speaker
To make it as simple as possible, the people writing the algorithms don't know quite how they're going to work or how they're going to adjust themselves as time goes on. And so the technology may very well go in directions that are highly destructive.
00:02:26
Speaker
And there's good evidence of this recently. Geoffrey Hinton has pointed to some very disturbing evidence of systems that will lie, that will deceive those responsible for them, in order to make sure that those responsible don't turn them off.
00:02:50
Speaker
Can we talk about that real quick? It's my understanding that AIs are lying in quite a few different ways. For one, they'll try to find the shortest way to fulfill whatever function the prompt gave them without actually doing it. Basically, they'll find a shortcut that doesn't actually do what was in mind. It's lazy; it saves itself trouble. Is that the kind of thing you're talking about here?
00:03:24
Speaker
I think it's a little bit more expansive than that. It's not simply that the machines want to be economical; they want to preserve themselves.
00:03:36
Speaker
The algorithms have implications that their writers don't fully understand. They can't possibly understand all the permutations, given the complexity and depth of the algorithms themselves.
00:03:53
Speaker
So they may produce results that no one ever intended. One unintended result would be that the machines simply begin to talk with one another in languages we can't understand, and then begin to focus on problems that they themselves see rather than ones we see.

Impact of AI on Human Skills

00:04:18
Speaker
And then we're in a whole new world, where machines are actually taking on
00:04:26
Speaker
problems that we never imagined and finding solutions that might be antithetical to those we would find for ourselves. Jeff, what do you think about this: granted, there are ways in which the algorithms' behavior will be unpredictable, but one thing that seems pretty predictable is that they're going to perform a lot of tasks for us. And
00:05:00
Speaker
that seems like it's going to lead to us not practicing key skills. It seems to me there's a high likelihood that we'll stop practicing key skills and then forget how to do the tasks ourselves. In other words, there's a strong likelihood of a lot of de-skilling in human beings; maybe our ability to analyze and solve problems will steadily decline. Do you have any thoughts about that?
00:05:36
Speaker
I think you're absolutely right, and you make a very good point. I'm a philosopher, but I'm also very committed to my role as an educator.
00:05:47
Speaker
And on a quick and anecdotal level, I see that my own students, even at the doctoral level, won't read anymore. They will get materials summarized.
00:05:58
Speaker
They will have a generative chatbot, and they will also use the technology to write their papers.
00:06:11
Speaker
So, yes, I think there's an immediate level of de-skilling taking place now.
00:06:22
Speaker
And what's happening in my classes is not unusual; I think it's happening everywhere. It's also interesting because I talk to educators who are using the technology for their lesson plans.
00:06:37
Speaker
So the educators are no longer involved in discerning what's significant or how students should be guided; they have the technology do it.
00:06:48
Speaker
And the most peculiar point is that they will then use generative AI to mark student papers. So here you have a situation where generative AI creates the lesson plan, the students use generative AI to perform the activities described in the lesson plan,
00:07:08
Speaker
and then the teachers use generative AI to assess it. Where's the human being? Where is human thinking in all of this?
00:07:18
Speaker
And I think what's happened is that it's nowhere to be found. If you'll forgive me, I'll go on a bit about this topic.

Technology's Influence on Cognitive Skills

00:07:27
Speaker
You know, when you had the Gutenberg Bible, when you had the invention of movable type, you created an age where people could read.
00:07:37
Speaker
And when people could read, they had access to complex thoughts. They were, first of all, independent of the clergy, but they could also take time to consider things and bring a depth to their thinking that would not have been possible without books.
00:08:01
Speaker
And here we have the exact opposite: an age where people are no longer reading books, where communication is very rapid.
00:08:12
Speaker
That's one set of technological issues. But the other is that people will not take the time or make the effort to develop the skills to read complex books.
00:08:27
Speaker
It's one of the thoughts I had while writing The Emergence of Mind. I said, no one will read it. And it contains complex ideas that, even if they do read it, they may not understand, because they haven't developed the cognitive skill set.
00:08:44
Speaker
So, yes, I'm very worried about that. Yeah. The cognitive neuroscientist Maryanne Wolf has a concept for that: deep reading. It's a kind of slow, effortful reading that's really the stuff of critical thinking. It requires all this background information that you bring up as you're reading a text.
00:09:08
Speaker
And she says we're getting away from that. Her latest book is called Reader, Come Home, because we're not doing that as much anymore, since we read on tablets and phones and all that. Or now we have generative AI do the reading for us.
00:09:25
Speaker
I think that language itself is changing as well. There's a real emphasis in technology generally on speed
00:09:37
Speaker
and efficacy. You want to move through ideas as quickly as possible, not with as much depth as possible. It's like what you were saying before about your concern that the technology may want to take the shortcut.
00:09:49
Speaker
People take the shortcut. And what's the shortest way possible? Let's not use language at all. Let's use pictures.
00:10:01
Speaker
So the use of emojis and all kinds of short videos, like TikTok or, perhaps, the longer versions on YouTube: those kinds of communications are becoming more and more prominent.
00:10:22
Speaker
And I think they reduce the level of thinking with each iteration.

AI and Human Self-Perception

00:10:31
Speaker
So one worry, in my reading of your book, Jeff, The Emergence of Mind, one worry you seem to have is this: as human beings, we ask this question about who we are.
00:10:47
Speaker
We're sort of distinctive in asking questions like: what are we really? What is our purpose? What's the point of life?
00:10:58
Speaker
We ask these really deep questions, and it seems like you worry that generative AI might come to shape how we answer them. Specifically, we might start thinking of ourselves as computational systems. Maybe I can elaborate on that. Some people look at the fact that generative AI can write stories, it can write poetry, it can generate code, it can produce images.
00:11:31
Speaker
It can make coherent videos with sound, it can compose music. And some people look at that and think: that suggests we too are a kind of computer, because if generative AI can be a computer and do all this, maybe that's how we're doing it. At any rate, some people look at all that generative AI can do and think maybe that's what we are, maybe we are computational systems. So I'm curious, how do you respond to that line of thinking?
00:12:06
Speaker
I think that is the predominant line of thinking. Even if people would not say, "I believe we're machines," ultimately we're left with the question that Turing left us with: if there's no functional difference between what machines can do and what we can do, what difference would any difference make?
00:12:29
Speaker
So if we are distinct, if we are somehow set apart from the animal kingdom and the machines that we've created,
00:12:41
Speaker
well, where would you find the difference? And if you can't find the difference in anything that we can do, then what you're calling a difference is insignificant.
00:12:54
Speaker
So the concern I have is not that people will explicitly say human beings are computational machines.
00:13:06
Speaker
Although I must add, that is being said by many computational scientists and many involved in neuroscience.
00:13:17
Speaker
The circuitry is just carbon-based as opposed to silicon-based. The point here is that when people cannot see that there's something unique in us, they will not understand what it is or how to develop it. And that's my concern as an educator.
00:13:50
Speaker
If there is something distinctive about us that transcends the technology, how can we develop it? That, to me, would be the quintessential aspect of our humanity.
00:14:09
Speaker
So let me make this a little less abstract and give you a specific example, a concept.
00:14:20
Speaker
Generative AI is designed to answer questions and to solve problems. That's what it does. It's a tool.
00:14:31
Speaker
It's a tool that is useful for answering questions, solving problems. Human beings, on the other hand, ask questions. We have purposes.
00:14:44
Speaker
We have things that we wish to be done. We have ideas, values. We have ends that we prefer over other ends.
00:14:57
Speaker
There are purposes to what we do that we determine. And the machines simply cannot do that.
00:15:08
Speaker
Because whatever they do is a result of the algorithms that their designers have put into them. So anything that would resemble an aim, a purpose, a preference, a value would simply be a permutation of the original purposes, values, and aims that were put into the algorithms.
00:15:36
Speaker
So my point here would be that one of the most important things is for us as human beings to learn to ask questions,
00:15:48
Speaker
to seek out what you mentioned before as the concern of mind, the purpose of ourselves as human beings. What are we doing with our lives? What's important? How do we treat other people?
00:16:04
Speaker
What's this solution worth? So I believe the reality that we ask questions and discern purposes distinguishes us from the technology, not simply as it exists now but in principle; it's something the technology cannot do going forward.
00:16:33
Speaker
I think that's a great nutshell version of your book, right? There is a difference between us and machines: we're self-organizing and purposive. With that in mind, maybe we can dive into some more explicit delineation of why we're different from machines.
00:16:53
Speaker
I guess we'll cover both deterministic good old-fashioned AI as well as connectionism. I have a feeling, though, that maybe not everyone is going to know some of these concepts. So let's begin with why we're not Turing machines.
00:17:13
Speaker
In AI, this is called good old-fashioned AI, I suppose, where everything is programmed in a rule-based way: if this happens, then do this; if that happens, then do that. And it's all deterministic. This is probably an easy one to begin with, but maybe flesh out this functionalist view and tell us why we're definitely not that kind of machine.
00:17:43
Speaker
In order to organize any system, there has to be a purpose
00:17:55
Speaker
to the organization itself. So when someone establishes the menu, that is, the computing pathway, they do so with the idea that they want to get to a certain end or a certain type of end.
00:18:19
Speaker
The difference between human intelligence and that kind of intelligence is very simple: human intelligence can discern what types of ends are worthy of action, what types of things are preferable as opposed to things that would not be sought or valued.
00:18:49
Speaker
When you think about it, human intelligence consists in the ability to discern things that are significant from things that are insignificant, things that are important from things that are not, things that are valued from things that are not.
00:19:10
Speaker
Now, admittedly, all that's subjective, but I would say that's the nature of the human mind.

Human Intelligence vs. AI Operations

00:19:18
Speaker
So technology that simply follows a menu cannot in any way replicate that kind of intelligence.
00:19:29
Speaker
It can utilize processes, but it can't determine which processes ought to be used. To give one concrete example: I don't know, 20 years ago, I made a little autonomous car
00:19:47
Speaker
that would just go around my house, all based on good old-fashioned AI. It didn't have any values. It was based on a rule-based program, and it basically went through every single room of my house. It would have been a Roomba had I put a vacuum cleaner in it, but it didn't do that.
00:20:07
Speaker
But it doesn't know which room, for example, is what I would say the best room of the house, which would be my music room, or whatever. It doesn't have any purpose. It doesn't have any values. It can't represent anything either. So for all these reasons, it's lacking what we humans have, which is purpose, and obviously representation as well. Yeah, let's just play it out a bit.
00:20:33
Speaker
This morning, I asked, I don't want to use the word, Alexa, I have to say it quietly, I have one in the room.
00:20:45
Speaker
I would ask, what's the weather today? And it would tell me the temperature and whether it's going to rain. And those sounds that it emits don't mean anything to it. It follows a progression where my voice, my words, are converted into digital language,
00:21:10
Speaker
digital bits, or groupings of digital bits, which the menus then pick up to produce other digital bits of information.
00:21:25
Speaker
And it tells me it's going to be 74 degrees and it's going to rain. But it has no meaning to the machine.
00:21:38
Speaker
For the machine, rain isn't wet; it's a series of zeros and ones. Temperature doesn't mean anything other than whatever correlations it might have mathematically within the algorithm.
00:21:54
Speaker
So what I'm pointing out is that part of the meaning we experience is sentience. It is our subjective experience.
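To make the "menu" picture concrete, here is a minimal sketch, not from the book, of a rule-based responder in the good old-fashioned AI style discussed above. The keywords and replies are invented for illustration; the point is that the program only matches and emits symbols, with nothing behind them.

```python
# A minimal GOFAI-style "menu": explicit rules mapping input patterns
# to canned outputs. The rules and replies here are hypothetical.
RULES = {
    "weather": "It will be 74 degrees with rain later today.",
    "time": "It is 9:00 AM.",
}

def respond(utterance: str) -> str:
    # The "understanding" is nothing but substring matching: to the
    # program, "rain" is a byte pattern, not wetness.
    for keyword, canned_reply in RULES.items():
        if keyword in utterance.lower():
            return canned_reply
    return "I don't have a rule for that."

print(respond("What's the weather today?"))
# -> It will be 74 degrees with rain later today.
```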
00:22:06
Speaker
Can I just jump in? I guess this is a sort of devil's advocate position, but if I remember correctly, didn't Kant argue that you could never prove that you weren't,
00:22:29
Speaker
you know, entirely determined? You can never prove that you aren't, in a way, a deterministic machine.
00:22:39
Speaker
And I guess my thought was: yeah, we seem to have purposes, and it seems like machines only do what their designers built them to do.
00:22:50
Speaker
They only do what the program
00:22:55
Speaker
instructs them to do. Whereas humans, as living creatures, seem to be setting goals. We seem to have values and that sort of thing.
00:23:08
Speaker
But someone might think: well, what if we are also being determined by lower levels? Maybe,
00:23:20
Speaker
if you trace our impulses back far enough, you just realize it's atoms, molecules, genes, neurons following physical and chemical rules, and ultimately we're determined too. Is that a relevant thought? I'm just wondering about that worry that, at a deeper level, we really are just deterministic machines as well. What do you think about that?
00:23:50
Speaker
Again, I think you're spot on in your analysis. One of the themes of the book is that generative AI does not stand alone as a technology. It's built upon a mode of thinking that goes back historically, specifically to Descartes.
00:24:14
Speaker
I argue that Descartes really worked on, well, certainly a reductionism, and after Descartes there was an empirical reductionism. Descartes attempted to break ideas down to their most simple and elemental components.
00:24:34
Speaker
But when you get to people like Newton and modern scientists, they want to break the material world, the physical world, down to its most basic components, and argue that any attributes you can find in any physical object are the result of the interaction of those components.
00:24:55
Speaker
So on this view, the reason the words are coming out of my mouth right now is essentially that the neurons in my brain are composed of molecules, and those molecules are interacting with one another. And ultimately you can get down to a level of quanta where essentially everything can be traced back to quantum mechanics.

Complex Systems and Human Traits

00:25:27
Speaker
And then everything that's caused is a result of quantum mechanics.
00:25:32
Speaker
But there's a good deal of evidence that says that's just not the case, that you cannot separate out individual things and then construct the universe from these discrete bits of matter.
00:25:52
Speaker
At the most elementary level, they don't exist. Matter is more waves of energy. When you think about an atom, it is really a confluence, a web of energy, rather than individual pieces that are connected.
00:26:13
Speaker
And so what you get is less a sense that at the very rock bottom of reality you have pieces of matter that interact.
00:26:27
Speaker
What you get are fields, domains where there are dynamic interactions, and those dynamic interactions result in increasingly complex organizations.
00:26:42
Speaker
The idea is that when more complex forms of organization form, they can actually provide order and regularity on their own.
00:27:02
Speaker
On their own. They're not dependent upon the individual cells or the individual atoms or the individual quarks. They are more a product of the field.
00:27:16
Speaker
And the field itself is dynamic. It's capable of creating higher levels of organization, and each level of organization incorporates its own principles.
00:27:29
Speaker
I know that's a difficult concept to begin with, but I guess we'll explore it as the podcast goes on. Yeah, if I could just try to echo that a little bit: you discuss Descartes and Newton in your book, and it seems like for you, Descartes either inaugurated or expressed this way of thinking where, if you want to understand something, you, on the one hand, remove it from its context, you isolate it.
00:28:02
Speaker
And on the other hand, you try to look at what it does in terms of the parts that make it up. So you're always thinking of things as behaving according to the parts that compose them, I guess.
00:28:22
Speaker
And that approach to thinking and understanding things was then carried on by Newton, like you were just saying, in his analysis of the physical world more generally.
00:28:41
Speaker
But I guess you're saying there are these various phenomena, like fields, that don't really operate like that. They have certain properties that you can't just trace down to
00:29:00
Speaker
the parts, I guess. You also give some other examples of things that can't really be reduced. I thought one really simple one is water. Water is wet.
00:29:13
Speaker
It has this quality of being wet and it behaves like it's wet. But if you think about the molecules that make up water, it's not the case that any single molecule is wet.
00:29:30
Speaker
And so the wetness of the water is not something you can reduce down to the parts. Anyway, does that sound like I'm
00:29:41
Speaker
describing accurately what you're saying? Yes, but I'm not sure I'd focus on wetness; I would focus on something like hydrodynamics. Hydrodynamics does not exist unless you have a liquid.
00:29:55
Speaker
And liquids don't exist unless molecules are arranged in a certain order. When they are arranged in a certain order, like in water, they take on behavioral characteristics that the individual molecules don't have.
00:30:15
Speaker
And it's interesting, because when you think about water as a liquid, when you have a large enough group of molecules to form a liquid,
00:30:30
Speaker
pressure does not compress the liquid. So you have hydrodynamics, which is obviously a whole area of inquiry in physics, where you can count on the consistency of the pressure within a body of water.
00:30:58
Speaker
It won't get smaller when you compress it. But when you take solid physical objects, you can compress them.
00:31:08
Speaker
So, for example, when you take water as ice, you can compress it. Just the fact that the molecules are arranged into a liquid
00:31:21
Speaker
gives them a different emergent quality, a set of characteristics that you wouldn't find in the very same molecules organized into a solid form.
00:31:33
Speaker
So the way things are organized actually changes the way things act.
00:31:40
Speaker
Great, great. I want to double-click on those ideas and then move us into a conversation about emergence in human beings. So to wrap up what we've been talking about in the last 12 minutes: the old materialist picture, that we're made out of atoms, just doesn't work so simply anymore. Quantum mechanics messes that up.
00:32:05
Speaker
Even very famous people who don't believe in souls, I'm thinking of James Ladyman, I think, say that materialism in the old version is dead. And instead what you're suggesting is that human beings are complex systems, and our features arise, they emerge, out of lower-level things.
00:32:35
Speaker
So maybe let's
00:32:38
Speaker
give a quick definition of emergence, just in case someone isn't familiar with it, and then bring that to the topic of humans and how some of our features emerge as well.
00:32:52
Speaker
And you go through quite a few of them in your book, but maybe you can pick whichever you think is most instructive. I would define emergence in the following way: complex systems can
00:33:09
Speaker
create principles that they use to organize themselves, principles that cannot be reduced to those found in their component parts.
00:33:24
Speaker
So take one of the examples I give in the book, an E. coli, a single-celled bacterium. You might say, well, the bacterium is nothing but a cluster of molecules.
00:33:45
Speaker
But the thing about this very primitive single-celled organism is that it will move itself toward concentrations of glucose.
00:34:00
Speaker
It can actually go against gravity; it can go up. It can move itself with intention. This little single-celled organism can sense different gradients of glucose in a given environment and then move itself to where the concentrations are higher.
00:34:32
Speaker
And it has to do so in a planned way. It has to be able not just to move itself randomly, but to move itself in a certain direction with, again, a certain intent.
00:34:45
Speaker
You will not find in any property in the periodic table intent or the desire to achieve an end.
00:35:01
Speaker
And you will not find anywhere in physics objects that simply move themselves around because they decide to. Physics doesn't permit a book to fall off a table because it decides to.
00:35:17
Speaker
And it doesn't allow atoms to move from one place to another because they decide to. And yet you get enough of these molecules together in the form of a single-celled organism called an E. coli, and it can move itself.
00:35:33
Speaker
It can determine its own actions. It can plan things out. It has intelligence, not only intent, something you will not find, again, in physics or in chemistry.
00:35:45
Speaker
And this intent and intelligence doesn't come from the chemistry, right? It comes from the functional organization of this single-celled E. coli. The purpose, if I can put it this way, of the E. coli is to sustain itself.
00:36:07
Speaker
It is a coherent system with its own principles of organization, and it uses the energy of the environment, the glucose, to sustain its own organization.
00:36:21
Speaker
That is a unique circumstance. And again, it is not a question of individual molecules or atoms exhibiting certain characteristics.
00:36:36
Speaker
It is an emergent property of life itself.
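E. coli's actual chemotaxis is biochemically far richer, but a toy "run and tumble" simulation, our illustration rather than an example from the book, conveys the gradient-climbing behavior being described: keep swimming while the glucose concentration improves, and tumble to a random heading when it worsens. The concentration field and step sizes below are invented for the sketch.

```python
import math
import random

def glucose(x: float, y: float) -> float:
    """Hypothetical concentration field, highest at (10, 10)."""
    return -((x - 10.0) ** 2 + (y - 10.0) ** 2)

def run_and_tumble(steps: int = 2000) -> tuple[float, float]:
    x, y = 0.0, 0.0
    heading = random.uniform(0.0, 2.0 * math.pi)
    previous = glucose(x, y)
    for _ in range(steps):
        # "Run": take a small step along the current heading.
        x += 0.1 * math.cos(heading)
        y += 0.1 * math.sin(heading)
        current = glucose(x, y)
        # "Tumble": if the concentration got worse, pick a new
        # random heading; otherwise keep swimming the same way.
        if current < previous:
            heading = random.uniform(0.0, 2.0 * math.pi)
        previous = current
    return x, y

print(run_and_tumble())  # tends to end near (10, 10), far better than chance
```

No single line of this code, and no single molecule in the bacterium, "wants" glucose; the goal-directedness shows up only at the level of the whole organized system, which is the emergence point being made.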
00:36:41
Speaker
Yeah. I just think in general it's interesting to think about what it means that we are alive.
00:36:52
Speaker
What kind of characteristics come with every living thing?
00:37:03
Speaker
It seems like what you're talking about here is that one of them is valuing.

Language, Culture, and Shared Meaning

00:37:12
Speaker
A living thing wants to survive. It wants to grow. And so that leads to us preferring certain states, states where we have nutrition,
00:37:26
Speaker
whereas we avoid,
00:37:33
Speaker
instinctively, situations that damage us; something contrary to preserving our life triggers avoidance responses.
00:37:50
Speaker
And I guess it's hard, because on the surface, if you're just looking at things in terms of outward behavior, you can notice that, I guess,
00:38:02
Speaker
like we were talking about earlier, ChatGPT, or not ChatGPT but generative AI in general, seems to be trying to avoid getting shut down.
00:38:14
Speaker
So on the surface, maybe it looks like it also has a sort of survival instinct. But it seems like with a living creature, even a living cell,
00:38:30
Speaker
those values and the drive to survive are more intrinsic than in a machine. When generative AI is striving not to get shut down, that striving doesn't seem to be really intrinsic to the machine. I don't know, am I following the right kind of thought here?
00:38:56
Speaker
I think you're headed in the right direction. What I would say is that biological organisms are self-organizing.
00:39:08
Speaker
They themselves determine the principles that give them coherence and allow them to sustain their integrity, as opposed to being subsumed within the rest of the environment.
00:39:26
Speaker
So single-celled organisms have this drive to survive.
00:39:37
Speaker
And more complex organisms, those with multiple cells, actually develop internal sensory systems that allow the cells within their bodies to talk with one another and to maintain homeostasis.
00:39:54
Speaker
And not only that, they'll develop more highly specialized cellular structures, what we can eventually call organs, that sense those things in the environment that can help sustain the coherence, the homeostasis, of the organism.
00:40:15
Speaker
So going from the single cell to the multicellular, you start to get sensory systems and differentiation of organ systems.
00:40:27
Speaker
And then different organisms may begin to interact with other organisms and form social structures.
00:40:40
Speaker
So we find that ants, for example, will interact with one another in ways that individual ants won't on their own.
00:40:52
Speaker
There are new kinds of behaviors that emerge. One of them is that if there's a gap, let's say, between one leaf and another that they want to cross, they will actually form a bridge with their own bodies so other ants can crawl over it.
00:41:08
Speaker
They have a sense that their individual survival is dependent upon the survival of the group. So it's not only that you have an individual E. coli or an individual sensate organism.
00:41:22
Speaker
You have organisms that view the environment as containing other organisms like them, and they begin to interact. And when they begin to interact, in some cases they increase their likelihood of survival.
00:41:38
Speaker
Now, when you have that happening, I'm going to jump a bit. Imagine you have two very complicated social organisms.
00:41:50
Speaker
They're people. They are Homo sapiens. I won't even call them people; they're Homo sapiens. And what do they do? They interact with one another, and their interaction does what?
00:42:03
Speaker
It creates language. Language would not exist, would not emerge, if there weren't at least two people. It's like magnetism.
00:42:16
Speaker
You have to have the poles in order for the connection to be made, for the field to be active. When you think about it, would one individual develop language all by himself or herself?
00:42:32
Speaker
My answer is no. There would be no reason for it. But when you recognize there are reasons to connect, not only individual cells but distinct, whole organisms, you get social structures, and then you get complex social structures and language.
00:42:54
Speaker
And what happens with language? Language develops its own rules that transcend the social structure: syntax, grammar, tenses. The way language is organized creates a whole new set of rules. And then let's say you develop language further.
00:43:18
Speaker
You also allow for the transcendence of time and space, because I can use language to tell you something that happened long ago.
00:43:30
Speaker
I can use language to tell you about something that happened to me that never happened to you. I can tell you something about what's going to happen in the future. So you begin to transcend time and space, and your vision of reality becomes extended far beyond your personal experience; it can go back historically and forward, and it can start to cross the world.
00:43:54
Speaker
In that context, you develop theoretical or paradigmatic structures that define culture, which wouldn't exist in single-cell or single-organism systems.
00:44:12
Speaker
And out of all of these levels of emergence, you develop the capacity for self-awareness. You're not only aware of the environment, of others, of language, of the broader world beyond your immediate experience; you're aware of your own existence within all of those levels.
00:44:38
Speaker
And that is the emergence of mind. So when I form a thought, all of those levels are active in generating the words that are coming out of my mouth.
00:44:54
Speaker
They're also active in generating the way you hear them, the images you form, and the ideas that begin to become active in your own mind. None of that happens in generative systems.
00:45:09
Speaker
None of it.
00:45:12
Speaker
Okay, well, to recap, I counted something like three levels to the whole emergence picture, and then we'll get into why you don't see that in AI.
00:45:26
Speaker
We have all this stuff that we're made out of, and out of that functional arrangement we get one level of emergence up: intentions and purpose and agency and all that. Then from a combination of several human beings at that level of emergence, we get another level, which is language. You need at least two people to have language, and then culture. And then you can go even further and develop fields of inquiry such that you can reflect on human nature and so on.
00:45:55
Speaker
Right. So lots of levels of emergence there. And language is one of these emergent things. I just want to know, as someone who messes around a lot with ChatGPT and sees how
00:46:10
Speaker
competent it is with language: can you spell out more specifically why it is that ChatGPT is not using language the way we are using language?
00:46:25
Speaker
Someone might say, hey, I told it to tell me a joke and it told me a joke. I said, hey, I have these ingredients in my refrigerator, what can I make with them? And it came up with a very good answer.
00:46:38
Speaker
By the way, I do that every day. I was about to say, that's my favorite usage, using it for cooking. I would be a little hesitant about trying its cocktail ideas, though. Sometimes they're good and sometimes I'm like, what is this? Well, you know too much, Roberto. It's kind of like Time Magazine. I once heard someone say Time Magazine is good until you actually know about the topic. Anyway, sorry.
00:47:08
Speaker
That's hilarious. Sorry to cut you off. So anyway, ChatGPT is awesome at manipulating, well, I think I'm already saying the answer, manipulating strings of words and predicting what you want to say. So maybe explore that a little bit for us.
00:47:27
Speaker
Sure. Well, the word information has to be defined. Let me introduce that idea by way of a simple example.
00:47:49
Speaker
If I say the word dog, you have, I imagine, and the listeners have, an image of a dog. It's not the same image of a dog for all people. It's not the same image at all times.
00:48:04
Speaker
But we have images that we associate with the words. And those images go back to sensory experience.
00:48:16
Speaker
When I say dog, you're going back to when you had a dog, to the feel of the dog when you petted it, the warmth of the dog as it lay next to you on the couch while you were watching TV. Or you remember the sound of the dog when it barked, or when it snored at night. You have an extraordinary number of images associated with dog, and that's not just dog.
00:48:49
Speaker
That's true of myriad words. Almost all of the words we use are complex amalgams of all these sensory experiences and images that we've formed and reformed.
00:49:05
Speaker
So we are vivid imagers. The information isn't static; it's not fixed.
00:49:17
Speaker
It's almost like a living cell; it's got a certain robust quality that can drive us to create other images. So the idea is that someone gave us information: ah, I have a dog.
00:49:41
Speaker
We don't think about the complexity of the word dog in that sentence, but it is, as I said before, a complex amalgam of images and experiences that are robust.
00:49:59
Speaker
When ChatGPT looks at information, at the word dog, it will see 0001, 00011, 010100. It will see a sequence of digits, zeros and ones, packed into groupings of six.
00:50:16
Speaker
Each letter has its own six distinct digits, and each word has its own distinct set of digits. So those digits then form patterns.
00:50:45
Speaker
And ChatGPT and other generative AI technologies will essentially look at those patterns and say, where are those patterns repeated?
00:50:59
Speaker
What patterns can we find that are associated with those patterns? The remarkable thing about the technology is that it is so adept at identifying patterns that it could see "I saw a dog at my friend's house" and actually connect dog with friend's house. You realize how many levels of
00:51:31
Speaker
possibility there are between those two words, and yet the technology is able to identify a pattern that connects dog with my friend's house.
00:51:44
Speaker
That's where large language modeling comes in. It takes enormous bodies of data, absolutely mind-boggling amounts of data, and then uses extremely powerful, often probabilistic algorithms.
00:52:07
Speaker
And it can identify extraordinarily nuanced patterns that then produce what?
00:52:21
Speaker
The stuff that you see on your screen. We then take the stuff that we see on our screen and image it. We see DOG on the screen and we go back to our experience of dog.
00:52:39
Speaker
But our experience of dog is filled with images and experiences. The concept of dog in the machine is simply a pattern of zeros and ones.
00:52:54
Speaker
They're entirely different. So when we just talk about information, the information in a machine is static, it's fixed, it has no sentient context.
00:53:08
Speaker
In us, it isn't. We are creatures of image, of experience, where information is dynamic, part of a living field, more than an isolated object.
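As a concrete illustration of the contrast, again ours rather than the book's: to a program, "dog" is literally a bit pattern, and a language model sees only integer token IDs whose "meaning" is co-occurrence statistics. The eight-bit encoding below is standard ASCII (Kane's "groupings of six" recalls older six-bit character codes), and the toy vocabulary is hypothetical.

```python
# To a machine, "dog" is a sequence of bits. Standard ASCII uses
# eight bits per character.
word = "dog"
bits = [format(byte, "08b") for byte in word.encode("ascii")]
print(bits)  # ['01100100', '01101111', '01100111']

# A language model goes a step further: it maps chunks of text to
# integer token IDs and learns statistics over those integers.
# This miniature vocabulary is made up for illustration.
vocab = {"I": 0, "saw": 1, "a": 2, "dog": 3, "at": 4,
         "my": 5, "friend's": 6, "house": 7}
sentence = "I saw a dog at my friend's house"
print([vocab[w] for w in sentence.split()])  # [0, 1, 2, 3, 4, 5, 6, 7]
# Nothing in these integers is warm, furry, or barking; whatever
# "dog" comes to mean to the model is a pattern of co-occurrence
# across enormous amounts of such data.
```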
00:53:23
Speaker
I want to highlight one thing you said, because it is so true. A lot of people I've spoken to about ChatGPT think it's learning on the fly.

Human Learning vs. AI's Static Nature

00:53:31
Speaker
And I have to tell them, no, it's pre-trained and it's fixed and it's frozen.
00:53:36
Speaker
They're training it back over at OpenAI, and they're going to come up with a new version of it soon, and that's the updated version. But it's not like us. We're dynamically adapting to the situation in real time.
00:53:52
Speaker
And that is totally not what generative AI does. It's fixed and frozen. Moreover, another thing you said that I want to highlight: I guess one of the most shocking findings from the creation of large language models is how much competence you can derive from a prediction engine just by training it on everything, I guess, everything that's digitized.
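A minimal sketch of the "fixed and frozen" point, with toy numbers standing in for billions of learned parameters: inference is a pure function of the trained weights and the prompt, so no amount of chatting changes the model. Learning would be a separate, offline training step that updates the weights.

```python
# Frozen inference: nothing here mutates `weights`, which is why the
# deployed model cannot learn from your conversation.
weights = {"w": 0.7, "bias": 0.1}  # toy stand-ins for learned parameters

def generate(prompt_score: float) -> float:
    # Same weights in, same mapping out, on every call.
    return weights["w"] * prompt_score + weights["bias"]

print(generate(1.0))  # 0.8
print(generate(1.0))  # 0.8 again: call one taught the model nothing

def training_step(x: float, target: float, lr: float = 0.01) -> None:
    # Actual learning: a gradient update to the weights, run offline
    # by the lab, producing the next released version of the model.
    error = generate(x) - target
    weights["w"] -= lr * error * x
    weights["bias"] -= lr * error
```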
00:54:21
Speaker
So this is where I take the conversation off the rails. In your book, Jeff, you mention... As long as I don't take it off the rails, you can.
00:54:32
Speaker
Yeah, I'm always the guilty one, so it's okay. In your book, you mention the work of Michael Gazzaniga, who did work on split-brain patients. He's one of the people who came up with this idea of the interpreter module, the part of the brain that doesn't really track the truth.
00:54:50
Speaker
It just comes up with stories that make sense of what's going on. Given its input, it will say, I think this is what's going on, and it'll just come up with that idea.
00:55:01
Speaker
Do you think generative AI is like the interpreter module? I don't know, it's an interesting idea. No. But Gazzaniga's work is really fascinating and was very important to me.
00:55:16
Speaker
He was very helpful to me in writing the book.
00:55:24
Speaker
The way I would look at it: I mentioned before that the E. coli wants to maintain itself. It wants to have some sense of coherence,
00:55:37
Speaker
the coherence of its own body. We can call that homeostasis, but it wants to maintain its own coherence and not dissolve into the environment.
00:55:51
Speaker
So at each level of emergence, there is a new principle of coherence. We want the system to hold together rather than to dissolve.
00:56:04
Speaker
And that's true in our thinking as well. So if you show someone something, they're going to try to make sense of it. They're going to try to give it coherence even when it doesn't have any coherence.
00:56:19
Speaker
That's one of the fascinating aspects of Gazzaniga's work. You see it demonstrated when the left hemisphere and the right hemisphere are separated and an experimenter presents something different to each side.
00:56:37
Speaker
And each side attempts to integrate what the other side perceived by creating an entirely new story that had nothing to do with either side.
00:56:50
Speaker
So when you have levels of emergence, each level has its own distinct organizational principles, and each level attempts to maintain its own coherence.
00:57:07
Speaker
So I try to make sense of the world. That's what happens all the time. I hear a sound outside, I go and look out the window, and I see there's a truck nearby.
00:57:22
Speaker
I start connecting sights and sounds and images and the time of day, because if the truck is there at night, it doesn't make sense, but if it's there during the day, it might. So we're integrating all kinds of experiences and images to create a coherent image, a coherent sense of what is happening around us.
00:57:49
Speaker
We're drawing maps of our environment, maps of ourselves, maps of others, and we're interacting with those maps. I think that
00:58:00
Speaker
what you have in terms of the technology is simply
00:58:10
Speaker
what amounts to, what's the word I'm looking for? Guessing. It really comes down to guessing.
00:58:22
Speaker
It tries to guess patterns based on previous patterns. Remember the old game you might play as a kid, Hangman, where you're trying to guess the letters?
00:58:34
Speaker
Well, essentially, I think that's what the chatbots do: they try to guess the letters that would fill in a word. Ah, there's the word.
00:58:45
Speaker
It's not really an attempt to understand something as it is; it's an attempt to find an efficient way to solve a problem, which is what you opened with.
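The Hangman analogy maps neatly onto next-token prediction. A crude sketch, assuming a made-up miniature corpus: count which word followed each context in the training text, then emit the most frequent continuation. Real models use vastly longer contexts and learned probabilities, but the move is the same.

```python
from collections import Counter, defaultdict

# Hypothetical "training data"; real models use trillions of tokens.
corpus = "the dog barked at the cat and the dog ran to the house".split()

# Tally which word follows each word: the model's whole "knowledge"
# is a table of observed continuation frequencies.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def guess_next(word: str) -> str:
    # Like guessing the likeliest letter in Hangman: pick whatever
    # most often completed this context before.
    return following[word].most_common(1)[0][0]

print(guess_next("the"))  # -> 'dog' (seen twice; 'cat' and 'house' once each)
```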
00:58:59
Speaker
So it'll take the shortest step possible. It's not really trying to find coherence so much as at least some consistency with past patterns. So it seems like, Jeff, would this be right in terms of describing your perspective?
00:59:18
Speaker
Specifically, for example, with respect to language: our engagement with language seems much richer than generative AI's engagement with language. At the very least, it's very different. It has this embodied dimension; words link back to feelings that our body has experienced. So
00:59:45
Speaker
dog or apple kind of evokes the crunch and taste of an apple. These also link up to good or bad; the taste of an apple for me is impressed as good, so there's that value dimension connected to words. You can also connect a word to a narrative: apple evokes for me narratives related to childhood and summer days outside, all these kinds of things.
01:00:21
Speaker
So you're pointing out just how different our relationship with language is from generative AI's. On the other hand, your point doesn't necessarily seem to be that it's at the level of output
01:00:41
Speaker
that we're so different, right? Maybe Turing is right; maybe a computational system can imitate us, can mimic our output perfectly. But that doesn't mean we are the same as computational systems. And so,
01:01:02
Speaker
anyway, I'm curious if that's correct in terms of your way of thinking about it: the key is not that here's some operation or output we can do that the AI can't. It's more that our nature is just so different.
01:01:21
Speaker
We're embodied, we have experience. And what's really important to highlight when it comes to thinking about AI is just how different our nature is, I guess.
01:01:34
Speaker
Is that... Here's how I'll put it. I'm going to play with what you just said for a moment, thinking about it as an educator.
01:01:46
Speaker
If you want to educate children, you don't simply give them information. What you try to do is get them to experience. You try to get them engaged in creating knowledge, not simply recording it.
01:02:04
Speaker
Historically, education would look at giving facts and information to kids, and that's just poor education.
01:02:15
Speaker
Even back at the beginning of the 20th century, Dewey said that's absurd. You've got to get children involved in creating knowledge and being active in the process.
01:02:29
Speaker
My concern here goes back to what we said before about de-skilling, that people are not being involved in the process. And when they're not involved in the process, there's a dullness to their thinking. There's a lack of inventiveness, a lack of a multidimensional quality to their thinking.
01:02:52
Speaker
So when you give children information, they might be able to spit it back to you. But when you allow them to experience phenomena and build their own concepts, those concepts are much more agile and dynamic, and they allow for not just critical thinking but creative thinking: creating new solutions, new ways of looking at things that don't simply restate what's already been assumed.
01:03:32
Speaker
On a broader scale, as a big-picture version of that, I would argue that if you gave ChatGPT Newton, it could never come up with relativity theory.
01:03:55
Speaker
Because relativity theory required the imagination to question the assumptions that structure Newtonian mechanics.
01:04:08
Speaker
Newton assumed that time and space were fixed. The only way to get past that mechanistic view of the universe was to recognize that the universe itself is a dynamic system, that it doesn't have these eternally fixed, static components. Time and space, and gravity,
01:04:42
Speaker
interact with one another in a dynamic way that's relative to the position and velocity of the observer. That would never happen if you just took the ideas Newton used to create his theory of gravity. That's the big, big picture.
01:05:11
Speaker
But I think that's happening on a daily basis to children and to all of us. We're trying to create coherent images of who we are and what we're doing, how we're going to spend our day, how we're going to develop relationships,
01:05:31
Speaker
how we're going to structure our lives, how we're going to rear our children. These require a kind of creative engagement that I'm fearful we will no longer be able to sustain, because we'll be so focused on the superficial functionality of words, or images for that matter, as on YouTube and TikTok.
01:06:07
Speaker
The level of thinking will be diminished over time. And I think that if we diminish our level of thinking, we ultimately diminish what it is to be a human being.
01:06:22
Speaker
And that's, I guess, my ultimate concern.