Introduction to Dr. Stephen Kosslyn
00:00:16
Speaker
Hi everyone, and welcome to the A-Tech Podcast. Today we're speaking with the psychologist and neuroscientist Dr. Stephen M. Kosslyn. Dr. Kosslyn is a professor emeritus at Harvard University, and he is former chair of the Harvard Psychology Department and dean of social sciences.
00:00:38
Speaker
We will be talking about his recent book, Learning to Flourish in the Age of AI. Stephen Kosslyn, welcome to the A-Tech Podcast.
Career Summary and Interest in AI
00:00:46
Speaker
Thank you very much for having me. So this first question has the potential to take a very long time to answer, because you have a very interesting and storied career.
00:00:59
Speaker
But if we could ask for the CliffsNotes version: maybe tell us a little bit about your background, the arc of your career, and how it led to writing a book dealing with AI.
00:01:13
Speaker
Yeah. So I'm an academic. I spent three decades on the Harvard faculty. I was, as you mentioned, chair of the Department of Psychology and dean of social sciences. I was also co-director of the Mind of the Market Lab at Harvard Business School.
00:01:30
Speaker
I was also in the neurology department at Mass General Hospital, and did a lot of brain scanning and so on.
Active Learning Sciences and AI Education
00:01:36
Speaker
I left there to go back to Stanford, where I'd done my graduate work, to run the Center for Advanced Study in the Behavioral Sciences and be on the faculty there.
00:01:45
Speaker
It was a bad fit for me. I lasted just a couple of years, and then I got lured into Silicon Valley. I had long had an interest in applications; I'd written some books on using psychological principles for visual display design and even PowerPoint presentations.
00:02:01
Speaker
Things like that. So I was interested in applications already, and I got drawn into Minerva. I was the first academic, I think, that they hired; I was founding dean and chief academic officer there.
00:02:14
Speaker
I lasted almost six years. It was incredibly fun, you know, building a university from scratch. Then I left and started my own college, which was kind of the opposite of Minerva.
00:02:28
Speaker
Minerva is really elite; it takes like 1.7% of applicants. Whereas I started something for working adults, a two-year program, where the goal was to teach them skills and knowledge that would not be easily automated.
00:02:41
Speaker
So that went on for a few years and eventually got bought. And now I'm running something called Active Learning Sciences Incorporated, which is a small company that basically builds educational programs based on active learning and, more generally, the science of learning.
00:03:00
Speaker
So we've done things like help build a new university in Seoul, South Korea, Taejae University; we built a summer institute for the Indian Institutes of Technology on soft skills; and lots of other things. And we've become entirely AI-oriented now.
Books on Active Learning and Human Skills
00:03:16
Speaker
So we use AI for just about everything. It was a pretty clear arc, in that I was interested in applications early on and just found more and more ways to use the basic research results that I had, and things I knew from the literature,
00:03:32
Speaker
to apply them. So the book itself, Learning to Flourish in the Age of AI, is actually the third in a series; there was a buildup to it. The first of them was a book called Active Learning Online, which I wrote during the pandemic. It was pre-AI.
00:03:47
Speaker
I think it was 2020. And it was a reaction to people just lecturing into their computers on Zoom during the pandemic, trying to devise lots and lots of ways to use active learning and take advantage of things you could do online, like quick breakout groups and so forth, that you couldn't do so easily in a classroom. So that was the first book. The second one was Active Learning with AI, which built on the first.
00:04:14
Speaker
It used five principles from the science of learning and showed ways of using AI to exploit those principles in teaching, with many, many examples. And then the third book is the one that you guys read, apparently.
00:04:26
Speaker
I'm delighted that you did. Learning to Flourish in the Age of AI originally was going to be called Parent for Prometheus, by the way, and the publisher vetoed the title.
00:04:42
Speaker
The subtitle ended up being the title, for better or worse. That one is more about trying to figure out: if AI can do all this stuff for us (a lot of critical thinking, creative thinking, and now digging out facts more reliably, and so forth),
00:05:00
Speaker
What's left over that we should be focusing on as humans in order to be able to flourish in the new world that's emerging?
00:05:08
Speaker
This is a great start. And now I have all your other books on my wish list, so that will eventually happen; you might get more emails from us. Thank you. Well, our background is in philosophy, of course, and so we were going to start with this question of flourishing, right? There is a rich tradition in philosophy of eudaimonia: happiness, flourishing, sometimes translated as thriving as well.
00:05:37
Speaker
So let's begin with that philosophical question. When you write about flourishing in the age of AI, what do you mean by flourish? And how will we know that we're actually flourishing?
Concept of Flourishing and Philosophical Insights
00:05:52
Speaker
Yeah. So I cheated a little bit, in that I actually looked at the philosophical literature on this, figuring that a lot of really smart people had already thought about this deeply. So why don't I stand on their shoulders, so to speak?
00:06:05
Speaker
So let me answer in two parts. First, the way I talk about it in the book is that to flourish requires doing much more than merely surviving or getting by. It requires achieving a sense of autonomy and control,
00:06:21
Speaker
having fulfilling relationships, feeling satisfied at work, having enough money, having a good work-life balance. In addition, flourishing requires personal growth and developing our abilities and talents. And above and beyond just feeling satisfied, we need life goals that give us a sense of purpose and meaning.
00:06:44
Speaker
Second, we know we are flourishing when we don't perceive the opposite, such as not having autonomy or control, and when we receive positive feedback from our actions and from others that we are, in fact, making progress on the goals and purposes we've set for ourselves.
00:07:06
Speaker
So a lot of these ideas, and I'll just mention some again: a need for autonomy, a need for feeling competent, money, managing our emotions, work-life balance. These are not only philosophical ideas, endorsed by philosophers since Aristotle; they're also empirically backed. Is that correct?
00:07:31
Speaker
Yes, it is. Okay, great. Well, then, can I have a quick follow-up on that? Sure. I've heard psychologists sometimes worry about coming up with a concept of flourishing that's empirically backed, because they'll say, oh, you have to rely on self-reports, and self-reports or questionnaires are kind of unreliable. For example, I was thinking of a case like this:
00:08:01
Speaker
You could be going through something really hard and difficult, with a lot of suffering involved. In that moment, you might be thinking, oh, I'm not flourishing, I'm doing terribly. But it could actually be the case that this is a crucial step in your formation and you're actually on the way to flourishing.
00:08:21
Speaker
So anyway, I'm just curious: what do you say to people who claim there's never really going to be a scientifically valid concept of flourishing, because it depends on self-reports and that kind of thing? What do you
Reliability of Self-Reports in Flourishing
00:08:35
Speaker
think about that? Well, some of it depends on self-report, the subjective parts about how you're feeling and so on, but some of it is pretty easy to actually document.
00:08:43
Speaker
So, part of it is achieving a sense of autonomy and control. Well, we can look and see what you're actually doing.
00:08:55
Speaker
And to the extent that the choices you make get cashed out, that you can actually act on them and achieve the goals you've set up, that's part of control as well.
00:09:06
Speaker
Fulfilling relationships: there are many ways to document that. There's a big literature now about loneliness, which is kind of the opposite. Feeling satisfied at work? Well, satisfied is subjective, so we probably have to ask you about that.
00:09:19
Speaker
And so forth. So I think you make a really good point, though. You know, steering wheels in cars have this slack in them: if you move the wheel like two inches to the right, the car doesn't suddenly start going to the right.
00:09:32
Speaker
You know what I mean? You also have to average over a couple of weeks or something. The reason you have slack in the steering wheel, by the way, is that if it were too sensitive, you'd end up in the ditches on the side of the road.
00:09:45
Speaker
So similarly, when you ask people about their subjective experience, you need to phrase it in a way where they reflect and integrate over at least a couple of weeks or a month or something. It depends on what exactly you're trying to get at with the question.
00:09:59
Speaker
But the way we're looking at it right now, with respect to flourishing as part of one's life, it's not going to be just the jiggle on the steering wheel that's momentary. We're going to want to know something that's more consistent, part of a trend, maybe. So for that I would phrase it differently than I would otherwise, to have them focus on integrating over a period of time.
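The "integrate over a period of time" idea can be pictured in code. This is a minimal sketch, not from the book: it assumes hypothetical daily wellbeing self-ratings on a 1 to 10 scale and smooths them with a trailing window, so no single day's "jiggle on the steering wheel" dominates the trend.

```python
# Minimal sketch (assumed example, not from the book): smooth momentary
# self-reports so a single bad day doesn't dominate the trend.
def rolling_mean(ratings, window=14):
    """Average each day's rating with up to the previous `window - 1` days."""
    smoothed = []
    for i in range(len(ratings)):
        chunk = ratings[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

daily = [7, 6, 2, 7, 8, 3, 7, 7]   # hypothetical 1-10 wellbeing ratings
trend = rolling_mean(daily, window=4)
```

The window size stands in for Kosslyn's "couple of weeks or a month"; a longer window trades responsiveness for stability.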
00:10:27
Speaker
Very good, very interesting. Yeah, I just read the book The Good Life, about the longitudinal study, I believe it was at Harvard, right? And all the data points to connections being so incredibly pivotal. And it was a combination of questionnaires but also objective measures. Great, I just want to build on this for a second, because you just triggered something. It's really important.
00:10:53
Speaker
So, taking a step back, the book has two key ideas. I know we're going to get to this later, but let me just lay them out briefly now, because it hooks in with what you just said. The two key ideas about how to flourish are really the core of the book.
00:11:09
Speaker
One is that humans are particularly good at responding to open-ended situations where context must be taken into account. By open-ended, I mean you don't know which factors are
AI Limitations in Open-ended Situations
00:11:23
Speaker
necessarily going to be relevant. In fact, new factors that you didn't anticipate at all may suddenly come into being.
00:11:28
Speaker
Like, think about the world on November 29, 2022. The next day, November 30th, is when OpenAI announced ChatGPT 3.5. That was a sea change. A totally unforeseen set of factors suddenly came along and is changing everything in many ways.
00:11:51
Speaker
Not absolutely everything, but many things. So we're really good at dealing with that. And I think it has to do with the relation between emotions and hunches, the kind of thing Antonio Damasio studied years ago, that AIs can't do, and that underlies our intuitions and hunches. It's not just logic.
00:12:11
Speaker
It takes into account rationality and so forth. So we need to double down on the skills and knowledge that allow us to do this, many of which are acquired by interacting with other people.
00:12:24
Speaker
So a lot of what we pick up on that allows us to have these hunches, that allows us to deal with these open-ended situations and take context into account, generalizing beyond our quote-unquote training set, as it were, in a way that AIs have trouble doing, really comes out of our relationships and their depth. There are loops upon loops, and so forth.
00:12:47
Speaker
So I think, going forward, a lot is going to be increasingly about what we do as humans with other humans. And then the other big part of the book was the so-called cognitive amplifier loop, which is about how humans ought to make sure they're in the loop with AI.
00:13:03
Speaker
It's not just about AI; it's about how humans interact dynamically with AI. We can talk about that later, though, if you like. But I really do think that this idea of human connection is going to turn out to be even more important than it has been previously.
00:13:18
Speaker
A quick follow-up on that. Those two ideas are super interesting, so I hope we get to talk about them a lot. In terms of the first one: you're saying that AI, or at least the current AI, generative AI, is the one that's transforming our world right now, I guess.
00:13:35
Speaker
And that one, I guess, is maybe not as good at dealing with open-ended situations, right? Could you maybe describe that a little bit? Because in some sense, it does seem like it's pretty good with context. I can give it context for its responses, right? I can tell it, look, I don't want an email that's super long; I want it to be tighter. So I can give it context in that sense.
00:14:10
Speaker
But on the other hand, it seems like there are types of factors that we're better at dealing with. You just brought up human-to-human interaction. Obviously AI, at least ChatGPT, doesn't have sensors for reading my facial reactions.
00:14:33
Speaker
So it can't take my facial reactions into account. Okay, just to tie together what I'm trying to say: is the issue that it can't incorporate context, or that it's just not yet good at recognizing certain factors? We can recognize emotional cues, facial cues, and it can't do that currently.
00:14:57
Speaker
Or is it more that there's a deeper sense in which generative AI is not good at extrapolating? I think it's deeper. So, this is a one-liner Gary King had that I really like:
00:15:13
Speaker
AI is good at interpolating, not at extrapolating. I think he was right about that. Look, as for the fundamental mechanism underlying AI: we don't really know, by the way; to be really clear on this, no one knows exactly what's going on in these gigantic networks. But it appears to be pattern matching.
00:15:34
Speaker
And it gets trained up on tons and tons of patterns at different levels of scale and integrates them in interesting ways. But what happens when you break the pattern? That's what open-ended situations are about: new factors that can be completely unexpected come out of nowhere.
00:15:53
Speaker
And to make it even worse, the way we interpret those new factors depends on the context. It's not just about context per se, qua context. Sure, you're right.
00:16:04
Speaker
You can give it the context and it does fine. But how does it know what the relevant context is when a new factor comes out of nowhere and it's broken the patterns that it had before?
00:16:18
Speaker
So it sounds plausible to me, but again, just to clarify, I'm going to ask this kind of question. What if someone said: think about how I can introduce a really unexpected factor, in the sense that I could go on ChatGPT and say,
00:16:37
Speaker
ChatGPT, give a criticism of Donald Trump's recent speech from the perspective of Heraclitus. And it'll comply. I mean, who would ever think of bringing those two things together, Heraclitus and Donald Trump?
00:16:55
Speaker
And so in some sense, that's a very bizarre context, yet it can somehow respond to it. Anyway, I don't want to push too far, but you kind of get what
AI and Context Changes in Decision-Making
00:17:05
Speaker
I'm saying. Yeah, it interpolates.
00:17:07
Speaker
And, by the way, hallucinations are almost a feature, not a bug. Hallucinations are the flip side of what it does normally: it's going to fill in, it's going to interpolate. So when you give it those two contexts, it's going to find points of contact and shared associations and so forth that it'll then spin off on.
00:17:27
Speaker
So it's not that it can't deal with context. It's really about the open-ended situations where you don't know in advance what factors are relevant.
00:17:38
Speaker
Or even when new factors come up that have never been considered before, like generative AI on November 30th, 2022. But there are many, many examples. In my book, I have a page and a half or so of examples of open-ended situations, which I had it generate, by the way.
00:17:56
Speaker
And if you have a dialogue with it about this, it's interesting. It's hard to know whether it's agreeing with you because it wants to please you, because that's how they fine-tuned it, or whether it's in fact the result of its actually computing a certain way. I don't know. But if you do have an interaction with an LLM about the ways it deals with open-ended situations, it's interesting the extent to which it concedes, as it were, that these are the kinds of situations where it does have problems.
00:18:28
Speaker
And I think that's because they break patterns, and I think it's trained on patterns.
00:18:36
Speaker
And I think humans have this advantage over LLMs because of the Damasio thing. So there's a part of the brain called the ventromedial prefrontal cortex.
00:18:48
Speaker
Ventral is the bottom, medial the middle, prefrontal toward the front. And there's an area in there that seems to integrate emotions and thoughts, a very integrative area.
00:18:59
Speaker
And Damasio's group has a ton of data showing that hunches and intuition seem to grow out of activity in that area. If you have damage there, you don't have those hunches; you have a selective deficit.
00:19:11
Speaker
And it seems, at least as far as I can tell from reading and looking at all this stuff, that that process of coming up with these hunches, not through axiomatic logic (you know, chunk, chunk, chunk, here's where you come out) or even pattern matching, but through something that involves emotion, allows us to do something that the current AIs don't do well.
00:19:35
Speaker
Could you actually expand on that a little bit more? Because I think that's super interesting. There's a lot of literature, like Damasio, whom you're referencing, where hunches and intuition are actually taken seriously. And, I'm forgetting his name, there's a German psychologist. Yeah, exactly.
00:19:56
Speaker
His work is on this too. It's super interesting how a lot of people today, I think, have this idea that our hunches and intuitions are totally bankrupt, a disaster, not trustworthy.
00:20:07
Speaker
But you're actually saying this is a really crucial aspect of us, which gives us a leg up on AI and keeps a space that maybe only we are able to occupy and that we can't offload to AI.
00:20:24
Speaker
Not yet, anyway. Who knows what the future will bring, but the current AIs don't seem to do it on their own. Yeah, I think people give short shrift to the role of these emotion-driven hunches and intuitions and so forth, particularly in decision-making.
00:20:38
Speaker
So one of the things I said in that book was: if you have a really hard decision to make, like whether to buy an apartment, or something big, like whether to marry someone, something really...
00:20:50
Speaker
It's going to change your life in some ways. And you're on the fence about it; you're going back and forth. Ben Franklin said, write down the pros, write down the cons. Well, you can do that.
00:21:01
Speaker
And at least in my case, I look at them and say, okay, I don't know how to weight them. I don't really know how this is going to wash out. What I've discovered is: do all that.
00:21:12
Speaker
I think about it. I go to sleep. I sleep on it, which is important; sleep integrates the stuff. And then the next morning, I flip a coin. I say, all right, I'm going to buy the apartment, or do this or do that, if it comes up heads.
00:21:25
Speaker
And I flip the coin. And the key is to see how I feel about the way the coin came up. So it's not about whether it was heads or tails. It's about my emotional reaction.
00:21:36
Speaker
It's sort of a way to externalize something that's buried in there.
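The coin-flip trick can even be sketched as a tiny script. This is a hypothetical illustration, not from the book: the function name and wording are invented here, and the whole point is that the program's output is deliberately arbitrary; the useful signal is your gut reaction to it.

```python
import random

def flip_decision(option_a, option_b, rng=None):
    """Pick one option at random. The point is to notice how you feel
    about the result, not to obey it."""
    rng = rng or random.Random()
    chosen = rng.choice([option_a, option_b])
    return (f"The coin says: {chosen}.\n"
            "Notice your gut reaction: relief, or disappointment?\n"
            "That reaction, not the coin, is the real data.")

print(flip_decision("buy the apartment", "keep renting"))
```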
Intuition and AI in Personal Decision-Making
00:21:41
Speaker
And AI can help us do these kinds of things, by the way, which is something I hit on when I was writing that book: I could have AI do not just a simple binary thing, do it or don't do it. I could get much more subtle.
00:21:53
Speaker
I could have an AI tap into these very human kinds of reactions we have.
00:22:01
Speaker
So this is great. Let's transition with that into this method you've devised for having AI help you plan some long-term life goals.
00:22:15
Speaker
Just for the listeners, I'll say this much: we are covering Stephen's book in sort of reverse order. He begins by explaining how we should prompt AIs so that you can help them help you.
00:22:31
Speaker
But we're going to start right at the promised land, so we can show people where they can get using these methods. So let's talk about that now. You mentioned that you use the coin. Well, how did it pan out? What are the mechanics behind using AI for some of these big life-planning goals, like career changes, or what you're going to do when you're retired, or any of that kind of stuff?
00:23:02
Speaker
Yeah. Okay, let me go through four of them.
AI in Life Planning and Brainstorming
00:23:08
Speaker
I'll try to do this quickly. So the first is brainstorming.
00:23:13
Speaker
You can prompt the AI that you want it to play the role of a coach and wise counselor, tell it that you have several concerns about your personal future, and ask it to help you reason about those concerns by asking leading questions.
00:23:29
Speaker
After you've answered each question, you want it to give you feedback and help you reason about the trade-offs that are implied.
00:23:39
Speaker
And it'll do that; it's actually quite a good brainstorming companion. Then there are simulations, where you can have it simulate what it would be like if you made a certain choice, if you actually followed up on a certain goal.
00:23:55
Speaker
And have it put you through the paces of what you'd actually be doing in a day in the life of X. It's pretty good at this. And that's a good example of the coin-flipping thing, by the way, because you can start recognizing what you're enjoying and what you're not by doing these simulations.
00:24:12
Speaker
The third thing is helping figure out deltas, that is, disparities between where you are now and where you want to go. For example, say you're a sales professional who wants to move into a management career in human resources or something.
00:24:28
Speaker
It's the example I have in the book. You tell it a bit about your background and what your goal is, and then you can ask it what sorts of skills and knowledge you need to acquire in order to qualify for this dream job.
00:24:41
Speaker
And it can do it. It'll give you answers at multiple levels of granularity, down to the level of individual competencies. And you can decide if you want to do that lift, or if the lift is too heavy. And then the last thing is just general decision-making.
00:24:54
Speaker
It can help you flesh out the trade-offs and, as I say, help you anticipate what the consequences of decisions would be, so you can see how you feel about them.
00:25:05
Speaker
So, all of those things. There are more, but those are the four big ones that I personally have used and talk about in the book with respect to thinking about long-term goals. The other thing I did in there, which I don't know if we want to get into, was spend a fair amount of time thinking about the humanities and what you can get from them in terms of helping you interact with AIs to figure out your life goals and so forth.
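The four uses described above (brainstorming, simulation, deltas, decision-making) can be sketched as reusable prompt templates. This is a hypothetical illustration, not wording from the book: the template text and variable names are assumptions, and sending the prompt to an actual chat model is left to whatever client you use.

```python
# Hypothetical prompt templates for the four uses described above.
TEMPLATES = {
    "brainstorm": (
        "Play the role of a coach and wise counselor. I have these concerns "
        "about my personal future: {concerns}. Help me reason about them by "
        "asking me leading questions, one at a time. After each answer, give "
        "me feedback and point out the trade-offs implied."
    ),
    "simulate": (
        "Simulate a typical day in the life of {role}, putting me through "
        "its paces hour by hour, so I can notice what I enjoy and what I don't."
    ),
    "delta": (
        "My background: {background}. My goal: {goal}. What skills and "
        "knowledge do I need to acquire to qualify? Break the answer down "
        "to the level of individual competencies."
    ),
    "decide": (
        "I'm deciding whether to {choice}. Flesh out the trade-offs and the "
        "likely consequences of each option, so I can see how I feel about them."
    ),
}

prompt = TEMPLATES["delta"].format(
    background="ten years in sales",
    goal="a management career in human resources",
)
print(prompt)
```

The templates just pin down the roles and the structure of the interaction; the back-and-forth (answering the leading questions, reacting to the simulation) is where the method actually does its work.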
00:25:32
Speaker
But we don't have to go down that path; that's in there, though. So one thing I think is really cool about the way you're suggesting we use ChatGPT and these generative AIs is that it harkens back to your research on active learning. It's helping a person lead, I want to say, an active life. In other words,
00:25:57
Speaker
I think the way a lot of people use ChatGPT, unfortunately, is almost to keep themselves less active. For example, I get an essay prompt for a class, and it's like, oh, okay, I can use ChatGPT to avoid the activity of writing an essay. I can just short-circuit that,
00:26:27
Speaker
and I'll get straight to the outcome I'm desiring. But it seems like you're coming up with ways where it's like, no, we're still trying to pursue a career, we're still going to achieve through activity, but
00:26:44
Speaker
we're going to use AI to facilitate our activity. And again, since we were talking about flourishing, I think of the connection to Aristotle, where the active life is key to flourishing. Anyway, just a comment, I guess. You're touching on something I find extremely interesting, which is that right now there's a lot of obsession about how to prevent cheating with AI, at least in high schools and colleges.
00:27:18
Speaker
And I find that whole way of looking at things... I'm told now that high schools are starting to use blue books, in part to prepare students for college, since colleges are now using blue books. I mean, it's totally retro. It's back as a way to prevent cheating using AI.
00:27:34
Speaker
What this reminds me of is the difference between a closed-book and an open-book exam. In the real world, there really are no closed-book exams. And in the real world, the world we're all moving into right now, you're going to be able to use AI.
00:27:51
Speaker
So what is it we're trying to teach people in school, if not how to function in the real world as a person, a citizen, someone employed in a job, someone who has relationships, and so forth? It seems to me that school really should be about
00:28:10
Speaker
helping you be able to flourish, to succeed, to do well in the real world. And that's not a closed-book exam. So why don't we teach people how to use AI in the ways they'll want to use it in the real world to further their own goals?
00:28:25
Speaker
I'm kind of confused by all this. So, I hear what you're saying; it's interesting what you bring up. I guess I'm kind of sympathetic to the blue book direction, though.
00:28:38
Speaker
My worry is that the ChatGPT genre of AI is so powerful now that someone can almost get away with relying on it entirely, to the point that they don't develop certain fundamental skills.
00:29:00
Speaker
So my thought is, maybe you want to use the blue book so that, later, you can use AI more cleverly, right? Because when I see someone using AI in a clever way, I think, look, they developed certain basic skills. They weren't just passive, asking it to answer the essay question for them; they were able, in virtue of those skills, to use it in a clever way. Anyway, I don't know. What do you think about that? I completely agree with you. In fact,
00:29:31
Speaker
a way of reading that book, which I didn't make explicit but is a way of looking at it, is that it's about what skills we need in order to interact with an AI effectively, so that, in fact, it can do our bidding and help further our goals.
00:29:47
Speaker
So I completely agree: we need to learn not just skills but also foundational, bedrock knowledge. Otherwise, we're not going to be able to do much with it. I mean, it's like this.
00:29:59
Speaker
Lately, a lot of faculty, I'm told, are using AI to write their syllabi, write lesson plans, and grade papers. There's a lawsuit I read about, I think at Northeastern, where someone's suing the university, wanting reimbursement of some of their tuition, because they discovered the faculty were using AI to grade papers and whatnot.
00:30:22
Speaker
I think that's very unfortunate, because AI is limited. And if you're going to be interacting with humans, you've got to use it in a way that's going to be good for humans.
00:30:37
Speaker
And you know more about that than it does. So I think this idea of the cognitive amplifier loop is about the kinds of skills we should be using to interact with AI, to maximize what it does well while maximizing what we do well.
00:30:54
Speaker
So I think people need to learn that.
00:30:58
Speaker
On that point, that there are aspects we do extremely well and that AIs can't, this reminds me of past methods of trying to make our life decisions through algorithms that really lack that human component. I read this book a long time ago called Algorithms to Live By, I think.
00:31:23
Speaker
And they're essentially taking some computer science ideas, like, here's how you use this principle from computer science to choose a lifelong mate. And so you have to sample, I don't know, 40% of the population. So it's very much like, do this, then this. And that's wonderful and all.
00:31:41
Speaker
A, I wouldn't admit to any spouse that that's how you came to the decision. But B, it really is missing this human, affective, emotional stamp of approval or disapproval that seems to be key to our flourishing and our thriving.
00:32:02
Speaker
And also, it's what makes us uniquely human, and in a sense still better than AIs, or more capable. Yeah, I think that's right.
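The mate-choice principle alluded to above is the classic optimal-stopping rule (the "secretary problem"): observe an initial fraction of candidates without committing, then take the first one better than everyone in that initial sample. A minimal simulation sketch, assuming candidates can be scored on a single number (which is exactly the assumption the speakers are objecting to):

```python
import random

def secretary_pick(candidates, lookahead_frac=0.37):
    """Optimal-stopping rule: skip an initial fraction of candidates, then
    take the first one better than everyone seen in that fraction."""
    cutoff = int(len(candidates) * lookahead_frac)
    best_seen = max(candidates[:cutoff]) if cutoff else float("-inf")
    for c in candidates[cutoff:]:
        if c > best_seen:
            return c
    return candidates[-1]  # never beat the sample: settle for the last one

rng = random.Random(42)
pool = [rng.random() for _ in range(100)]   # hypothetical candidate "scores"
pick = secretary_pick(pool)
```

The rule is provably good at maximizing the chance of picking the single best candidate, but, as the discussion notes, it has no slot for "how did they make me feel about myself," which is the point being made here.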
00:32:13
Speaker
I mean, take this for example. I've been married a long time, very happily. And one of the things I was very conscious of during the whole dating thing was how the other person made me feel about myself.
00:32:33
Speaker
It wasn't all these other kinds of things you'd probably put in that algorithm you just described. It really was: how did they make me feel about myself? How do you quantify that?
00:32:43
Speaker
I mean, that's such a deeply human thing that I just don't see AI replacing, at least the current AIs.
00:32:55
Speaker
Right. Good. Okay, so we're using AI to develop some of our life goals; it helps us brainstorm, helps us envision what that sort of life will be. And we can imagine we're one of my students, coming out of undergrad and figuring out, in this new economy, what the opportunities are.
00:33:18
Speaker
Let's say they've chosen a path. But one thing that happens whenever we've decided on a goal is that sometimes we start to lose our motivation. I remember I took up French in high school and thought, oh, I'm going to get really good at this.
00:33:36
Speaker
But after the two required years of foreign language, I stopped taking it, and I don't remember anything now, to be honest, about my French classes. So what are some ways AI can not only help us choose our goals, but keep us motivated? Now, this is some fun stuff, so I'll let you
AI in Identifying Personal Motivation
00:33:56
Speaker
take the wheel. Yeah, it's something I'm obsessed with in the book.
00:34:00
Speaker
Well, okay. So from my perspective, the first step is to note which factors are personally motivating. People are different; different things motivate different people, at least to some extent.
00:34:13
Speaker
So what I did, and I have an example of this in the book: I went to Google and searched for articles that reviewed the scientific literature on motivation. There are a good half dozen theories of human motivation, and there's a lot of literature.
00:34:28
Speaker
So I found a really good, comprehensive review, and I fed that into ChatGPT; I think it was GPT-4 at the time. And I wrote a prompt asking the AI to figure out which factors motivated me in particular.
00:34:50
Speaker
So I asked it to go through the factors one at a time, give me a situation where each factor would be relevant, and then ask me how motivated I would be in that situation.
00:35:03
Speaker
And I rated it on, I think it was a four-point scale. So it went through a bunch of different scenarios, picking out different factors and coming up with ways of embodying them. At the end, it did a little diagnosis and summarized, I thought pretty accurately, which factors really motivate me. For example, autonomy and control were really important, probably more important than connectedness and seeking certain kinds of rewards. So it did a nice profile.
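The scenario-by-scenario rating procedure described here can be sketched in code. This is a toy illustration, not the book's actual prompts: the factor list, function names, and example ratings are all invented, and a real session would route `build_scenario_prompt` through an LLM and collect the ratings interactively.

```python
# Toy sketch of the factor-rating procedure: present one scenario per
# motivational factor, collect 1-4 ratings, then summarize a profile.
# Factor names and functions are illustrative, not from the book.

FACTORS = ["autonomy", "competence", "relatedness", "external reward"]

def build_scenario_prompt(factor: str) -> str:
    # Ask a model to embody one factor in a concrete scenario.
    return (
        f"Describe a short scenario in which '{factor}' is the dominant "
        "motivational factor, then ask me to rate, on a 1-4 scale, how "
        "motivated I would feel in it."
    )

def summarize_profile(ratings: dict) -> list:
    # Order factors from most to least motivating (ties keep input order,
    # since Python's sorted() is stable).
    return sorted(ratings, key=ratings.get, reverse=True)

# Hand-entered example ratings standing in for answers given in dialogue:
ratings = {"autonomy": 4, "competence": 3, "relatedness": 2, "external reward": 2}
profile = summarize_profile(ratings)
```

The "diagnosis" step in the book is done by the model itself; here the sort is just a stand-in for that summary.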
00:35:32
Speaker
So that was a good first step. But we shouldn't just take the AI's pronouncements at face value. They do hallucinate; they're not perfect. So again, I have this coin flip thing: see how I feel about it. So it tells you, here's the way it's diagnosing you,
00:35:50
Speaker
but does it ring true? Does it feel right? I think you can take a look at what it says, and if you are sensitive to the way you're reacting to it, you can get a lot of personal insight about what kinds of factors motivate you.
00:36:04
Speaker
So once you have a sense of what factors motivate you, it's pretty easy to work with an AI to figure out how those factors would come into play in achieving specific kinds of goals. You can ask it for advice and particular strategies to use, and it will offer them.
00:36:24
Speaker
And again, take them with a grain of salt, like everything else from AI, and make sure it feels comfortable. I did this experiment myself. Sorry, real quick, Sam. I tried it out, and I did exactly what you're suggesting. How does that...
00:36:41
Speaker
How does that resonate with you? And it said that I'm partly extrinsically motivated. I certainly need some sort of remuneration. That's how you get Roberto to do something: you put wine at the end of it.
00:36:57
Speaker
No, sorry. But it depends; it's got to be a Pinot, please. Anyway, so that's part of it. But another part, it said, and I forget the exact language it used, is that I have to believe in the product, the task, whatever it is. I have to believe it's worthwhile.
00:37:15
Speaker
So I have a high degree of intrinsic motivation too, as long as I actually accept the value of whatever project it is.
00:37:26
Speaker
And I thought back on my life, and I've definitely, I call it optimal quitting, quit on things because I didn't believe in them, and that really aligned with what the AI was saying. So at least in my case, I feel like it was...
00:37:41
Speaker
at least at face value, pretty accurate based on my history. But sorry, Sam, you were going to ask him. Well, that's great, but I should add something. All the prompts in that book are on the Routledge website.
00:37:53
Speaker
So you can easily paste them in if you want to use the prompts that I used and illustrated in the book. But anyway, sorry, keep going. I was just going to say, picking up on the intrinsic motivation thing,
00:38:05
Speaker
I think that's a really interesting concept. I think about that with teaching a lot. There's this distinction between intrinsic and extrinsic motivation, and for a lot of students, it seems really hard to get them intrinsically motivated to learn the material. For them, it's more like,
00:38:29
Speaker
look, if this is on the test, okay, I'll learn it. But if it's not on the test, I'm just not really that interested. In other words, they're extrinsically motivated to learn the material because they only...
00:38:46
Speaker
are interested in learning it in virtue of the fact that it's helping them pass a test, rather than in virtue of the intrinsic features of the content. And I guess, do you have any thoughts about that? It seems like your suggestions and your prompts can be helpful for discovering your intrinsic motivators, maybe. But in terms of having intrinsic motivations in the first place, it can't really help you develop those basic passions, I suppose, right?
00:39:27
Speaker
Or maybe not. That's a really good point. I don't know. So the intrinsic stuff, you're not supposed to really develop. I mean, there is a process the theorists talk about where extrinsic motivation gets absorbed and becomes intrinsic.
00:39:48
Speaker
But mostly they focus on, say, the self-determination theory stuff. They talk about autonomy, control, and social relatedness. So those are the three big intrinsic ones, intrinsic meaning we don't learn them; they're wired in.
00:40:04
Speaker
That's the claim. So you don't really need to develop them, but you may need to apply them. You may need to figure out how they get cashed out, how they get used in particular situations, how they come to bear, I guess is the way to say it.
00:40:23
Speaker
Okay. Interesting. Because on the one hand, that makes perfect sense. Who's not already motivated by relatedness? On one hand, it seems like everyone is motivated by human connection, genuine togetherness, that kind of thing.
00:40:44
Speaker
On the other hand, I guess my thought is: isn't that a big problem with humans, though? That a lot of us, it seems like some people, don't have passion in a way? I don't know.
00:40:55
Speaker
Do you have any thoughts about that? So, two observations. One is that around every measure of central tendency, there's a distribution. So there's variation.
00:41:06
Speaker
I mean, some people may be more motivated by autonomy than others, for example. I'm sure that's true. In fact, they've measured it; it's an empirical observation.
00:41:18
Speaker
You can have scales for these things, and people score differently on them. So that's one. Two is, I think everyone's interested in something.
00:41:32
Speaker
That's why they get out of bed. The question is, how do you hook up what they're intrinsically interested in to other things that are going to help them succeed?
00:41:43
Speaker
Because a lot of people don't see the value of investment. That is, a lot of what we do is on the bet that going to school, learning the material, and getting good grades will let us get a good job and so forth.
00:41:59
Speaker
In the future, it's going to pay off. So we're betting on that. We're willing to put in time and effort now, even if we're not particularly interested, so that we can get to something downstream that we really are interested in.
00:42:11
Speaker
So it's that bridge I think we maybe need to focus more on. With students who are not intrinsically motivated to learn this material, can we find out what they are really intrinsically motivated to do?
00:42:27
Speaker
Whatever it happens to be. Is it playing video games? Is it playing the stock market? I don't know. People have all kinds of interests. Then find ways of bridging what's being taught in a particular class with their particular interests. AI as a tutor can do this, by the way.
00:42:44
Speaker
It's really good at it. In the courses we design, at the beginning of every course we have the students fill out a questionnaire where they tell us their interests, their hobbies, what their aspirational vacations would be, favorite foods, favorite movies, all kinds of stuff like that.
00:43:01
Speaker
And then the AI, when it does tutorials with them back and forth, uses their responses on that questionnaire for analogies, examples, things like that, to motivate them and make the material more relevant.
00:43:14
Speaker
And it can do this in a one-on-one way.
Tailoring Education with AI
00:43:17
Speaker
That's really hard to do in a traditional classroom, right? Where you've got a group, you can't really tailor it to each individual student. So this is something the current LLMs are really quite good at, and it may turn out to have much more profound consequences than we anticipate right now in terms of helping people get over these kinds of local humps.
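The questionnaire-driven tailoring described above can be sketched as simple prompt construction: fold the student's stated interests into the tutor's instructions so its analogies draw on them. This is a hypothetical illustration of the idea, not the actual system; the field names and wording are invented.

```python
# Hypothetical sketch: build a personalized tutor system prompt from a
# student questionnaire so the model's examples use the student's interests.

def build_tutor_prompt(questionnaire: dict, topic: str) -> str:
    # Pull hobbies out of the (invented) questionnaire structure.
    interests = ", ".join(questionnaire.get("hobbies", []))
    return (
        "You are a one-on-one tutor. When explaining concepts, prefer "
        f"analogies drawn from the student's interests: {interests}. "
        f"Today's topic: {topic}."
    )

prompt = build_tutor_prompt(
    {"hobbies": ["video games", "cooking"], "favorite_movie": "Arrival"},
    "supply and demand",
)
```

In practice this string would be passed as the system prompt of each tutorial session, so every exchange is conditioned on the same profile.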
00:43:38
Speaker
Yeah, that's fascinating, and definitely something we need to explore a little more. But I want to add to this, because intrinsic motivation is definitely something we want AIs to help capture, in ourselves and in our students.
00:43:56
Speaker
But there is this other idea that I think ties in very well: trying to get people to align their actions with their values. So let me just spit out this question so it makes a little more sense.
00:44:13
Speaker
In your book, you talk about something like digital cognitive behavioral therapy, or something to that effect, right? Basically having AI help you manage your thoughts, especially those that become a sort of emotional disturbance.
00:44:30
Speaker
Because in my case, in my classroom, sometimes what blocks people from progressing or from really caring is that they have preoccupying thoughts, something negative in their life. All of us have to deal with negative thoughts sometimes. And you're having AI help train negative thought trains out of yourself, right? So let's talk about that a little bit.
00:44:57
Speaker
In particular, maybe you can tell us what it is you're having AI help us undo, which kinds of thoughts you're having AI unravel for us, and how successful you were with that.
00:45:13
Speaker
Yeah. So the first thing is helping you identify what those negative thoughts are, which it can do in multiple different ways.
00:45:24
Speaker
What I have found repeatedly very useful is not having it just ask me something and expect me to answer, because often I don't have conscious access to this stuff.
00:45:35
Speaker
Rather, have it give me scenarios and then ask how I feel about them, what my reaction is. Rather than trying to retrieve information, try to recognize when a situation rubs me the wrong way. And then, based on that, we can try to figure out what the negative thoughts are.
00:45:54
Speaker
So that's a general technique I use in the book: recognition is easier than recall, as it were, which is a standard psychological phenomenon, by the way.
00:46:06
Speaker
And then, once you have some idea of what those negative thoughts actually are, the AI can help you reframe them. There are different ways of looking at things.
00:46:18
Speaker
You may have hit on a way of interpreting them, of framing them, not necessarily consciously chosen, that is negative and self-destructive.
00:46:28
Speaker
It may be that you interpret things in terms of inherent characteristics you have, and that they reflect those. But you could take a step back and say, well, wait a minute, some of this is contextual. It's going to be from the environment.
00:46:44
Speaker
It's going to be about particular situations that maybe draw out certain things in me, but those things aren't always there. And I have other things I could use to compensate for them or override them.
00:46:56
Speaker
So the AI can really help with these kinds of things. However, this sounds a lot like cognitive behavioral therapy, and I would not say it's the same thing.
00:47:11
Speaker
And the reason for that loops right back to what we were talking about earlier: humans are particularly good at responding in these open-ended situations that require taking context into account.
00:47:22
Speaker
So a human therapist is going to be way better, at least currently, at interpreting what you say and how you're looking at things than an AI can be at this point.
00:47:36
Speaker
So I think AI can be really helpful up to a point.
AI as a Stoic Tutor and CBT
00:47:42
Speaker
So one experiment I did, based on this chapter in your book, is that I basically had ChatGPT be my Stoic tutor. Of course, Stoicism is famous for having various methods for reframing an event in a more positive light.
00:48:04
Speaker
For example, Epictetus would have us reframe some negative situation as a challenge from Zeus, right? So you can show off how crafty you are in coming up with a workaround.
00:48:17
Speaker
So I had ChatGPT be a Stoic tutor for a little bit, and I would input random little negative things, some of them actually happening to me, some I just came up with, and it helped me reframe them.
00:48:33
Speaker
I had ChatGPT be Seneca. And so I had Seneca respond to me, sometimes interpolating actual quotes from Seneca. I just wanted to see how that lands for you. Can I jump in there, Roberto? Because one question I'm wondering about: what if someone says, okay, you had ChatGPT be Seneca, but how do you know it was accurately embodying that Stoic perspective? It brings up the whole issue of hallucinations. Sure, it will confidently embody a Stoic persona, but someone might wonder how trustworthy that persona is.
00:49:17
Speaker
Because think about what we're saying here. Wouldn't it be insane if you really could get wise counsel on demand? Think about how rare that is in normal life. Who's a wise counselor? Are you? Then I value you and I'm going to stay close to you, because you're an amazing thing if you're a wise counselor.
00:49:38
Speaker
So it would be incredible if you could just go on to ChatGPT and get that. But I guess, what would you say, either of you, to that? Are you sure it's trustworthy in this, anyway?
00:49:51
Speaker
Well, it's demonstrably not, and that is the whole problem. But you can make it much better if you feed in a set of resources and prompt it to only use those resources in responding.
00:50:11
Speaker
And it does that pretty well, so that cuts down on hallucinations a lot. You can put the resources into a vector database in a RAG, a retrieval-augmented generation system, if they're big. Although the context windows, the amount you can feed in, are getting so big now that
00:50:29
Speaker
for a lot of applications, you don't really need it. You can feed in hundreds of pages along with the prompt. And if you do that, you can really start to get a better sense of what reflects that body of thought.
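The "only use those resources" approach can be sketched as a minimal retrieve-then-prompt step. This is a toy version: it ranks passages by crude word overlap, where a production RAG system would use embeddings in a vector database as described above, and the passages are invented examples.

```python
# Toy retrieve-then-prompt sketch of grounding a model in a fixed set of
# resources. Real systems use embedding similarity; word overlap is a
# deliberately simple stand-in.

def score(query: str, passage: str) -> int:
    # Crude relevance: count shared lowercase words.
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, passages: list, k: int = 2) -> list:
    # Keep the k highest-scoring passages (stable sort preserves order on ties).
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def grounded_prompt(query: str, passages: list) -> str:
    # Instruct the model to answer only from the retrieved context.
    context = "\n---\n".join(retrieve(query, passages))
    return (
        "Answer using ONLY the sources below. If they do not contain the "
        f"answer, say so.\n\nSources:\n{context}\n\nQuestion: {query}"
    )

sources = [
    "Seneca urged treating obstacles as practice for virtue.",
    "Epictetus distinguished what is up to us from what is not.",
    "A recipe for sourdough bread requires a starter.",
]
p = grounded_prompt("What did Seneca say about obstacles?", sources)
```

The explicit "say so if the sources don't contain the answer" instruction is the part that cuts down on hallucinated additions.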
00:50:47
Speaker
And I guess that just brings up... oh, just real quick. Sorry, Roberto, you've got the next one. I was just going to say, it kind of just shows, though, back to your basic thesis, that at the end of the day, it's a cognitive amplifier.
00:51:00
Speaker
It's a tool, because if you wanted it to give you wise counsel, you would have to know what resources to feed in, which would require being wise in the first place. Anyway, the point is that there's no escaping ourselves, in a sense, right? It's not like we can jump out of ourselves and say, okay, now we're going to dispense wisdom from ChatGPT.
00:51:29
Speaker
No, because we have to know what resources to feed it, and then we have to know what wisdom is to know which resources will ultimately give us wise counsel. That is so true. I think that's a profound observation, by the way.
00:51:45
Speaker
And it's deeper than that, I think, because it's also about prompting. You have to know what your goal is. This is the cognitive amplifier loop: you have to know what your goal is and be really clear on what you're trying to achieve.
00:51:59
Speaker
Then you've got to write a prompt for the AI that's going to tap into that goal. Then it's going to do something, and you have to evaluate it. So this requires human discernment.
00:52:09
Speaker
There we go: that's critical thinking, and some creative problem solving goes into it too, sometimes. And then you typically revise either the prompt or the goal, or both, depending on what it does.
00:52:22
Speaker
And you go around this cognitive amplifier loop until finally you get something that feels right, that you just want to refine and tune up. So at every stage, what you just said applies.
00:52:36
Speaker
That is: human input, human wisdom, human judgment. Come up with the goal. Formulate the prompt in a way that's going to get out of it what you really want. Evaluate what it does in a bigger context.
00:52:51
Speaker
Revise appropriately, sometimes the goal, often the prompt, usually the prompt a few times, and be able to recognize when you've finally gotten close enough to where you want to be that you can tune it up by hand.
00:53:04
Speaker
So everything you said applies at each of those stages, and that's the cognitive amplifier loop. And that's what I think people need to learn how to do.
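The goal-prompt-evaluate-revise cycle just described can be sketched as a simple loop. This is a schematic illustration only: `ask_model` is a stub standing in for a real LLM call, and the acceptance check stands in for the human "does it ring true" judgment.

```python
# Schematic sketch of the cognitive amplifier loop: set a goal, prompt,
# evaluate with human judgment, revise, repeat until it rings true.

def ask_model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"draft response to: {prompt}"

def amplifier_loop(goal, evaluate, revise_prompt, max_rounds=5):
    # Iterate until the (human) evaluator accepts the output, then hand
    # it back for final tuning by hand.
    prompt = f"My goal: {goal}. Please make a first attempt."
    output = ""
    for _ in range(max_rounds):
        output = ask_model(prompt)
        if evaluate(output):      # the human discernment step
            break                 # close enough; refine by hand from here
        prompt = revise_prompt(prompt, output)
    return output

result = amplifier_loop(
    goal="draft a study plan",
    evaluate=lambda out: "study plan" in out,        # toy acceptance check
    revise_prompt=lambda p, out: p + " Be more specific.",
)
```

The key design point is that both `evaluate` and `revise_prompt` are supplied by the human; the model only fills the generation step in the middle.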
00:53:12
Speaker
So maybe we can double-click on that idea at this point. And we can also double-click on the fact that it looks like philosophy is necessary if we have to figure out what our values are first.
Critical Thinking and AI Skills
00:53:26
Speaker
And hey, whenever philosophy gets a win, we want to highlight it. But secondly, it looks like critical thinking is fairly key to getting the most benefit from this use of AI. So I guess I have a question before I ask the question.
00:53:48
Speaker
When I was reading your book, I got the sense that you think of critical thinking not exactly as an abstract skill, but as something almost built on a lot of lower-level skills. Am I even reading that correctly? Can you just tell us what critical thinking is first?
00:54:09
Speaker
So at the most general, vague, abstract level, it's about analyzing and evaluating. But I don't think that's very useful. I actually think critical thinking is not one thing.
00:54:20
Speaker
I think it's a couple dozen separate things, which barely overlap in some cases. So I have a table in the book that goes on for about a page and a half. There are four big categories, and each of those has four to six subcategories, with examples of each one and so forth.
00:54:39
Speaker
It's overwhelming. And what I do is feed the table into an AI to have it help me with critical thinking, because it's just so much that it's really hard to keep in mind. But if you do this a few times, it's like the training wheels start to come off, because you have to recognize what it's doing so you can evaluate it; again, the cognitive amplifier loop.
00:55:05
Speaker
And after a time, you start seeing the different kinds of critical thinking it's doing. But you're quite right: each of these things gets broken down into more granular skills and competencies, and those are the kinds of things we eventually have to teach, I think.
00:55:21
Speaker
Might I recommend, Professor Koslin, that you next write a book on critical thinking, just because that's one of the courses they have us teach at my college? I've seen those textbooks, and I politely decline whenever I'm asked to teach that course. I just stick with symbolic logic and inductive logic instead.
00:55:43
Speaker
So yeah, there's a lot of confusion, I think, on that front. They treat critical thinking as if it is one specific thing. Very irksome. Could I just... Oh, sorry.
00:55:54
Speaker
Go ahead. I support that as well, the critical thinking textbook. I was just going to ask, because we're drawing to a close here at the hour,
00:56:07
Speaker
when it comes to looking ahead, how do you predict
00:56:15
Speaker
things will go? Do you think, would you say it's likely, that human beings are going to use generative AI for their flourishing? Or do you think instead we're going to use it, in a way, to replace the act of life, and it's going to become more of a de-skilling? Because I guess that's what I'm worried about, the de-skilling possibility. I don't know. I'm just kind of curious, when you look at the future, where you're
00:56:45
Speaker
hopeful things will go. Yeah.
Future Integration of AI in Life
00:56:48
Speaker
So I think the answer is both, by the way. And it's probably not even that some people are going to use it to help them flourish and some won't. I think even within a given person, it'll be used in some contexts in a way that promotes flourishing, and in some contexts where you don't care as much, or maybe you're lazy, you'll use it for shortcuts.
00:57:12
Speaker
So I think it's going to be nuanced. Yeah. And I think it's going to get integrated in, like the internet did, as just another part of our environment, though one unique in kind, because it's so interactive, so flexible, and adds so much.
00:57:29
Speaker
But I do think it's going to change a lot in ways we really can't anticipate. And we're going to have to be on our toes to make sure that we're aware of
00:57:41
Speaker
what is happening and of ways we can adjust, so that it stays about us and helping us flourish.
00:57:49
Speaker
As we begin to move toward wrapping up here, I have maybe a very open-ended question for you, and you can take it however you see fit.
00:58:02
Speaker
So, you close your book with a challenge: don't just use AI to get things done faster, but use it to become wiser, right? So I wanted to challenge you now to give us one thing someone could do today to begin that journey toward flourishing with AI. And I would like to add that you cannot say "buy my book," because I will say it now once more: the book is Learning to Flourish in the Age of AI by Stephen Koslin. But yes, Professor Koslin, what can someone do today to begin their journey toward flourishing?
00:58:43
Speaker
So a lot of people that I know, when they first encounter generative AI, treat it like it's Google. They ask it some factual question and evaluate its response, which I don't think is the best way to use generative AI. I mean, you can use it that way, but be careful of the hallucination problem.
00:59:04
Speaker
I think it's way better to use it in ways that'll help you discover new ways to think about things, ways to approach certain problems, ways to deal with things you really care about, and to come up with suggestions that'll get you thinking in new ways.
00:59:23
Speaker
So I would suggest going to a large language model, describing something you really care about, identifying something that's bothering you, that's not working out quite the way you want it to, and asking the AI for insights and suggestions.
00:59:43
Speaker
And see what it does. And be critical in evaluating what it says. You're going to be surprised, a lot. And a lot of this can sound very, very compelling. It's almost glib in some cases.
00:59:54
Speaker
So you really need to be on your toes, to take a step back and evaluate it critically, and to use this emotional response we've got when things ring true to help you evaluate its responses, and see if, in fact, you get some insight that helps you actually deal with something you care about.
01:00:16
Speaker
That would be a very first step to starting to learn this CAL, this Cognitive Amplifier Loop, which is about building on our strengths, like this emotional evaluation kind of thing we do, and helping to compensate for our limitations, which we didn't talk much about here, but AI can certainly help with that as well.