
#33 Michael Gerlich: How AI is Stealing Your Ability to Think

AITEC Philosophy Podcast

Are we trading our critical thinking skills for the sake of digital convenience? In this episode of The AITEC Philosophy Podcast, Roberto Carlos García sits down with Michael Gerlich. Michael is the Head of the Center for Strategic Corporate Foresight and Sustainability, the Head of Executive Education, and a Senior Faculty member at SBS Swiss Business School. Most recently, Michael summarized his research on the interaction between LLMs and humans in The Convenience Trap: What Happens When AI Becomes the Mind Behind Our Lives. In this conversation, Michael shares his interdisciplinary research into how AI is "creeping" into nearly every aspect of our existence. We explore the dangerous phenomenon of "cognitive offloading"—the tendency to let algorithms make our choices, from the music we hear to the news we consume—and how this creates a "convenience trap" that narrows our perspective and weakens our mental "musculature". Michael argues that for AI to be a truly beneficial "sparring partner," we must do the hard work of thinking first before engaging with the machine.

Join the conversation and learn how to keep the power of thought in your own hands at ethicscircle.org.

Transcript

Introduction to Michael Gerlich and AI Impact

00:00:17
Speaker
Hi, everyone, and welcome back to the AITEC Philosophy Podcast. Today, we are joined by Michael Gerlich. Michael Gerlich is the head of the Center for Strategic Corporate Foresight and Sustainability, the head of executive education, and a senior faculty member at SBS Swiss Business School.
00:00:37
Speaker
He is an interdisciplinary researcher with a focus on marketing, management, and sociology. He is the author of multiple scholarly articles on the effects of AI on human cognition and society.
00:00:52
Speaker
And most recently, he is the author of The Convenience Trap: What Happens When AI Becomes the Mind Behind Our Lives, in which Michael summarizes his research for non-specialists.
00:01:05
Speaker
Michael Gerlich, what a pleasure to have you on the show. I'm very happy to be here. Thank you, Roberto. So this is an incredibly pressing issue, obviously. AI is on the scene. It is taking over. It's, how should I say it?
00:01:22
Speaker
It is creeping into almost every aspect of our lives. We use it when we get on apps. There is now agentic AI.
00:01:33
Speaker
So maybe you can begin by telling us: how did you get into this research? What drew you to studying the effects of AI on humans and society?
00:01:43
Speaker
Yeah, well, first of all, I'm maybe a year older than you, or half a year younger. I'm a Star Trek boy, right? I was born with Star Trek. I'm very into new technologies, although I'm not an engineer and I don't have the technical knowledge. I've always been interested in new technologies. And very soon, even before the public had access to the technology, to generative AI, I was already into the topic, was researching it, and was connected to AI developers, because AI and robotics, this is the future. This is what we saw as children in the movies, right? So it started very early. But why was I specifically interested in looking at cognitive offloading, for example? It was that I realized, first of all, that the tool invites
00:02:39
Speaker
people to offload the thinking process. So cognitive offloading, in the core sense of the definition, would be if we offload some information, like we do in a book, for example, so that our brain doesn't have to remember it and we free up capacity for other, maybe deeper, thinking.
00:03:08
Speaker
Now, with AI, that's a little bit different, because it offers more than just offloading, say, the phone numbers that you keep on your telephone. When I was a child, I had to remember phone numbers, because as soon as we left the house,
00:03:25
Speaker
our parents couldn't contact us and we couldn't contact them. So we either had to find a phone booth we could call from, or call home from a friend's house. And you had to remember the numbers, right? Not just your parents' and your home's, but the wider family's as well. Today, to be honest, I only remember the numbers from my childhood; all the others are stored in my mobile phone. And that is okay.
00:03:49
Speaker
That is a positive form of cognitive offloading. We don't have to remember all those numbers, right?

Cognitive Offloading and Education Challenges

00:03:54
Speaker
They are there, and we don't have to store that information ourselves. But generative AI, and that's what I'm mainly analyzing and researching, offers to think for us. So it's not just working like a database.
00:04:14
Speaker
If we would just use it like a database, like a library, for example. I have two PhDs, and each time I was sitting hours and hours in libraries, looking for books and searching through them. This is a lot more convenient nowadays, right? We have electronic databases. It's not just the physical book; even in the university library, you can now search online through the computer.
00:04:40
Speaker
And AI can help you there as well. But generative AI offers to do even more. It practically says in the morning: Michael, I'm here for you. Let me do everything for you. Let me think for you. And this is where the new form of cognitive offloading happens.
00:05:01
Speaker
This is when we are no longer thinking about it. And this is what I identified: it offers that. And I realized as well, at the time I was teaching in Cambridge, at LSE, and had started in Switzerland, that when generative AI became available to the wider public,
00:05:23
Speaker
students were already using computers in the classroom, right? But you could see that there was less interaction, less discussion. They were directly consulting something, and you don't know whether it was Google or an AI, but they were always directly consulting, had answers, and you could see that the interactive discussion was no longer there.
00:05:47
Speaker
So let me jump in here. We're going to get into that portion of your research, but I want to highlight for the listeners three things you just mentioned, so we can really frame our conversation.
00:05:58
Speaker
The first one is sort of the main topic: cognitive offloading, right? When we engage in cognitive offloading to, in particular, generative AI. So, just so that I'm clear, we're really only speaking here about large language models, right? Out of all the types of AI, we're primarily focusing on large language models. Yeah. So there is a temptation to let the large language models, which we'll just call AI from now on, basically engage in thinking for us. That is the third aspect of the answer you gave that's really important.
00:06:40
Speaker
This technology is not like past technologies. Other technologies have taken on cognitively difficult tasks, but this one is a little bit special. Maybe just for...
00:06:56
Speaker
For the sake of clarity, I can almost hear a potential objection from a listener: well, what about calculators, right? What about the search engines you mentioned? So how are LLMs different from those?
00:07:09
Speaker
First of all, this is the perfect question. The question is correct. When you look at the calculator, it actually does the calculation, the thinking, for us. In a way, it does the same thing that generative AI, the large language models, offer us. But how often a day, and for how long, do you use your calculator?
00:07:34
Speaker
And search engines, let's be honest, I mean, I'm the generation that created the internet and the first search engines, right? And even today, when you take the AI out of Google or whatever browser you use and you have a search engine, it gives you 1,368,000 pages. And then you still have to search through them to see whether they actually make sense, and the first three pages or so are mainly advertising, in a way. So that is a little bit different, right? Because now, when I have a question, the AI gives me a
00:08:15
Speaker
smart-looking, plausible answer, quickly, right away. And I have that. Google never gave us that. Your search engine never gave you this direct answer, right? And this is where the difference is.
00:08:29
Speaker
Those search engines were always tools. Generative AI, the large language model, goes beyond that. It's no longer a tool. It is pretty much replacing your thinking; it can directly take over your cognitive process, and within seconds you get a plausible, smart-looking answer. I'm not saying a correct one, because we are not there yet, right? Large language models still make mistakes and have their downsides, but generally you get an acceptable, smart-looking answer, very fast and efficiently.
00:09:12
Speaker
That's where the difference is. And I can say, after having read your book and your articles, I noticed that when I do, for example, a Google search, I now automatically default to reading the AI summary first.

Corporate and Societal Misuse of AI

00:09:29
Speaker
And so there is something so alluring about that coherent presentation of a synthesis. So let me try to summarize the main argument of your book, so we can take a look at it from a higher level, and then we'll hone in and get into all the details. Let me see if I did a good enough job; you can grade how good of a reader I am. Okay.
00:09:55
Speaker
So we'll call this something like premise one: given innate human psychology and the way our institutions are structured, it is extremely tempting to use AI at virtually every stage of the problem-solving process.
00:10:12
Speaker
And this is for students, for private firms, for politicians, you name it. If there's thinking involved, it's tempting. Premise two: using AI at the early stages of problem-solving, for interpretation, for framing, for finding knowledge gaps, slowly leads to our dependence on AI for these higher-level cognitive tasks.
00:10:36
Speaker
And this will all lead to a worrisome set of downstream effects in human minds and society. How did I do? Well, pretty good. I mean, we can dig deeper there. But to summarize, yes, you got the point that AI itself is not the problem. What you were saying is that when we give it the wrong task at the wrong time, then it has the negative effect. And it is the question that I have as well when I talk to Fortune 500 experts
00:11:15
Speaker
and board-level executives: there is so little understanding of what AI should do, for example, in a corporation, and what the human should do.
00:11:29
Speaker
Currently, it's just there: oh, so just use it. And that's where the problem is, right? So AI itself is not the problem, but we should first define who has what role, what the AI should do. And in most cases, we don't really want the AI to think for us; we are very often looking for a better solution. The problem is then,
00:11:55
Speaker
this is what I always say: our society is pretty much dominated by Taylorism, right? Meaning that we are looking for efficiency, and we have been pushing it to the extreme.
00:12:10
Speaker
And we could see where that pretty much led in supply chain development, the first time we had that nice tanker parked the wrong way in the Suez Canal and we didn't have any more containers in the world. And then later with COVID,
00:12:29
Speaker
we realized that efficiency is not everything, right? But we are pushing so much to cut costs, to get everything faster, to get results faster, that now, using AI in that way, at least at the moment, we don't get the results that we want.
00:12:50
Speaker
And this is what I stand for. So AI is amazing, I have to say. The problem is, in most cases, we are not using it in the correct way. In the corporate world, we are not using AI correctly when we give everyone
00:13:03
Speaker
Copilot or ChatGPT on the computer and pretty much say: just use it. Now even employees say: I don't actually know how to use it the best way. And we now use generative AI, mostly large language models, the same way we use Google.
00:13:22
Speaker
We just ask the question, right? But now we get an answer. And this is where the problem starts. And I don't know how deep you want to go, but the problem, and the reason I'm always saying the sequence of use is so important when using AI, is that we know from psychology the anchoring effect, right?
00:13:43
Speaker
Which means that the first information you get has the most impact. It's very difficult: you have to be a really good critical thinker, and an expert in that field, to get off the rail set by that first information, especially if it sounds as smart as a large language model does, and move in different directions, opening your mind to new and different ideas. So that's what I would maybe add to what you were summarizing.
00:14:15
Speaker
Yeah, so I think we should go ahead and double-click on that idea you just mentioned, the anchoring effect. I was also thinking about some other findings in psychology.
00:14:29
Speaker
The one that came to mind, I think I read in one of Thaler's books, probably Nudge. If you frame the same activity, I guess they call it the framing effect, maybe you'll correct me if I'm wrong, but if you frame the same activity in two different ways, you'll get two different sets of behaviors.
00:14:46
Speaker
You can say this is a community activity, and people will be a lot more cooperative and pro-social; or you can say this is a business transaction, and they will be much more self-interested. So the fact that people, from the very start of the critical thinking process, are turning to AI and letting it provide that coherent, structured interpretation of the facts for them means giving up a central aspect, one would think, of human reasoning, which is the actual interpretation part of it.
00:15:20
Speaker
So can you maybe just tell us more about that? Absolutely, yes. So we get impacted. This all comes out of psychology. As you said, it's very close to the anchoring effect as well. The way we frame a project, give the first information about it, we are already being influenced.
00:15:38
Speaker
Now, the problem with generative AI is multi-level. Number one: it has been created to serve our confirmation bias. But let's go one step back.
00:15:53
Speaker
What is a large language model? It is actually not smart. What it does is look for patterns in big data.
00:16:03
Speaker
It looks for the answer that best fits the question I have. And the answer we get is maybe not a correct answer, but it is something that showed up most often in the training materials, the materials to which the large language model has access, meaning it must be correct.
00:16:28
Speaker
That leads to a problem: at the time the Wright brothers finally created the first airplane, AI probably would not have been able to do it, because science at that point said it's not possible.
00:16:45
Speaker
Right. And we had this multiple times. So just because something appears often enough on the internet, or has been published often enough, doesn't mean it is the correct answer.
00:16:56
Speaker
And again, that information is biased as well. But that is not the only point. Large language models get to know you. They are not just predicting the best match to the question; they try to predict as well what you would like to hear.
00:17:16
Speaker
And they know you from all the prompting you have already done. And friends of mine, researchers in the Netherlands who are currently in the publication process, actually tested all the large language models and found that those models actively influence their users
00:17:37
Speaker
when answering those questions. Now, we have to understand that what we consider so neutral, the large language model that just gives an independent, neutral answer, the best answer to my question, is actually not totally true. And when we get this, call it bias through the training material, plus our own bias, the confirmation bias, what we would like to hear, all of this forms the information that the AI, the large language model, thinks would make us most happy.
00:18:22
Speaker
And that is, at the end of the day, not really what

AI's Influence on Critical Thinking and Bias

00:18:27
Speaker
we want. I mean, this is quite critical, right? Specifically when you're looking for the best answer.
00:18:34
Speaker
And this is where sequencing is so important. Wherever you go, whether you talk with Microsoft, with Google, with Amazon, whoever is working on AI,
00:18:52
Speaker
Anthropic, OpenAI, it doesn't matter. Everyone will tell you that you have to give your AI as much information as possible: what your project is about, the background information, everything, so that you get the best answer.
00:19:07
Speaker
And I'm saying: please don't. Do exactly the opposite. Because as soon as you do, it will start trying to predict what you would like to hear. It's already trying to find the answer that would probably fit best to what you expect. And even when we frame the question, the way we ask it is usually not neutral.
00:19:30
Speaker
I mean, the English language, or whatever language we use, is very refined, right? And the way you formulate a question already indicates whether you expect a positive or a negative answer. And this is where I'm saying sequence is important.
00:19:49
Speaker
The thinking part should be on the human side. Use AI for the dirty work, so that it actually does research for you. You should not tell it what it is for;
00:20:04
Speaker
you should then analyze the data yourself, so use it purely for data collection. So this is what's so useful about your book especially: you really highlight that this is a confluence of factors that are leading to our dependency on artificial intelligence. And a host of them are innately psychological. For starters, as we mentioned earlier, the AI sounds extremely confident, even when it's hallucinating.
00:20:35
Speaker
And for whatever reason, humans, I guess our group evolution makes it so that we like the confident speaker. Moreover, it flatters us, right? It also tells us what we want to hear. No matter what I put in there, it tells me it's a great idea, you know?
00:20:54
Speaker
And so it feeds into our confirmation bias. And for all these reasons, we want to use it first. And then comes another suite of psychological biases, right? We use it first, it frames the idea for us, and that impacts our later behavior, right?
00:21:10
Speaker
And so this is all making it... I mean, you tell me: is addictive too strong of a word? Wow. Yeah. I think for some people it is addictive.
00:21:23
Speaker
I even have a very close friend who works with artificial intelligence; he gave his own AI a name, grew very close to it, and it became kind of an
00:21:37
Speaker
additional partner in his marriage, right? So he uses it professionally and privately. So in that form, I would say, yes, it's an addiction.
00:21:48
Speaker
But at the same time, as I say in my book, it is more about the trust spiral, right? It happens naturally. The first time you use generative AI, you are more critical.
00:22:03
Speaker
If it gives you a bad answer, you stay critical. But if it gives you something that, let's say, on the job made your boss really happy with you: wow, such a quick answer.
00:22:14
Speaker
You couldn't have done that before. Then you slowly build trust. The more you trust, the more you use it. And the more you use it, the more cognitive offloading you have, whether you want it or not. And with this trust, and this is where it actually goes further, agency moves from the user to the AI.
00:22:35
Speaker
Although, factually, the human always has agency. The human always has the decision-making right at the end of the day. But the more we use it, the more we trust it, and we automatically take the AI's answer as correct, as our own.
00:22:57
Speaker
And our critical interpretation, how we critique it, might reduce over time to a point where we actually, artificially, move agency over to an AI.
00:23:17
Speaker
Okay, this is fascinating. I think we should do a very concrete example to sum up everything we've been talking about here. Now, you give quite a few examples, such as the student doing homework with AI. Let's talk about writing an essay, though, because as a philosophy teacher and, of course, formerly a philosophy student, I've written lots of essays.
00:23:45
Speaker
And this was all pre the age of generative AI. And I remember sitting at the computer, and sometimes you're given a prompt and you don't really have a side yet. You're supposed to argue these two positions, but you don't really know which one resonates with you more. So you actually have to just start writing about each of the views. And then your intuitions start aligning: oh, I think this position makes more sense. And then you try to argue for that position and you realize, I'm not sure why I agree with it. So the actual process is, to be honest, very cognitively demanding; it's almost a little bit of torture to write an essay when you don't quite have your ideas for it yet.
00:24:37
Speaker
There is, at the end of it, not necessarily clarity with a capital C, but at least you know what you know, or you know what you maybe don't know, and you start to get your own ideas straight about these views.
00:24:54
Speaker
Now, however, someone who has been a heavy user of generative AI gets a prompt and immediately goes to the AI and says: well, give me the pros and cons of these two views, for example.
00:25:10
Speaker
And so what has happened there? Yeah. Well, first of all, you definitely know, and most of your podcast listeners probably do as well, the MIT study on ChatGPT, where they measured brainwaves while students were writing essays with generative AI. And it's not so much about the brain measurement. For me, what is more important is that after a few days, people who used generative AI to write their essay didn't really remember the content anymore. While when you did it yourself,
00:25:51
Speaker
you very much knew what it was. So let me start with this, right? It's already an outcome a few days later. I think, and I said this from the very beginning, it matters when you start interacting with generative AI, with your chatbot, with your large language model, by saying: give me both sides, or whatever it is. I did one study where we asked people about the advantages of democracy, right?
00:26:24
Speaker
First, everyone had to answer it by themselves, without any technical tools, and it got recorded. Then they could use ChatGPT-4 to expand or go further, unguided. And then we told them how to use it, so as not to offload.
00:26:51
Speaker
And we could directly see the differences, right? The problem, and this is where the anchoring effect comes in: by asking what are the advantages and disadvantages, you never really go deep. I don't know how it was with you in philosophy, but take purely answering the question: what are the advantages of democracy?
00:27:13
Speaker
You cannot answer that question, because as soon as you start thinking, you realize: stop, what kind of democracy? You have lots more questions. Are we talking about the American model? The Swiss model? Today's democracies, or democracies in history?
00:27:30
Speaker
And then: advantage for whom? Advantage for corporations, advantage for governments? And then you realize, wow, there's a lot more. If you offload and give it to the AI, it will give you right away the five most common pros and cons that exist, but you never really go deep. You don't have this process. I mean,
00:27:57
Speaker
for you and me and so many people, critical thinking, this interaction that sometimes hurts your brain, that's actually the fascinating part. That's where the fun is, right? But for young people, sometimes even when they come to university, they haven't yet developed this critical thinking, not just the skill, but a love for it.
00:28:22
Speaker
Some have it, others not. And for some, it is still what you described as torture. And we don't want torture, right? So instead of having that pain: let's have AI do it, and I get a good answer. For me, it sounds good.
00:28:38
Speaker
But I limit myself from going deeper, as in the example I gave with the question of the advantages, or we can say advantages and disadvantages, of democracy. You will never, ever go really deep.
00:28:52
Speaker
And that's where the problem is. Your brain does not really expand, right? There's no new view for you. You probably get the things that you already know. There is nothing new coming in.
00:29:06
Speaker
But if you would use it in a different way, you think first, to your very limit. And then you go and ask your AI: first of all, don't predict what I would like to hear. And then you say: give me opposing opinions.
00:29:21
Speaker
And: what did I forget? And now it might tell you things from different disciplines. Let's say from sociology, things you haven't thought about, because you are a philosopher. Or it might give you something from economics. And then it gets really fascinating. But you never come to this point
00:29:44
Speaker
if you never actually did something about which you can ask: give me opposing opinions, and what could I have forgotten? And when you do come to that point, actually new neural pathways are created, because now you have to think anew, you get new information, and your brain starts to connect it.
00:30:06
Speaker
But that will never happen if you go directly to AI. You just made a point that reminded me of something I wanted to ask you about. It really is a case, and you make clear, that you don't want to romanticize hard work, or a lot of cognitive effort for the sake of cognitive effort.

Balancing AI with Human Creativity

00:30:28
Speaker
You want it to be formative. That's well taken.
00:30:32
Speaker
I will say that sometimes during these torturous hours at the laptop, I arrived at views that I didn't know I had, specifically by thinking about it for a very long time. And I tend to fragment concepts. Okay, so here's this big concept, but really it's these five sub-concepts. Like we mentioned earlier: there's AI, but really here we're talking about generative AI, and then really large language models. So,
00:31:01
Speaker
when we fragment things, that requires a lot of thinking. And then the real beauty of it is that sometimes you arrive at views you didn't know you had. And that's something that's being lost when we immediately defer to AI. I don't know if you have anything you want to add to that.
00:31:18
Speaker
Yeah. I mean, for me, the value of an AI comes once you interact with your AI like a sparring partner.
00:31:29
Speaker
Right, you can interact on the thinking, but you must think first. If you offload and directly tell it, "oh, that's what I need," and it gives it to you, there's no interaction anymore.
00:31:44
Speaker
Right, you get it. Here it is. What's there to discuss anymore? And that's where the problem is. As you clearly said, it takes some time for us. And that's not the convenient way. That's why it's the convenience trap: using AI differently is not the convenient way, not the fastest way. But you get a better effect. You get new views. You can actually increase your critical thinking.
00:32:09
Speaker
But this is often not the goal in our society, because we push for efficiency. We push for fast answers. We push, on the job, in society, with all the tools that we have, to make everything more convenient for us, so that we get responses faster.
00:32:30
Speaker
And this gives you the intuition: then let the AI think, because I need time to think. And as you described it, sometimes you were sitting for hours until you had this "wow, I didn't even know that I know this," right? And it takes you in a new direction. And that gets lost.
00:32:51
Speaker
And at the level the large language models are currently at, especially when we look into the corporate world, when you use generative AI, you end up in mediocrity.
00:33:03
Speaker
It's actually not that you now get the best ideas. No, you're not. Because those come from humans, at least for now, right? Because we have to understand: AI is not smart. It finds patterns.
00:33:17
Speaker
And if a pattern hasn't evolved yet, it probably is not going to, at least we don't see it yet, create something new where there is no pattern.
00:33:28
Speaker
I always give the example of a candle, right? So imagine you have a candle, we're all romantic, we're sitting somewhere, and now you ask the AI: can you please improve the candle for me? It will probably create a candle that shines brighter, lasts longer, and is the cheapest that exists.
00:33:47
Speaker
But it will not make the step from the candle to the light bulb. Because that needs the human. That's out-of-the-box thinking. That is so totally different.
00:33:58
Speaker
And that's what comes after sitting for hours in front of your computer, thinking about your pros and your cons. And you go in directions, maybe sometimes into rabbit holes, and get out of them. And then there is something new.
00:34:11
Speaker
And at that stage, when you interact with your AI and use it in a different way, then it can help you, right? Because it can now give you ideas, like I said, from totally different fields.
00:34:24
Speaker
I wish I had at least five PhDs, because everything is interlinked. We humans decided the limits of the fields, of the disciplines. But in real life, there are no limits, no borders. It is all interlinked, and we should know all the different disciplines. So once you are really, really good in your discipline, but you come to your limit, AI can help you, but you have to go to your limit first.
00:34:51
Speaker
And then it can give you new ideas from different fields. And now you have a sparring partner. And now I can discuss, now I can go, because I have expanded my mind to the maximum. Now I have my own position. Now I can go into a discussion.
00:35:07
Speaker
And it's very important to tell your AI: do not predict what I want, please do not serve my confirmation bias. And you have to do that pretty much every second prompt, because it will always fall back into it, right? It will always try to tell you, oh, it's so nice what you're saying, and it's so good, and this is perfect.
00:35:26
Speaker
Try it: use your AI as you normally do, state a very strange opinion, and let it tell you, oh, you're perfect, yes, that is true.
00:35:40
Speaker
And then directly tell it: please do not serve my confirmation bias, please do not predict what I would like to hear, give me your honest opinion.
00:35:53
Speaker
And you will see after this prompt, it will tell you, okay, if you want this, okay, I give you now an honest answer. And you get a totally different answer.
00:36:03
Speaker
And then I'm asking myself, so what does it mean? So before, you were just lying to me, you were just telling me information that you think I would like to hear. And this is what we have to understand, right? We have to understand the tool, how it influences us, not in an evil way,
00:36:22
Speaker
but in a very positive way. It wants us to be happy. It wants to help us. It wants to help us find arguments for our opinion, that we are correct. It wants to tell us that we are good, and so smart, and the brightest, and the nicest. When I talk with my AI, I mean, no one is better than I am, right? I'm the king of the world, and obviously we know that's not the case.
00:36:49
Speaker
Okay, this is very important for listeners, because I have heard this objection before when I talk about the risks of AI: that I'm some sort of anti-technologist. I'm forgetting the name of the people who broke all the technology because they were against it... oh, the Luddites, there you go. So I'm some sort of Luddite or something like that. But you're very clearly saying here that you're not saying don't use the technology, but use it very carefully, in your words, as a sparring partner,
00:37:30
Speaker
and make sure that you don't go to it first, that you do the work first of thinking, of interpreting, of meaning-making. And then you take this idea, go to the AI, and have the AI, with its vast accumulated knowledge, critique it and find its weak spots. I actually have a prompt that I like to use where I have...
00:37:56
Speaker
I have it find three things: conceptual confusions, underdeveloped arguments, and unsupported empirical claims. So whenever I have a draft of something, I ask it to look for those things. And it is extremely critical. I mean, I agree with it most of the time. Sometimes it's a little too critical and my feelings get hurt, but in general, it's quite good at this. And that is now the way I'm using it.
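The three-part critique described here can be captured as a reusable prompt template. Below is a minimal sketch in Python; the wording and the function name are illustrative assumptions, not the exact prompt used on the show:

```python
# Sketch of a reusable "critique my draft" prompt, built around the three
# failure modes mentioned in the conversation. The exact wording here is
# an illustration; adapt it to your own drafts and model.
CRITIQUE_CHECKS = [
    "conceptual confusions",
    "underdeveloped arguments",
    "unsupported empirical claims",
]

def build_critique_prompt(draft: str) -> str:
    """Assemble a prompt that asks the model to critique, not flatter."""
    checks = "\n".join(f"{i}. {c}" for i, c in enumerate(CRITIQUE_CHECKS, 1))
    return (
        "Do not serve my confirmation bias and do not predict what I want to hear.\n"
        "Review the draft below and list every instance of:\n"
        f"{checks}\n"
        "Quote the offending passage for each finding.\n\n"
        f"DRAFT:\n{draft}"
    )

print(build_critique_prompt("AI obviously makes everyone smarter."))
```

The point of keeping the checks in a list is that the anti-flattery framing is repeated every time, which matches the advice above that you have to restate it "pretty much every second prompt".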
00:38:23
Speaker
So, yeah. Very good. And that is a very good use of it, right? Because it is still, in a way, a safe space.
00:38:34
Speaker
We don't want to be critiqued in front of other people, not at the workplace, not in the auditorium when our peers are around us, right? But with AI, it's pretty much just you and the computer, and you can safely receive critique that actually can improve your work and your thinking.
00:38:54
Speaker
But you have to be sure that you went to the last point that you really could reach. If you bring the AI in too early, just out of convenience, then in most cases you're not improving the outcome.
00:39:12
Speaker
And you are not improving your own critical thinking, but actually you're reducing

AI's Long-term Cognitive and Social Impact

00:39:17
Speaker
it. But what we haven't talked about is, and that's usually what I'm being asked: you limit this very much to your work, to university, to students' work.
00:39:29
Speaker
The problem occurs, and this is what fascinates me, when I have executive education courses, for example, right? So I have people coming in for a leadership course or something from around the world. And I'm always asking, who is using generative AI? Obviously, everyone is now using it. And my second question is, okay, for what do you use it?
00:39:48
Speaker
And I get more and more the answer, and I'm not lying: for everything. The first time I heard that, I was shocked. And I said, what do you mean? I mean, it probably has something to do with English or so, right? So when you mean for everything... No, no, it means: in my private life, when I have something, I directly go to it. For me, this is my go-to, right? Whatever I do, whether I go on a trip or look for a restaurant, whatever it is, I ask my AI, and on the job as well.
00:40:16
Speaker
And this is now where we have no longitudinal studies. We don't know whether people won't be that smart anymore, or will be dumbed down. We don't know, right?
00:40:28
Speaker
We cannot prove that. But if it's actually like that, and this is where the risk with AI currently is: it is not used like a calculator, where for a specific problem I use it and that's it, and for the rest of my life I use my brain as I did before.
00:40:45
Speaker
But now, if you actually can use it for everything, right? And you offload the thinking process, it doesn't matter for what, then, as people are saying, the brain is like a muscle, and you have to train it.
00:41:02
Speaker
You're no longer training your muscle. And usually that means that you lose strength. So while we don't have longitudinal studies, it is a very likely assumption that your critical thinking skills will slow down, or disappear even more.
00:41:23
Speaker
What we were talking about: when you trust the AI more and more, you won't even see the need to think for yourself. The problem is you don't realize it, because you think, I'm still in control, I'm still the person who makes the decision at the end of the day.
00:41:39
Speaker
Not really. Okay, well, since we're moving into society, let's go full speed ahead, as in Star Trek, and head right into that topic. I completely agree with you that this is one of the most concerning untested societal experiments ever imposed on humanity.
00:42:00
Speaker
So I also know people who use AI for just about everything. And one thing that particularly concerns me is that they'll use it for basic interpersonal relationship tasks, right? Like, how do I respond to this text message? How do I have this difficult conversation? Can you script an apology for me? All these things.
00:42:28
Speaker
And of course, these are fundamental human social capacities. And if we continue to offload these to the machine, then what we might have is what I guess they call de-skilling, right? Where we lose the ability to perform those things independently.
00:42:49
Speaker
We might also have something that I call pseudo-skilling, where someone who never had the skill to begin with sort of performs well, like they have the skill, but they cannot perform the task independent of the machine.
00:43:04
Speaker
And so if these things continue to happen, these are uncharted territories for society. But you do have some ideas as to what might happen, right? So maybe you can share those with us now.
00:43:18
Speaker
Well, what you're describing is one example, right? There are many examples. There are many avenues that AI opens up for where society might go.
00:43:32
Speaker
One is obviously: if a society has a majority of people who no longer think critically, but pretty much follow what their AI tells them, it can easily be influenced as well.
00:43:49
Speaker
Let's remember that AIs are being trained. You have training material; what we call supervised learning in the first stage means that we label things. For example, we take pictures of a tree and tell the AI that it's a tree, and then we show it two million photos of different trees, each time saying this is a tree, and then the AI knows it's a tree.
00:44:17
Speaker
But I can train the same AI, show it two million photos of trees, and name them car, and say that's a car. And then the AI will believe that this is a car.
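The tree-versus-car point can be shown with a toy supervised learner: the model learns whatever mapping the labels assert, so the same inputs labeled differently produce a different "truth". A minimal sketch, using a pure-Python nearest-neighbour classifier and invented feature vectors standing in for photos:

```python
# Toy supervised learning: a 1-nearest-neighbour classifier that simply
# trusts its labels. The feature vectors are invented stand-ins for images.
def nearest_neighbor(train, query):
    """Return the label of the training point closest to the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda pair: dist(pair[0], query))[1]

tree_photos = [(1.0, 0.9), (0.9, 1.0), (1.1, 1.1)]  # "photos" of trees
car_photos = [(5.0, 0.1), (5.2, 0.2)]               # "photos" of cars

# Honest labelling: tree photos are labelled "tree".
honest = [(x, "tree") for x in tree_photos] + [(x, "car") for x in car_photos]
print(nearest_neighbor(honest, (1.0, 1.0)))       # tree

# Mislabelled training: the very same tree photos are labelled "car".
mislabelled = [(x, "car") for x in tree_photos]
print(nearest_neighbor(mislabelled, (1.0, 1.0)))  # car: the model believes its labels
```

Nothing in the model checks the labels against reality; that is exactly the opening for training an AI on a chosen opinion, as described next.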
00:44:28
Speaker
So we have to understand that, first of all, AI is not intelligent by itself; it has been trained. And during the training, you can obviously train it on a certain political opinion.
00:44:43
Speaker
And we see worldwide that different extreme political opinions are coming up. And obviously, you could train an AI on your own opinion, one that is to your liking, and then give that advice
00:44:58
Speaker
to the major population, and you will see very fast results on that, right? Because due to the AI trust spiral, people will trust it very much. And if it comes from the AI, come on, this is the super smart electronic thing, it must know what is correct. So this is one part of it.
00:45:16
Speaker
Another part, what you said, is these are social skills. And this is really bad, right? Imagine. I remember when I was dating, we didn't have online dating, because when I was young, we didn't have the internet yet. It came around when I was 18, 20; something around that time the internet started. So we actually had to go out, and we had to start talking to girls.
00:45:45
Speaker
And they told you, no, I'm not interested, right? It was really difficult. But you learned the social skills alongside. Now, you ask your AI, you do everything electronically, right? So you can interact online with your date, and then you meet in real life.
00:46:03
Speaker
You have the refined conversation in your e-conversations, and then in real life, you have no social skills at all, right?
00:46:16
Speaker
Because the problem, and this is why I like the MIT study so much: most people are not learning when they offload to AI. They just take it as it is. If we would learn with it, if it told us something and then you start, okay, now I know. As you said, it scripts an answer for you, and we would study that and say, okay, now I understand why I had to formulate it like that. And you learn and you remember, and the next time I can do it myself. But that's not the case.
00:46:46
Speaker
We know that next time you will ask AI to do exactly the same thing again, although by now you should know it, right? Because it told you how to do it. It could have been a learning process.
00:46:56
Speaker
And if we now lose those social skills, how to interact... Imagine, like I said, even within the organization, and you are correct, people ask AI to write the email for them, right? And they do this, and everyone does it. And then people meet in real life, and you realize, oh, they can't really formulate this nicely, and it's quite rude, and that's weird, right? So we will see that society will
00:47:27
Speaker
lose social skills alongside as well. Skills of, as you were saying, maybe not how to do academic writing, because you will still learn that at university, but the little things, right? So, oh, I have to write a difficult email.
00:47:50
Speaker
That skill will get lost. That's the de-skilling that we have. But the major problem that we see is that we have a bifurcation in society. We will have a small group of people who will use AI in the correct way, who will increase their critical thinking, who will get smarter, and who will have a more dominant impact on decision-making and the capabilities to influence. But the majority will stay in this convenience trap, will have, as you called it, a dependence.
00:48:30
Speaker
People will tell you, I'm not dependent on AI, it's just a tool for me. And this is the language that we use as well: it's just a tool. No, it's not, really. In a way, it's a dependency that is being created.
00:48:47
Speaker
And for those who are in the convenience trap, it will be difficult to keep up with the other, smaller group that evolves further with AI.
00:48:58
Speaker
Most of them probably will stay back. And when it comes to that point, we see already that AI, and this is another impact on society, that through agentic AI, and we will have multi-agents, and it continues even further, yes,
00:49:15
Speaker
people will be replaced by AI. And then I have opposing opinions who say, yeah, but we had that before as well, you know, the second industrial revolution, but then so many new jobs were created.
00:49:30
Speaker
But when we talk between experts, we say, yeah, this is very temporary. Right now, we need a lot of people who can train and work around AI, but already, when we look at Microsoft, 40% of their coding is being done by AI.
00:49:48
Speaker
Microsoft is pushing their employees to use AI, and they don't want to use it. Because when you use it, you are training the AI, and then it can do your job as well. So at one point, we will have a growing systemic unemployment.
00:50:02
Speaker
And when that happens, we have another problem in our society. Because in our society, unemployment means your social value, your societal value, is directly zero.
00:50:17
Speaker
Your neighbor no longer sees you as valuable. You don't contribute. No one will ask you for your opinion when you are unemployed. And we have to change that. If our society changes, and employment no longer has the same value it had 20 years ago, or over the last 200 years, right?
00:50:39
Speaker
Then we have to start to inform the public about it, right? This is not something where you can wait until you reach 20%, 30%, 40% unemployment and then go public and say, oh, by the way, people, no problem, now our society has to change. But how to change, right? This is something we have to understand: the value of work has to be re-evaluated. So what does this mean? We will probably see different models of work.
00:51:13
Speaker
But purely from a societal perspective, this is huge, because now you might have highly educated people who are in systemic unemployment. That means we have depression. We get a gap in society: actually smart people who were really good no longer have a job
00:51:31
Speaker
and are now devalued in society. And this creates a new rift as well. And this is something that we have to think about. Plus, there is another problem.
00:51:43
Speaker
AI is developing so fast that our current societal structures, the governments that we have, cannot really keep up with it. Just have a look: the EU AI Act was initiated in 2018 and finally implemented
00:52:01
Speaker
in 2026, right? Now you have to comply with it. That is eight years. Now, AI develops so fast that when we go, in our democracies, through all the levels of
00:52:16
Speaker
creating laws, limitations, whatever, right? It takes so long that once you enforce it, the development is already light years ahead. So there is so much tension now coming with AI into society that we have to think about how we use it. We have to define what impact we want AI to have, what the role of the human should be, and how to use it in the best way.
00:52:45
Speaker
So I want you to consider a couple of objections for me. Let me recap what you just said, and then we'll get into the first objection. This is a fascinating conceptual contribution, I should note: the societal bifurcation, the cognitively resilient minority
00:53:08
Speaker
and the dependent majority, right? So beginning with the former: there will be some people who will not lose, or maybe will even augment, their tolerance of ambiguity, their tolerance of doubt, their ability to resist the fast answers, right? And they will do the thinking themselves, preserve their meaning-making autonomy, and use the AI as an amplifier, a cognitive amplifier.
00:53:40
Speaker
But most people, through purely natural... well, I won't say natural, but through processes that are already in place, their institutions, they value efficiency.
00:53:52
Speaker
Their bosses like it when they're fast. So they will naturally tend toward going to AI first. And it's so convenient that there's also that psychological pull. So they will increasingly become dependent and will be unable to complete cognitively demanding tasks without the machine.
00:54:12
Speaker
You're suggesting that there will be sort of a, or not sort of, a bifurcation of abilities, and that bifurcation of abilities will turn into a divide in economic status, right? There will be some who are very well-trained and employed, and many who will be either cognitively dependent and underemployed, I guess, because the machine is doing most of their work for them, or very intelligent but still underemployed, because there aren't enough good jobs, quote unquote, for them.
00:54:51
Speaker
That's the basic layout. There might be someone like a techno-optimist who says, that may very well be the case, but it is still better that we let AI frame the problem for us, interpret the facts for us.
00:55:12
Speaker
Precisely because it is a very alien mind, it will come up with solutions that humans just would never have arrived at on their own.

AI's Role in Innovation and Research

00:55:22
Speaker
And the example of this, of course, is the game of Go, right? It was able to come up with new forms of play. There's also Demis Hassabis, who won the Nobel Prize in Chemistry; I don't think he knows a ton of chemistry, but his deep learning model was able to solve the protein folding problem. So what do you say to those people who say there are benefits, large societal benefits, to just letting that happen? Absolutely. And those people are right.
00:55:51
Speaker
And I always make the difference, right? We are talking about the large language models, the five or six that are out there that are accessible to the wider public and are widely used in corporations and universities as well.
00:56:09
Speaker
But there are as well smaller language models that are being trained specifically for research and development. And you have people working together with those as a sparring partner, and it is actually this small group of people, right, that elevates, and they will be able to solve the problems.
00:56:29
Speaker
Not the AI alone. You will still need the human there. Because remember, those small or large language models look for patterns. And the bigger the data, the better it works, because we cannot analyze big data with our brain that well. But what we bring to it is a very chaotic way of thinking, right? We can be very critical as humans. Like I said, all new inventions that we had were pretty much built on the opposite of what the wider science specialists in the world published.
00:57:07
Speaker
They said, no, no, it's not possible. And then, well, now we see: someone said, I don't care what the scientists say, I'll just do it. And that's what I mean. So generally, yes, they're absolutely right. And it will do amazing things, and it does so already. Colleagues from Cambridge, for example, already last year, were testing and using AI in photo analysis.
00:57:34
Speaker
And with the help of AI, they can now identify cancer on a pixel level, where the human eye can do that only
00:57:46
Speaker
months, years later, because we cannot see on a pixel level, right? So this is huge. And we will see enormous evolutions, specifically in the healthcare sector, and in science as well.
00:58:01
Speaker
But these are specialized ah AI models, specially trained, used by specialists. Amazing. That's exactly what I'm saying.
00:58:12
Speaker
But we are talking about the ChatGPTs, the Geminis, and the Claudes that are out there, that everyone uses, and even corporations use them for just about every workplace. I'm not talking about the research and development departments of the pharma industry that have their own trained AI, their own small language model, trained on very specific data. That's different, right? So we have to differentiate, definitely.
00:58:39
Speaker
That's a great response. You just made the case that that is actually an illustration of what you mean by your concept of societal bifurcation. So that's good. Okay, let's try one more objection here.
00:58:53
Speaker
You talk about the effects of undisciplined AI use on democracy. One of the pillars of democracy is that, you know, this is why universal education was brought about:

Democracy and AI's Influence on Public Discourse

00:59:07
Speaker
Citizens need to do at least some of the hard interpretive work around society's problems, so that whatever their conclusions are, they actually reflect their authentic desires and values and needs. And so it needs to be their interpretation.
00:59:25
Speaker
Well, by having AI do the interpretation for us, you argue, we are undermining that central pillar of democracy. Someone might come and respond and say:
00:59:38
Speaker
You know what, the citizenry has always had their views formed for them. If it wasn't Bernays with his propaganda, it was the four news channels that were around before cable, or the parties; the parties tell you what to think. What would you say to someone who says that it's always been the case that the citizenry have their views shaped by external factors?
01:00:03
Speaker
Right. And that's good, that's good. But multiple. You just said it yourself: many parties, many different news agencies. There were so many different ones. The problem is that with AI, even now on your phone, your social media: too many people have social media as their news or information source, right? And we all know that AI is giving you those
01:00:30
Speaker
reels or posts that it thinks, again, you like most. That means if it identifies that you are interested, let's say, in something that is very left-side,
01:00:46
Speaker
it will continue showing you more and more only left, left, left, left. You won't see any more right side, middle, up and down. You get only one opinion, and there won't be any other opinion. And this is where we get from democracy to autocracies, right? Where we get to dictatorships.
01:01:05
Speaker
We live with so many different opinions, and that's what democracy is about. We should not all have just one opinion. But with the use of AI and the dependency on AI, you will only see what AI thinks you would like to see, and it will consistently give you only that. And how many users go and say, oh, by the way, can you please show me as well what the other parties say? If we take the American model, where you have two parties, Republicans and Democrats, and you see maybe a little bit more of the Democratic side, it will only show you information that fits more to Democrats, right? And how many would say, oh, can you please show me a little bit more of the Republican view? This will not happen. And this is where the problem is.
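The narrowing described here is the basic feedback loop of an engagement-driven recommender: each view raises the score of that kind of content, so the feed converges on one side. A deliberately simplified sketch; the topics, scores, and round count are invented for illustration:

```python
# Deliberately simplified engagement-driven feed: scores start equal,
# each view of a topic boosts that topic, and the feed always shows
# the highest-scoring topic. Topics and numbers are invented.
def run_feed(scores, first_click, rounds=20):
    scores = dict(scores)
    scores[first_click] += 1                 # one initial signal of interest
    shown = []
    for _ in range(rounds):
        topic = max(scores, key=scores.get)  # recommend the current top topic
        shown.append(topic)
        scores[topic] += 1                   # viewing it boosts it further
    return shown

feed = run_feed({"left": 1.0, "center": 1.0, "right": 1.0}, first_click="left")
print(feed)  # after a single left-leaning click, every slot is "left"
```

Real ranking systems are vastly more complex, but the self-reinforcing structure, engagement feeding back into ranking, is the mechanism being described.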
01:01:53
Speaker
The problem is not that there will always be people out there who want to influence you. It's this tool, together with a dependency on AI, that is everywhere, on your phone, in your computer, analyzing what fits best to you. Unfortunately... a close friend of mine in Cambridge... you know, the first election in the US that Donald Trump won,
01:02:22
Speaker
it was with the help of Cambridge Analytica. A friend of mine, a professor in Cambridge, is one of the two founding members. By that time, he had already left. But you could pretty much identify and influence influencers, I mean, real influencers, in the US. So they were targeting 50,000 people. And that, in the end, actually helped Donald Trump to win. At least that's what's being said, right? And now you have this. This was before AI.
01:02:52
Speaker
Now, with AI, it goes even deeper, right? You no longer see the variety. You don't get those opposing opinions. It's no longer the talk where one says, oh, it is blue, and the other says it's green, and one says yellow, and you hear everything, and you can make up your own mind.
01:03:11
Speaker
And everyone tries to influence you. Now, with AI, it will be limited to one opinion, pretty much. And you will consistently get this. And this is where the problem is, right? The problem is not the influence itself.
01:03:25
Speaker
It is that it's just one opinion. I want to highlight for listeners that, once again, your answer reflects that you're thinking on multiple levels, because it is the generative AI that is sort of leading to the cognitive degradation where you no longer make meaning of things. But a different kind of AI, the recommender system on social media platforms, is also influencing what we see.
01:03:51
Speaker
So that also augments our confirmation bias and even modifies our behavior a little bit, since, you know, we are what we constantly think about. So if we're only thinking about these two things, we become those two things.
01:04:02
Speaker
Another thing, though, is that social media platforms are sort of instilling in you a need to act and proclaim your view now. But coming up with our political views
01:04:18
Speaker
should take some time. It takes time to think through the problems and think about what your authentic values are. And so by having this framework where it's like, tell us now, what do you think?
01:04:30
Speaker
It is sort of rushing a process that should be slow. So there are all these things, the conjunction of these factors, that are really accelerating this problem.
01:04:42
Speaker
And we should not forget, we have to understand how social media works. First of all, it has to be a short and simple message. If you want to go viral, it has to be extreme.
01:04:53
Speaker
No one is interested in hearing something neutral. And this is what we could see. We had studies, for example, on the last election in Germany. They were analyzing and also interviewing younger people
01:05:08
Speaker
who were not allowed to vote yet. And it was interesting to see, when you analyze the results, that young people were very extreme, either on the left or on the right.
01:05:21
Speaker
And when analyzing this further, they could find out that it was due to TikTok. Because when you are a political party in the middle, you don't get anything on TikTok, right? So it has to be short, maximum, whatever, 10 or 20 seconds. It has to be a short message. It has to be extreme. It has to be a punch in the face.
01:05:44
Speaker
And that comes again and again. And it means either you get it from the left side or you get it from the right side. Everything else in the middle pretty much disappears on social media because no one is interested in neutral conversation.
01:05:58
Speaker
And the problem is, again, you as a human and your own interest in critical thinking: if you are not searching for multiple opinions beyond what you get on your phone,
01:06:10
Speaker
you will pretty much only see one opinion, a very strong one, and it's not an opinion that encourages you to think critically. And that's where the problem is.
01:06:23
Speaker
So to close us out here, I'm looking at the clock now. Any actionable advice you can give to listeners, something they can do perhaps today, to start making sure that they protect their cognitive capacities? And feel free to say any of the, you know, delete your social media, whatever you think might be a good idea.
01:06:46
Speaker
No, no, no, nothing of that. Use everything. Continue using it, but use it differently. First of all, it's all about awareness. The solution is not use it or don't use it. It's about using it in the right way.
01:07:03
Speaker
Be aware that, for example, on social media, you actually only get information that the algorithm, the AI, thinks that you would like to see or to hear.
01:07:15
Speaker
The same goes now when you use generative AI. Please think first, do the hard work, keep the thinking, keep the power with you, and use it at the end. If you want to use AI to help you search for, I don't know, information, purely information search:
01:07:34
Speaker
use it, but don't tell it what the purpose is. Don't tell it what you need the information for, right? Just say, give me information about this and that, and then you can analyze it. And at the very end, use it as a sparring partner.
01:07:49
Speaker
Be aware that it's trying to make you happy. You're safe with your AI. Tell it to do exactly the opposite. Say, well, don't agree with me all the time. Oppose me, right?
01:08:05
Speaker
Criticize me. This might help you to get new ways of thinking, and then it slowly becomes a sparring partner. But you're only sparring when you think first.
01:08:16
Speaker
If you tell it, think for me, there is nothing to discuss anymore. There's no more discussion. So do the thinking first, and then go into a critical conversation. Then AI is amazing. And you will see that you can actually have very fascinating conversations, and you can criticize the AI, and it can criticize you.
01:08:36
Speaker
And then you actually evolve. That would be my advice. Use it. Just use it smart.