Introduction to ATech Podcast and AI Discussion
00:00:17
Speaker
Hi everyone and welcome to the ATech podcast. Today we are in conversation with Kevin Crowston and Francesco Bolici. Kevin Crowston is a distinguished professor of information science at Syracuse University and Francesco Bolici is an associate professor of organizational design and management of information systems at the University of Cassino in Italy.
Exploration of 'Deskilling and Upskilling with Generative AI Systems'
00:00:44
Speaker
Together, they've co-authored an important paper titled Deskilling and Upskilling with Generative AI Systems, which we'll be in conversation about.
00:00:57
Speaker
But before we jump in, let me quickly preview a few key concepts that'll help you follow the discussion. The first is the main topic of the conversation, de-skilling, which refers to the loss of a skill, either because a machine takes over a task or because a person stops practicing it.
00:01:18
Speaker
Think of how you no longer remember how to take square roots by hand because a calculator does it for you. The second is upskilling, which refers to acquiring new or deeper skills, especially in response to new technologies.
00:01:34
Speaker
With generative AI, upskilling might mean learning how to write better prompts or debug AI-generated code.
AI's Effects: Leveling and Multiplier
00:01:43
Speaker
You'll also hear us talk about two effects of AI, the leveling effect and the multiplier effect.
00:01:50
Speaker
The leveling effect is when AI closes the gap between novices and experts, letting beginners perform close to expert level with help from the system. The multiplier effect is when AI amplifies the ability of experts even further. So the skilled get more skilled.
00:02:09
Speaker
Finally, keep an ear out for discussions about task composition, which tasks remain for humans once AI automates others, and learning by doing, a cognitive science principle that becomes tricky when machines are doing all the doing.
00:02:29
Speaker
All of this sets the stage for a deeper conversation about how AI is reshaping the workplace, higher education, and what it means to build expertise in a world where machines often do the first draft.
00:02:42
Speaker
All right, with all that said, let's get into the conversation.
Motivations and Challenges: AI's Impact on Skills
00:02:58
Speaker
All right, guys. So Sam and I talk about upskilling, actually not upskilling, de-skilling often on the show, but we had never done a deep dive on this. And obviously we're interested because this is one of the risks of AI that, at least in my estimation, doesn't get talked about enough.
00:03:18
Speaker
So let's start with this. What inspired you guys to take on this topic? What led you to this field of study?

Yeah, as you were saying, I think we found that there is a kind of tension between what we are already starting to learn about what people can do with AI, or what AI can now start to do without people, if you want.
00:03:43
Speaker
So, you know, on one hand you gain time, speed, efficiency by using AI. On the other hand, you have this other part. You know, it's very typical to say, for example, learning by doing, right? One way of learning is that while you are doing something, while you are performing activities, you are learning.
00:04:11
Speaker
But if AI is starting to work with us, taking over a section of our work, and sometimes even starting to substitute for us, what is happening to that learning by doing?
00:04:25
Speaker
If the doing is not done by us anymore but by a machine, you have this tension, which is also a tension in the timeline, because the advantages, the efficiency, are clearly short term, in the sense that as soon as you use AI, you can have better performance.
00:04:45
Speaker
So the advantages are immediate; they are there for you when you use AI. While the impact on the learning process is clearly long term: what is happening, what we will see in the next years, is not something you can see as soon as you start to use AI.
00:05:02
Speaker
So from this tension between the gains you can see immediately and the possible long-term impact on the learning process, that's where we got a little bit, you know, intellectually excited and wanted to investigate a little bit better.
Long-term Implications of AI on Skills and Education
00:05:21
Speaker
As you were mentioning, there are people and there are studies that are actually starting to show that the efficiency is there if you use AI, of course in some contexts and with many different limitations.
00:05:35
Speaker
But there is a little less work done on what the effects are in the long term, over the years. Plus, another reason to investigate this is that, well, as you mentioned, we are professors at the university, so we also want to understand how the learning process itself is evolving.
00:05:56
Speaker
Because one of our points is that we need to help students learn. But if the process of learning is changing, how do we have to reshape what we do daily in our work?
00:06:09
Speaker
Yeah, just on that last part, I teach statistics, and the tools can actually do my problem sets and midterms quite well. And so there's the dual fear that Francesco mentioned. One is, if the students are using the tools, are they having an opportunity to learn?
00:06:24
Speaker
But also the more reflective question of what it is they should be learning in this age.
00:06:33
Speaker
Yeah, so that's really interesting, the whole issue of the immediate advantage of using an AI system versus the long-term impacts, and how the benefit could be immediate.
00:06:49
Speaker
And the benefit could be immediate but actually smaller than the overall negative effect; yet because the negative effect comes later, you might not appreciate it as much. Anyway, it just seems like that
00:07:09
Speaker
timing issue could create a challenge for us in dealing properly with AI. But, cool. Anyway, I just want to ask another question. You know, your article has a lot to do with de-skilling and upskilling in the context of AI.
00:07:26
Speaker
How would you define, roughly speaking, de-skilling and upskilling?

I think to answer that, you really have to step back and ask, what is a skill? The definition that we're drawing on describes skills as basically the ability to do some task that has some kind of valued outcome. So there's that productivity part that we've been talking about already.
00:07:48
Speaker
So skills are productive of some kind of outcome, but they also have this property of being expandable, so you can become better at things over time. And that ties in with what we were talking about, the longer-term impacts, the expandability of skill. Another really interesting point about skills is that they are in many ways social.
00:08:09
Speaker
That is to say, what counts as a skill is not inherent but defined by the context you're working in, and similarly for what kinds of skills are valued.
00:08:19
Speaker
So then if you think about upskilling or de-skilling, you can look at that in a couple of different ways. One is just the particular individual: what kinds of skills they have, meaning what kinds of valued outcomes they're able to create.
00:08:33
Speaker
And so someone having more skills basically means that they can do more things, or the things they can do are more valued.
00:08:43
Speaker
And contrariwise, we can talk about the process of skill loss, where because you're not practicing something, you lose the ability to do it after some period of time. And there's a ton of nuance to that: how deep the skills you have are, how well established they are, the difference between someone who's just a beginner, who knows some of the rules but has trouble really putting them together, versus an expert who doesn't even think about it and can just do things quite automatically.
00:09:15
Speaker
But the other notion, and the notion that we really talk about in the paper, is the de-skilling or upskilling of a job. The job that someone does is a collection of tasks.
00:09:27
Speaker
Those tasks require different kinds of skills to accomplish them. So a high-skill job would be one where there's a bunch of things people do that require a high level of skill.
00:09:39
Speaker
And de-skilling then comes in when you start to use the machine to do some of those tasks. You're replacing some of the tasks the human would have done with machine performance, and the tasks that are left to the human are ones that don't require as much skill.
Historical Context: Technology's Impact on Skills
00:09:56
Speaker
And from the point of view of an employer, that might actually be quite desirable, because it means that you can get the work done, but you don't need the highly trained, likely more expensive people to work on it.
00:10:10
Speaker
You can instead have people with less skill, because the ability to do the task is augmented by the capabilities of the machine. So that then is the notion of upskilling and de-skilling that we're trying to get at in the paper: the upskilling and de-skilling of jobs in terms of the kinds of skills they require.
00:10:32
Speaker
Yeah, that's helpful, because normally when I think about de-skilling, I'm thinking about the erosion of skills in a person, thinking about a person potentially losing skills. An example of that might be a journalist who starts using AI and maybe gets worse at summarizing sources.

Sure. In fact, there's at least one published case example where
00:11:07
Speaker
a company bought an accounting package that did some kind of accounting procedure, which I don't know the details of. It had automated that process so that the accounting staff didn't have to do it.
00:11:19
Speaker
And then they switched packages to one that didn't have that feature and discovered that no one actually remembered how to do this thing, because for some period of time there had been no need to do it.
00:11:31
Speaker
And that could be the individuals themselves forgetting. I mean, in grade school I learned how to take square roots by hand. I do not remember how to take square roots by hand, because who does that anymore?
00:11:43
Speaker
But it could also be a compositional shift: the people who knew how to do something leave the company, and the new hires are not required to know it.
00:11:55
Speaker
And so you lose the skills that way. That kind of compositional shift is a big concern in pretty much any high-tech company: as your more senior employees retire, they take with them knowledge about how to do things, and if you're not careful, the company can literally forget how some important process is done.

I mean, another thing that I think is interesting about upskilling and de-skilling with AI is that, if you think about it, the concept and the risk of de-skilling is not something new, right? Think about airplane pilots with autopilots. They are actually doing one fourth of what pilots 50 years ago were doing, right? Yeah, or even less, actually.
00:12:43
Speaker
Yeah, probably even less. So it's not something new, but it's also true that AI has distinct characteristics: the speed of adoption, and the fact that it is, of course, affecting types of work that traditionally were less subject to technological substitution.
00:13:05
Speaker
And also, another interesting thing for me with AI has always been that right now, even right now, AI is something that is typically individually adopted.
00:13:18
Speaker
So it's a technology that ends up strongly embedded in organizational routines, but the early adoptions are always individual.
Industry Differences in AI Adoption
00:13:30
Speaker
I don't know if I'm able to explain this point clearly, but let's think about the pilots with airplanes. At a certain point the technology arrives, a company decides to install it in the airplanes, and it decides how to teach the pilots to use the new technology and how to keep them able to intervene when there is a need because the autopilot is not working. Right.
00:13:56
Speaker
While the way in which AI is typically adopted, not by everyone but typically, is that we individually, as people, use our own AI model, and we are actually replicating that use, that adoption, in our daily work.
00:14:13
Speaker
Of course, companies are slowly getting around to creating policies on how to use it, and so on.

Yeah, I think that's changing pretty quickly.

Yeah, it is changing, but is it changing quickly enough? Let's put it that way. I don't know. That's something that for me is still interesting.

But even when companies are doing the adoption, in industries that are less regulated than aviation, are they taking the time to think through the training? Because that is one of the things that is interesting about pilots: they may not be called upon to do many of these things daily, because the autopilot can do it.
00:14:52
Speaker
But there is a very, very strong interest in their ability to do that if the demand occurred. And so they're constantly training to do things which they may, in fact, never have to do while they're flying.
00:15:06
Speaker
Hopefully most people do not have an engine failure on their flight, but if it happens, the pilots know what to do. I don't think we can say the same thing about companies' adoption of automation in other settings: if the automation does not do what it's supposed to do, are there people who still have the skills to step in? It's not clear that people have thought through those kinds of demands.
00:15:31
Speaker
That's also important, right? Because you mentioned in your paper how it's important to keep in mind the nature of AI, or the type of AI we're using in our time, which is much more successful when it's dealing with cases that are not corner cases, as I think you call them, but typical cases that appear in its training data, in other words. So the point being, I think in every domain there are probably going to be cases that are
00:16:07
Speaker
more rare statistically, but there are going to be times when something extraordinary happens, and potentially the AI is not prepared to handle that extraordinary, outlier-type situation.
00:16:21
Speaker
And so it's imperative that the human in the loop is capable of dealing with it. Right. And so, I guess all I'm trying to say is that, when you think about the nature of generative AI,
00:16:37
Speaker
it's the case that we do need people to continue having skills in order to handle the corner cases, I guess, right?
Skill Inequality and AI's Role
00:16:48
Speaker
But it's also the case that...
00:16:50
Speaker
Sorry, go ahead. It would be fewer, right? It would just be fewer people that you would need. I mean, basically, things are going hunky-dory like 90% of the time, and for that 10%, or maybe even less, you would need fewer and fewer people with high skills to get those things done. Am I right about that?
00:17:07
Speaker
Yeah. So it's interesting because, as Francesco pointed out, de-skilling because of a technology is not a new thing. You can see articles written about this in the 1960s, when computers were first starting to be used for automation.
00:17:22
Speaker
And there was that paradoxical effect that it could de-skill a lot of jobs. So for example, you didn't need to know how to do manual accounting if the system had all the accounting rules built into it.
00:17:36
Speaker
And it simultaneously created a smaller number of jobs that had a much higher skill demand, because you now needed to be able to figure out what the computer was trying to do and what you needed to do to get it to do what you wanted it to do.
00:17:49
Speaker
And that required a very high level of expertise, right? Because you had to know what the entire process looked like, how it had gotten automated, and where all the different parts of the process were happening.
00:18:03
Speaker
So we do have that evidence in the historical record of simultaneously removing content from a lot of jobs, but then creating these much higher-end jobs that require a lot more skill.
00:18:19
Speaker
Yeah. So that would be like a widening of skill inequality, right?

Yeah. Well, but the other problem is that it's not clear where those fewer, higher-skilled people are going to come from, right?
00:18:35
Speaker
Because you're demanding a very different kind of skill mix. And so, again, we're thinking very carefully about that. The current study we have going on is studying programmers, who are interesting for a bunch of reasons that we may get into, but in particular people who develop software to support research.
00:18:57
Speaker
And they have exactly the issue that you're talking about, which is that the models have seen a ton of Python. They have seen very, very little Python code for simulating black hole collisions.
00:19:08
Speaker
So if you're in the business of building simulations of black holes, as some of the people on campus do, the models are not necessarily going to be right about the details of that kind of code.
00:19:22
Speaker
They may be right about a lot of other things, but not necessarily about the real nitty-gritty. And so, again, the ability of the human to look and say, okay, well, no, that's not what I'm trying to do,
00:19:36
Speaker
that is a particular high-level skill that is going to be really important.
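As a concrete, hypothetical illustration of that gap (this sketch is ours, not taken from the study, and the function names are made up): a model that has seen mountains of everyday Python will happily produce the generic textbook step below for an orbital simulation. It runs, but its energy slowly drifts, so the simulation degrades over long runs; a domain expert would know to ask for a structure-preserving scheme such as leapfrog instead.

    import numpy as np

    MU = 1.0  # gravitational parameter, illustrative units

    def euler_step(pos, vel, dt):
        # The kind of generic step a model trained on everyday Python tends to suggest:
        # it runs fine, but it is not energy-conserving, so long simulations drift.
        acc = -MU * pos / np.linalg.norm(pos) ** 3
        return pos + dt * vel, vel + dt * acc

    def leapfrog_step(pos, vel, dt):
        # What a domain expert would ask for instead: a symplectic (leapfrog) step,
        # which keeps long-running orbital simulations numerically stable.
        acc = -MU * pos / np.linalg.norm(pos) ** 3
        vel_half = vel + 0.5 * dt * acc
        pos_new = pos + dt * vel_half
        acc_new = -MU * pos_new / np.linalg.norm(pos_new) ** 3
        return pos_new, vel_half + 0.5 * dt * acc_new

Both functions run; only a reader who knows the domain can tell which one is actually fit for purpose, which is exactly the evaluation skill being described.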
00:19:44
Speaker
Just to round out this topic, we talked about de-skilling a bunch, but what would upskilling look like, just so that listeners have an idea of that?

So again, I think it depends on which level you're looking at. You can certainly think about individuals upskilling by learning to do new things and getting better at the things they already know how to do.
00:20:05
Speaker
And there's actually tons of psychological research about how people learn, so there's a very good evidence base for what you need to do to help people learn that way. If you talk about upskilling jobs, then the basic notion would be that you change the task composition in a way that the tasks now part of the job require a lot more skill to do.
00:20:29
Speaker
So you can certainly imagine that happening in the context of a given job. You could also imagine it happening through, as we were just discussing, the creation of new jobs that are now necessary and that are higher-skilled than the jobs you would have had before.
00:20:50
Speaker
Just so that I'm clear on this, so it really is just stick to the old principles, like repetition and association, the principles of learning from cognitive science, right? So if we were to use generative AI systems to teach ourselves like that, that would do the trick? That would count as upskilling?
00:21:15
Speaker
Well, that would be personal skill development, right? Yeah. And as you said, there's a good evidence base in cognitive science about how to do that. Although actually the theory that we draw on in the paper adds an interesting twist, which I think maybe doesn't get talked about a lot, which is the personal commitment to wanting to learn, the motivation to learn.
00:21:41
Speaker
And the author makes the point that if you are not personally committed to wanting to learn, then you're not going to get past a certain level of competence. So I think it's going to be interesting to see how that factor plays out, because if people are demotivated, because they look and say, hey, the machine can do all this stuff, why do I need to know all this stuff, they're never going to have the personal commitment to get good at it themselves.
00:22:08
Speaker
And somehow this is one of the risks that you already have at university, right? If you are a student and you are learning something that you fear, or are almost sure, the machine can do in half your time with more or less OK performance, are you really committed to learning how to do it?
Predictive Models of AI's Skill Impact
00:22:28
Speaker
And that's an open question. Probably we do not have an answer to that, but it has an impact on your learning process again, right? Just like the commitment Kevin was mentioning just now.
00:22:43
Speaker
Yeah, okay. We want to move a little bit towards the model that you present for predicting when a system will de-skill you or upskill you. But prior to that, just to set things up for the listeners, let's think about the three possible effects that the use of AI can have. We've already talked about them, but let's use some of the words, or the jargon, that you are employing here. So obviously, one effect might be that there is no effect whatsoever.
00:23:16
Speaker
All right, so that might happen, and that doesn't need much explanation, I suppose. Then there's also the leveling effect. That's when AI minimizes the importance of a human's knowledge for task performance.
00:23:31
Speaker
In effect, when a low-skilled person can do something that a medium- or high-skilled person can do, as long as they have the help of an AI. And then there's the multiplier effect. That is when the use of AI acts as a multiplier on the human's existing knowledge.
00:23:47
Speaker
So maybe tell us a little bit more about the leveling effect and why it might be a form of de-skilling.

Okay, sure.
00:23:59
Speaker
So as you actually synthesized it, the leveling effect is when, in a given situation, the performance gap between experts and novices is compressed because novices are supported by AI.
00:24:17
Speaker
OK, so you would expect the performance gap between novices and experts to be big, while novices using AI will have a performance that is much closer to the one experts can get. So that's our definition, basically, of the leveling effect. OK, so when you have the AI system, a machine, on top of the work of novices,
00:24:47
Speaker
our point is that it often gives a bigger boost to novices rather than experts. OK, why so? Because a novice can lean on the model for patterns, for templates, for first drafts, for solutions that they don't know yet.
00:25:07
Speaker
Okay, so they don't have the skills, the knowledge, to get to that solution, and the AI will provide that solution for them. Usually the benefit for experts in using AI, for a task or a problem that they know, is a matter of speed rather than understanding.
00:25:28
Speaker
So it's faster for me to run this thing with AI, but somehow I can solve it even without AI, while a novice maybe cannot get to the solution without AI, because they don't have the experience, the skill, to do it.
00:25:42
Speaker
So somehow the performance starts to look a little more similar between novices and experts. Okay? So the observable gap between them narrows. But what doesn't narrow, what is still different, is their knowledge, the underlying knowledge behind the process of achieving that performance. The learning trajectories of novices and experts are still different.
00:26:16
Speaker
So that's exactly the leveling effect. And to answer your second question, is this de-skilling: I guess the more correct answer is that it depends, because the leveling effect is not necessarily de-skilling, is not necessarily automatically bad. It can also be good, if it gives novices the experience of learning from what AI is helping them to do. So if
00:26:54
Speaker
using AI allows me to understand, to focus less on simple tasks, and it supports me in dealing with more complex tasks,
00:27:04
Speaker
and as a novice I'm still learning, well, potentially I'm improving my performance, but I'm not losing too much of the knowledge process, of the learning. On the other side, and this is connected with the topic that we were discussing earlier, right?
00:27:20
Speaker
On the other side, if I'm just using the AI to provide me a solution to a problem, and I simply apply that solution to the real world, I do not think about it, I do not learn from it, and so on, in that case I'm completely eroding the knowledge underlying my performance.
00:27:38
Speaker
In this case, I'm getting the performance, I'm using the leveling effect, I'm getting my performance closer to the expert's, but I'm still not learning. So next time, even if a similar problem arises, either I still use the AI, or I will still not be able, as a novice, to deal with it.
00:27:56
Speaker
Okay. So this is, I think, our first round of explanation. So the leveling effect can be good or bad according to how you are dealing with the process itself.
00:28:10
Speaker
Right, but I think in terms of the job, it would be de-skilling, because it means that the job doesn't require as much skill to do. In terms of the individual, as Francesco said, you could imagine it working out differently.
Social and Market Influences on Skill Valuation
00:28:24
Speaker
The one thing I would say is that it is very task dependent: what is the nature of the task and the demands of the task? And the example that we used in the paper a lot was a system that had been trained on many rounds of question and answer between users and tech support.
00:28:48
Speaker
It was a text-based support system, so the system could actually be reading the texts from the user and then suggesting to the tech support person: maybe this is the problem, maybe this will be useful.
00:29:04
Speaker
So what that meant was that the techs did not need to have their own knowledge about problems. They didn't need to know how to search the system. The system was basically prompting them with things that might be useful.
00:29:16
Speaker
So if you are already, probably not an expert, but a competent tech person, that's not going to be very useful to you, because you knew the answer. But if you're a novice, it could be extremely useful, right? Because now you can say, oh, well, you should try this. It's like you have a caddy with you at all times, just telling you what to do.
00:29:36
Speaker
Well, but even more so, right? Because actually, professional athletes are an interesting example of getting to that expert level, because one of the characteristics of an expert is that the performance is very automatic.
00:29:48
Speaker
You don't have to think about it, right? You just know what the right thing to do is. And that's the kind of thing where a really high-level golfer is not thinking about their swing. They just do it.
00:30:00
Speaker
So that probably would not be replaced by the caddy. But then maybe the things you're referring to are like, okay, how far away is the hole, and which way is the wind going, and things like that, where, yes, having somebody who is really skilled could help you out.

So maybe it's like a coach: whenever you're having trouble, they just do it for you.
00:30:22
Speaker
Well, but that's an interesting point, right? If you were trying to learn how to throw a basketball and the coach would be like, oh, you do it like this, see, there you go.
00:30:34
Speaker
Basket's in the hoop, we're done, right? That would be effective for getting basketballs into hoops, not so effective for learning how to play basketball.

And still there is only one Steph Curry.
00:30:46
Speaker
Yeah, that's right. And my analogies are failing me today. Oh man, Sam rescued me. Yeah. Well, no, it's super interesting, the whole leveling effect. So the leveling effect is when, thanks to AI, in the context of AI, the novice can achieve a performance that the expert normally achieves. And like you said, you would expect...

Probably not an expert, probably at least a competent person.
00:31:18
Speaker
Yeah, competent level. Okay, yeah. And then you can start thinking about, okay, what kinds of effects can you maybe predict from that. Obviously, there are positive effects, like a higher baseline, right? If the novices are already at a competent level more immediately, then overall there's going to be a higher baseline of performance, and I guess reduced training might be needed for certain jobs, but then
00:31:46
Speaker
there are also potential negative effects, like what you've been talking about; well, you mentioned de-skilling of the job itself, which I guess would be one potential negative effect.

Yeah, not paying people for more experience, so the base pay rate. You know, it's interesting, because the question of how you decide whether something is skilled or not skilled
00:32:16
Speaker
is, as I said, very socially determined. And so very often you use these external markers, like how much training you need to do the job; OK, so that's a more skilled job. Do you need specialized education?
00:32:30
Speaker
Are there a number of different skills that have to be used in combination? That, again, would suggest higher skilled. But there's also a very strong social aspect, because sometimes it's just how much people are willing to pay for it, which is not an inherent characteristic.
00:32:47
Speaker
And certain skills are routinely devalued because people just assume it doesn't take much know-how to do them. If you think about housework, for example, housework is almost always considered unskilled work, which overlooks the fact that it might actually take quite a lot of time and practice to be able to create these valued things,
00:33:13
Speaker
which are only valued in the household context, though, right? There's not a market for them, so they must therefore be low skill. Anyway, the question then, as you said, is: if the machine is able to do certain things, what happens to the demand for those skills? That is a question which is getting answered in the marketplace as we talk.
00:33:36
Speaker
Yeah, also because, building on what Kevin is saying about skills being very socially based, another way in which we evaluate how skilled someone is, or how skilled a job is, is relative: how many people have that skill?
00:33:53
Speaker
You know, and this is exactly the point Kevin was making. But again, if now, using AI, a lot of people can do that, then the relative part of the skill becomes a little tricky, at least to measure, because now everybody can do it.
00:34:10
Speaker
And again, going back to our main domain: doing exams now with students who can use AI. Okay, so what is the skill now? And is being able to use AI better than someone else part of the skill that we want to evaluate?
00:34:26
Speaker
That's kind of interesting; we have to start to think about it, you know.
00:34:31
Speaker
Maybe while we're on the topic of education, I can quickly ask this. So you did talk about prompting in your article, and you said that at least at this point... well, actually, I'll just let you talk about that first and I'll bring it back to education. What about prompting?
00:34:54
Speaker
If you are an expert prompter, does that mean that you will get a multiplier effect every single time you use AI? Tell us about that.

Right. So most of the cases that we talk about in the paper end up looking like de-skilling.
00:35:10
Speaker
But I think that was partly just the state of people's experience with the tools at that point. And subsequently, you can see studies where you actually do see more of a multiplier.
00:35:24
Speaker
And as I mentioned, we're studying programmers, and it does seem like, in programming at least, more experienced programmers get better results with the tools.
00:35:34
Speaker
So there's a lot of talk about vibe coding, which is, you know, anyone can sit down, type in a few prompts, and get a program. And so that's an example of the leveling effect that we were talking about.
00:35:47
Speaker
But it turns out to be a little bit more subtle, because for a more complicated kind of program, knowing what to ask for is a real skill.
00:35:59
Speaker
Being able to take this vision you have and actually imagine what it would look like as a running program, and then break that down into pieces that the model can do, that is a skill.
00:36:09
Speaker
And that is where we were talking about prompting. It's a skill which is not identical to traditional programming skills, but which is closely related in ways that we're still trying to puzzle out.
00:36:22
Speaker
There's this notion of computational thinking, and it seems like computational thinking is at least in part behind the ability to write better prompts and get better results from the tool.
00:36:34
Speaker
The second piece, though, is that, as we have discussed, the models are not perfect. In particular, in writing code, there are lots of cases where the code is just fine, and then there are lots of cases where the code is subtly wrong, or doesn't deal with some less common situation correctly, what are called corner cases, or where it uses a set of libraries that maybe aren't
00:37:05
Speaker
exactly the libraries you ought to be using in that context, or where it's introduced security problems. There's a study, actually, which has just come out, which is quite interesting. It was looking at adoption of a particular AI programming tool.
Human Oversight and New Skill Requirements with AI
00:37:20
Speaker
And in the month or two immediately after the adoption, the number of lines of code that the projects created jumped up quite dramatically, like 80%, which is that sort of immediate productivity gain that we've been talking about.
00:37:35
Speaker
But starting in month three, if I'm remembering the details correctly, those gains had sort of faded away. And what you also had was a big jump in the number of errors being detected in the code.
00:37:50
Speaker
So basically the models were creating very, very large amounts of code, but then the code had all of these bugs that the developers now needed to go through and root out.
00:38:03
Speaker
And so again, that art of looking at the code and saying, yes, this does what I want; or, well, it's not exactly what I want, but I get the idea; or, it's got all these bugs which I need to figure out. Again, that requires a lot of skill.
00:38:19
Speaker
And it isn't, again, exactly identical to the skill of writing code, but it's really a core set of skills for any developer, that sort of debugging. And so again, those are reasons why expert developers seem to get more out of it: they can very quickly evaluate the code and see whether it's what they want.
00:38:38
Speaker
They're more skilled at debugging. So that is an area where it does seem like there's a multiplier. Even though it initially looks like leveling, with the idea of vibe coding, it is still the case that experienced developers can do more with the tools.
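To make "subtly wrong" concrete, here is a minimal, hypothetical sketch (ours, not drawn from the paper or the study just mentioned; the function name and scenario are invented): code that behaves correctly on typical input but breaks on a less common case, which is exactly the kind of thing a more experienced developer spots and fixes faster.

    def average_response_time(times_ms):
        # Plausible model output: correct for the common case of a non-empty list...
        # ...but raises ZeroDivisionError for the corner case of an empty list.
        return sum(times_ms) / len(times_ms)

    def average_response_time_checked(times_ms):
        # What skilled review adds: handle the corner case explicitly.
        if not times_ms:
            return 0.0  # or raise a clearer, domain-appropriate error
        return sum(times_ms) / len(times_ms)

The generated version will pass a casual test; the missing corner case only shows up later, as one of the bugs that has to be rooted out.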
00:38:55
Speaker
So I'm wondering, could we put it in terms of when to predict which effect? Or let me put it this way: when to predict a multiplier effect, when to predict leveling, when to predict no effect.
00:39:16
Speaker
The example you're just giving is that you should predict there's going to be a multiplier effect if, after automation, the remaining tasks are still cognitively demanding, as in that coding example. Right? I'm trying to think of other cases, but anyway, yeah.

So I think that is an important piece of it, the remaining tasks. Because again, the notion of upskilling or de-skilling with the technology is that the tools are automating a certain set of tasks, and then we take a look at the tasks that are still left for the human to perform.
00:39:52
Speaker
And so if they're high-skill tasks, then there's going to be that kind of upskilling, which means you would expect that a higher-skilled person would be able to do better.
00:40:04
Speaker
But the other piece is that there are additional tasks getting created because of the use of the tool. And the ones that we are focusing on in a paper which we're just wrapping up now, and whose seeds are in the paper that you were reading, are what we've been talking about:
00:40:21
Speaker
prompting to get the tools to do what you want, and then evaluation and integration of the outputs from the tools. And so some of the leveling effect examples are cases where prompting and evaluation are very trivial.
00:40:37
Speaker
With the tech support people I mentioned, the prompting is actually the system just listening to the chats, right, so it's not something the tech support people have to do.
00:40:47
Speaker
And the evaluation is basically, here's a suggested document that might have the answer, which you can then give to the customer, and the customer is the one who evaluates. So those skills, therefore, are not really needed, and so we see leveling.
00:41:03
Speaker
And then with the programmers, as we've been talking about, you can do a simple prompt pretty easily, but doing something more sophisticated requires actual skills that are somehow related to programming, and the evaluation and use of the code definitely require those skills.
00:41:23
Speaker
And so in that kind of context, you see the multiplier. So that would be our prediction: take a look at the content of the remaining tasks, but also how hard it is to prompt well and how much evaluation of the output is needed.
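As a purely illustrative toy (our own encoding of the heuristic just described, not a model from the paper), the prediction could be sketched roughly like this: if the remaining tasks, the prompting, or the evaluation of the output still demand real skill, expect a multiplier; if all three are trivial, expect leveling.

    def predict_effect(remaining_tasks_demanding, prompting_demanding, evaluation_demanding):
        # Toy heuristic only: substantial skill demands left for the human point
        # toward a multiplier effect; trivial demands point toward leveling.
        if remaining_tasks_demanding or prompting_demanding or evaluation_demanding:
            return "multiplier"
        return "leveling"

    # The two cases discussed, as characterized by the guests:
    print(predict_effect(False, False, False))  # tech support with AI suggestions -> "leveling"
    print(predict_effect(True, True, True))     # research programming with AI     -> "multiplier"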
00:41:37
Speaker
So in some cases, the AI will, let's say, absorb a bunch of tasks, and then the remaining tasks, which were there before the AI, happen to be cognitively demanding, and thus we still have a multiplier effect. So maybe the AI will screen leads for a salesperson. That's taking away some tasks, but maybe the remaining task is closing the sale.
00:42:14
Speaker
That remaining task was there before the AI, but it is also cognitively demanding, high skill. And so maybe that's where you can predict a multiplier. But then you're saying that in other cases, introducing the AI introduces a new task that wasn't there before and that requires high skill. It seems like that might occur when the AI output is maybe imperfect or slightly ambiguous.

It can be that, but again, in the example you were making, assisting a salesperson, it's also possible, just speaking from a theoretical point of view, that
00:43:03
Speaker
the salesperson now will receive a batch of information, in the form of, I don't know, statistics or best possible scenarios, that they didn't get before.
00:43:14
Speaker
Maybe the good salesperson had that embedded in themselves and didn't even know it. Right. But now the new salesperson needs to be able to read it, to make sense of the suggestion that the AI is giving them.
00:43:28
Speaker
So maybe you will need to be able to read a graph, something that until five years ago you didn't need. So it's possible that you have other tasks that come from the corner cases, things that AI by itself is not able to substitute for.
00:43:47
Speaker
That's definitely a yes. But it's also possible that you have other tasks that were not there at all and that now have to be taken on by someone. Okay, go, Kevin.

The salesperson is an interesting one also. That is actually one of the examples in the earlier paper.
00:44:07
Speaker
And part of the reason I think it's interesting is because the skill of closing a sale is not necessarily a cognitive skill, right? It's a very social skill. Will AI get good at convincing people to buy things? I'm not going to say it won't, but it's a very different kind of skill than any of the ones we've been talking about. Much more...

At the empathic level.
00:44:35
Speaker
Yeah. But we have also had this discussion several times. When you ask students especially what jobs AI will never substitute for, one of the things they always say is taking care of people, something like that. But then you have evidence, studies, especially from Japan and so on, where you have robots that are AI-based,
00:45:00
Speaker
and patients say, we love these robots, they never get tired of us, they are never rude to us, and so on. So you present this evidence, and of course, as Kevin was saying, it's impossible to predict exactly what is going to happen. But this idea that the empathic level is only human is something that I think we have to be a little careful in evaluating, because I'm not so sure.
AI's Role in Altering Job Dynamics
00:45:29
Speaker
And the other thing is just this: we also have to be ready for another thing, because one possible scenario is that
00:45:37
Speaker
AI agents will start to act also as the customer. I want to build my own AI that every morning scans all the utility offers and then signs the best deal possible. So I don't want to talk to a salesperson anymore. I want my AI scanning every offer for my utility contract every morning, and every time it finds a better one than I have, sign it.
00:46:08
Speaker
Fascinating. Yeah, so one marker, and it's kind of obvious, but just to emphasize it: when AI can substitute for the core expert skill, then you can predict a leveling effect, of course. And then the question is... is that right? Did I get that right, Francesco? No?
00:46:34
Speaker
Well, I don't know if it has to substitute for the only...

Not only that, but I'm thinking that would be one instance where you might want to predict there's going to be leveling: if the AI can actually substitute for the skill.
00:46:49
Speaker
Yeah, but again, I think it is important to keep in mind that skills come in different degrees, different levels, and the AI can probably, in a lot of these cases, substitute for a competent person.
00:47:05
Speaker
And it may be that that's all you need, right? You don't really care that you have the most expert tech person; you just need an answer. And so a competent person is fine.
00:47:17
Speaker
And so a competent AI is fine. I think actually that will be interesting, talking about skills being socially determined: at what point do companies say, eh, it's good enough?
00:47:29
Speaker
Right, the AI doesn't do as good a job writing ad copy as our best ad copy person, but it does as good a job as an average person, and it does it in a few seconds for a few dollars. And so, yeah, it's good enough.
00:47:49
Speaker
I want to touch on the education angle a bit, because this is what literally keeps me up; I'm not sure what to teach to prepare my students for whatever is coming ahead. If I had to just spitball here, and you guys can tell me, you're the experts, right, so you tell me if and how I'm wrong, it seems like there's a bunch of high cognitive skills that will be very useful in the emerging economy, especially as new AI models come about.
00:48:29
Speaker
And so what you're at risk of is that if you use these things unintentionally and mindlessly, you will get the leveling effect. So you perform great, but your pay will not be very good, because you're actually low skill and you're only at the level of a competent employee because you have AI. But if you have a lot of high cognitive skills, then you will potentially be able to get the multiplier effect. So it almost seems like what I should tell my students to do is to use AI to learn a bunch of skills. I mean, if you're trying to train to be a salesperson, get the AI to role-play with you and challenge you so you can see every scenario, or you can use it to multiply your capacity for
00:49:23
Speaker
reading graphs, and maybe do some deliberate practice with it, like, you should get me to master this skill. How does that land for you guys?
00:49:35
Speaker
So, right. This question of what students should be learning, I don't have an answer for it either. I worry about the same things. And part of the reason I think it's quite problematic is that, you know,
00:49:53
Speaker
skills do form a hierarchy. There are more basic skills on which higher-level skills are built. And so, on the one hand, if you want people to develop these higher-level skills, and there is this learning hierarchy, then they personally have to learn the lower-level skills.
00:50:14
Speaker
The complication is that I don't think we have a very clear accounting of that. So for example, I teach statistics: how much of the real underlying mathematics do you need to know to be able to do statistics? Different people would have different opinions about that, right? Some would say it's absolutely critical that you know the equations and how everything's derived. And others would say, well, as long as you know why you're doing it, that's fine.
00:50:45
Speaker
But I can definitely see the students taking the shortcut that you were mentioning, which is: here is an assigned problem, and you plonk it into your favorite model and it tells you exactly what to do, and so you can do it, right? So you have got that skill from the system without having to have it personally. And for me, that is in a way very self-defeating, for the reason that you mentioned, which is that
00:51:16
Speaker
if your skill is cutting and pasting things into Claude, then that's not much of a skill. So why would an employer pay you to do that, as opposed to just cutting and pasting things into Claude themselves?
00:51:36
Speaker
Yeah, so actually I'm a little unclear where I'm going with this. I think the notion that we have to teach students how to do things with AI, as opposed to in opposition to it, is correct. But exactly what do you need to know to be able to use it well? How much of what it's doing do you have to personally know how to do? I think that is going to be so task dependent. And I think basically we're all trying to work it out.

Yeah, I don't think that so far there is a settled and proven answer to that. So we are all at the same level, talking about levels here. I think that one thing that clearly should be avoided, as you were both mentioning, is using AI just as something where
00:52:28
Speaker
afterwards you just approve it or not and leave it as it is, without thinking, and so on. Definitely this doesn't help you in any way, right? Maybe it will help you pass the exam if the professor is not very careful, but that's it. The basic meta-skills, if we want to start talking about meta-skills, are about being able to frame the problem: given a situation, you need to know what the main elements are that you need to consider to solve it. And then the ability to make sense of the answer.
00:53:05
Speaker
You get this answer, and if you don't just cut and paste but try to make sense of it, maybe it helps. A form of reflexivity, if you want. Those are all things that will definitely help you.
00:53:18
Speaker
Whether those are going to be enough, I don't know, honestly, I don't know. But the point is to work with the problem and with the AI in framing, in making sense, and not just simply to approve or not approve, because approve-or-not-approve, as Kevin was saying, is a clear path towards unemployment, at least.
00:53:43
Speaker
Unemployment, anyway. The one thing that does occur to me is that, in teaching intro courses, since the focus has really been on basic skill acquisition, we tend to give students these very, very well-structured problems and try to smooth the path as much as possible, so that they can focus on developing the skill as opposed to trying to figure out what it is they're being asked to do.
00:54:15
Speaker
And I am wondering now whether that is a good path going forward, because if you smooth the path for the student, you also make it very amenable to automation.
00:54:28
Speaker
And so I think students given very simple things to do will say, that's a simple thing to do, and look, the model can do it, so why do I need to be involved in this? And so I don't know exactly what the right path looks like. The point you mentioned about using the models to challenge you and to explain things, as opposed to doing things, I think is very well taken.
00:54:53
Speaker
And yet at the same time, we know from cognitive science that a certain amount of repetition of basic skills is necessary to master them and make them automatic.
00:55:05
Speaker
So I'm left with the challenge, which is that if I make things too easy, then the students rightly say that is something the model can do, and so it should just do it.
00:55:20
Speaker
And yet I want them to practice those things at some level so that they actually develop the mastery. So anyway, what I'm thinking is that I need to assign things that are much less structured, where part of the goal is exactly what Francesco was saying: you need to figure out what it is that you have to do to solve this problem.
00:55:43
Speaker
And then maybe use the tools to help you get through some of the details. But that fundamental skill of
00:55:55
Speaker
understanding problems, knowing what a solution would look like, being able to tell whether what you're given is actually going to be useful, it seems like you really do have to double down on that.
00:56:06
Speaker
So this is really interesting. I mean, it seems like, Francesco, you're basically saying there are different ways of engaging with AI, and you can rank them in terms of what is going to be
00:56:21
Speaker
worse or better, I guess, in terms of learning. And obviously, if we're just talking about passive acceptance, where you just take the AI's output as it is and copy and paste it, that's going to be probably the worst; maybe a little bit better is some shallow editing. And then, as you started saying, Francesco, the best is going to include sense-making, or treating the AI as a partner. I'm just thinking about where you compare the AI's output to what you are yourself thinking, like, this is my argument, this is how I'm approaching
00:57:05
Speaker
the nature of justice, and comparing it to what it is saying about justice. Yeah, sorry. Go ahead.

No, no, it's perfect. But actually, the one thing I was going to say is that you have put your finger on the tension which underlies this question about skills, which is that skills have a productivity aspect, being able to do things, and they have an expandability aspect.
00:57:26
Speaker
And so the things that we're talking about for expandability perhaps come at the expense of productivity. The things that would enhance productivity perhaps come at the expense of things that would enhance expandability.
00:57:41
Speaker
And so I think that's something individuals have to grapple with: am I trying to learn this, or am I just trying to get something done? I think companies have to grapple with that too: am I trying to develop employees, or am I just trying to get the work done?
00:57:54
Speaker
And the answer will always be, well, you've got to get the work done. But I think as a society, we're also going to have that challenge, which is that people still need to learn stuff.
00:58:05
Speaker
And if companies aren't willing to do it, then, okay, where is that learning going to happen? And I think, if you look at Germany, with a strong history of apprenticeships, they may have a very different
Path from Novice to Expert in the Age of AI
00:58:20
Speaker
answer to that. It's like, well, we're used to, as a society, paying for people to be apprentices so they can learn
00:58:26
Speaker
important skills, and this is just going to be an extension of that. I think in the US it's maybe a little more short term. But I'm sorry, Francesco, you were trying to say?

No, no, it's interesting. I have at least two points here. The first one, as you were saying, is basically one of the points that we actually discuss in the paper: how can a novice become an expert now?
00:58:55
Speaker
Because that's a crucial point. If the novice doesn't do things to learn, because AI is doing those things, how will they become an expert in 10 years?
00:59:07
Speaker
Right? In order to be able to do a report for a company, you are at a consulting firm, you have read 300 reports that have been done for past clients, and someone asks you to do the same thing now for a new customer, right?
00:59:25
Speaker
And since you have learned from so many of them, you reproduce the same structure. And then, once you have done hundreds of reports yourself, you become an expert, and in 10 years you will tell someone to start doing the report for the next customer, and you will be able to evaluate whether the report they produce is good or not.
00:59:47
Speaker
But if you've never done those hundreds of reports, how can you become an expert? And that points to a skill, I think, a quality, that we could have a whole discussion about here, and that is going to be extremely important with AI.
01:00:01
Speaker
That is the ability to verify an AI's answers. And that's where experts today have a huge advantage over novices, because experts right now can evaluate whether an AI's answer in their field is good or bad. And even in ten years they will still be able to, because they have the expertise built by doing over the last 20 years that allows them to verify whether that answer is good or bad, or at least to know whether it's acceptable or not.
01:00:33
Speaker
But a novice now will have much less experience, and as Kevin was saying, repeating a task, repeating something, is important to build knowledge, skills, and so on. If they don't have that, how do they become experts? That, I think, is really a good question that should be addressed somehow.
01:00:54
Speaker
Can I propose for a second that we all imagine we're Plato's philosopher kings and have totalitarian control? Would it make sense, and there are two domains here, in education, to literally not have kids use AI for like 12 years, all of grade school, so they learn those core skills?
01:01:17
Speaker
And then they're allowed to use AI. And the same thing for employees, right? You get hired into a firm, and for the first, I don't know what the minimum is, five years, no AI. And then after that, once you've mastered the requisite skills, you can use it.
01:01:31
Speaker
Well, you know, we've had this debate before, when calculators became cheap enough that you could start using them in schoolrooms. And the question was, if kids can type stuff into calculators, how are they going to learn to do math?
01:01:46
Speaker
And I think it was not quite as draconian as no calculators for the first 12 years, but there was sort of a period where it's like, okay, you're going to learn how to add numbers yourself, even though starting in next year's class we're going to let you use the calculator. So you could try saying, we want you to learn how to write a paragraph, so you have to do it yourself, you can't use AI. I think
01:02:15
Speaker
the AI case is maybe a little less obvious than the calculator case, but that might work. The problem, though, at the company level is that companies are not going to want to hire people for five years who don't use AI.
01:02:36
Speaker
And so I think that's the societal investment question we were talking about: if companies aren't going to want to hire entry-level people to do things that they perceive the AI can do well enough, then what is the path for people to get those skills?
01:02:57
Speaker
And I think that's a question for us in universities. If employers want our students to have two years of experience, how do you make it look like they have two years of experience when they graduate? Because if no one's hiring brand-new grads,
01:03:12
Speaker
there are never going to be any people with two years of experience two years from now. But I think that's something we can also think about: putting in place incentives for companies to actually keep investing in employee training, which I think smart companies do, but it's easy to be much more short-term.
01:03:34
Speaker
Yeah, but that's actually, again, another tension here, and the hypothetical scenario of Roberto is going there too.
01:03:50
Speaker
Incentives for companies are really incentives for the CEO, for the board, for the managers. And those incentives usually run three, five, ten years, you know, in that... No, six months.
01:04:02
Speaker
Okay. I mean, I was trying to be positive; I'm still European, I'm trying to take a much broader view. But in any case, it's very short-term, right? In the short term, the only thing that honestly, rationally matters is how much less can I pay, or how many fewer people can I hire, in order to keep the same level of performance without it decreasing.
01:04:28
Speaker
And it's purely rational. I mean, there is nothing bad in this, but that's the interest in that scenario. For society, I still hope that the objectives we have are a little bit different. And that's why in elementary school the kids still learn to do sums by hand, they still learn to write by hand, even though everyone has a tablet or a computer right now, right? Because we try to give them a different kind of skill.
01:04:59
Speaker
But I don't think that we can put society and companies on the same level from a motivational point of view. And we definitely cannot put the interests of the individual worker on the same level as the company, and maybe even society, right now. So that's something that becomes a little bit interesting, because again there is a tension that should be resolved somehow.
01:05:22
Speaker
But somehow there is a tension. I was reading just this morning, and I talked about it with Kevin this morning, that the first studies are coming out. Of course, they are still early studies. But the study that I read this morning shows that there is already an impact of 5% on wages, on
AI's Current Impact on Employment and Skill Decisions
01:05:44
Speaker
salaries, a decrease in salaries in AI-dominated sectors. And they even found that there is a bit of a contraction in hiring, especially of novices and, let's say, mid-level people.
01:06:06
Speaker
And probably the easiest explanation, which is in line with our model, is that, well, you need fewer juniors because now one junior can do the work of five juniors. So while maybe you still need more or less the same number of experts, at least so far, I don't know, in ten years we will see.
01:06:26
Speaker
But right now, one junior with AI can definitely do the work of ten juniors in some specific tasks, in some specific contexts. Okay, so I will hire one junior who will use AI.
01:06:41
Speaker
Gentlemen, I'm mindful of the time now. Do you mind if I give you one rapid-fire question? You can give me a 30-second answer. You'll love this one, don't worry. We've been talking about the firm level, the education level.
01:06:54
Speaker
What do you do personally to not de-skill yourself?
01:07:00
Speaker
So if I can start, one thing is that I think hard about which skills are important to me and which ones I'm willing to offload. And for the things I'm willing to offload, I just use the tool, because I do not have a personal commitment to becoming better at them.
01:07:18
Speaker
So, for example, writing very, very basic R code to manipulate a data set, that's fine. The tool can do that.
01:07:29
Speaker
Deciding which analysis I want to do, well, that I want to keep a little bit closer to myself. I decide which hypotheses I want to test and which theoretical models I want to use first, and then I'll bounce it off the model to see if I'm overlooking something. But that's a skill which is core to being a researcher and which I'm not going to give up. So that would be my answer: think carefully about what matters to you, and then make sure that you're actually doing the things that matter.
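[Editor's note: to make the distinction concrete, the kind of "very basic R code to manipulate a data set" one might happily delegate could look something like the sketch below. This is only an illustrative example; the file name and column names are invented, not taken from the speakers or their paper.]

```r
# Hypothetical example of routine data wrangling that could be offloaded
# to an AI assistant: the researcher keeps the decision of WHICH analysis
# to run, while this mechanical reshaping is delegated.
library(dplyr)

survey <- read.csv("responses.csv")                    # assumed input file

summary_by_group <- survey %>%
  filter(!is.na(score)) %>%                            # drop incomplete rows
  mutate(score_z = as.numeric(scale(score))) %>%       # standardize the outcome
  group_by(condition) %>%                              # assumed grouping column
  summarise(mean_score = mean(score), n = n())

print(summary_by_group)
```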
01:08:05
Speaker
Yeah, I mean, I think it's a good answer. I'm a human being; I love to use AI for standard repetitive bureaucratic tasks. I love it. I mean, half of my writing time on basic emails about basic stuff is handled like this now. And that's good.
01:08:26
Speaker
And we have discussed this with Kevin many times. Again, it comes back to my previous point: more or less everything that I can easily verify, I'm willing to let AI do for me, because I can verify it, right?
01:08:47
Speaker
Where I'm less certain is exactly what Kevin was saying. If I pick a research question, if I have to pick something that I then have to work on in order to understand better, I'm a little bit less willing to give that to the machine.
01:09:07
Speaker
And even if I do, I do what we very often do together, again with Kevin: we take the problem, we first produce our own answer, and then maybe we go to the AI, and then we compare it with our own answer and say, oh, actually 90% is okay.
01:09:22
Speaker
This part is wrong. Actually, that gives me a good idea here. But for those tasks where I want to retain knowledge and experience, I first provide my own answer and only later go to the AI. And again, I think this approach can work if you are an expert in the field, or at least knowledgeable enough. If you are a novice, you cannot do this, because you do not have any tools to do it.
01:09:49
Speaker
And so you rely completely on the machine. And that's where the difference starts.
01:09:55
Speaker
Kevin and Francesco, you've given us a lot to think about. Thanks for coming on the show. Thank you for inviting us. Thanks again for having us.