
Preparing for an AI Economy (with Daniel Susskind)

Future of Life Institute Podcast

On this episode, Daniel Susskind joins me to discuss disagreements between AI researchers and economists, how we can best measure AI’s economic impact, how human values can influence economic outcomes, what meaningful work will remain for humans in the future, the role of commercial incentives in AI development, and the future of education.  

You can learn more about Daniel's work here: https://www.danielsusskind.com  

Timestamps:  

00:00:00 Preview and intro  

00:03:19 AI researchers versus economists  

00:10:39 Measuring AI's economic effects  

00:16:19 Can AI be steered in positive directions?  

00:22:10 Human values and economic outcomes 

00:28:21 What will remain for people to do?  

00:44:58 Commercial incentives in AI 

00:50:38 Will education move towards general skills? 

00:58:46 Lessons for parents

Transcript

Preparing Children for Future Careers

00:00:00
Speaker
Wherever I am, whoever I'm talking to, the question I get asked the most is always the same, which is what on earth should my children do? It's far better to think of ourselves at sea on a little boat and we can pull up our sail and go faster or put it down and go slower. But we also have a huge amount of discretion over the kind of direction of technological progress as well. AI is not the same thing as social media.
00:00:22
Speaker
If we bundle technology into a kind of monolithic, indivisible lump of bad stuff, parents are going to let down their kids in preparing them to use these technologies. One of the reasons I'm hopeful about the future is because of the possibilities of AI, particularly in the educational setting. I think we can, if we get it right, use it to do really extraordinary things.

Introducing Daniel Susskind and His Insights

00:00:48
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker, and I'm here with Daniel Susskind. Daniel, welcome to the podcast. Pleasure to be with you. Thanks for having me. Let's start with hearing a little bit about you and your career. Could you quickly introduce yourself?
00:01:02
Speaker
Sure. So I'm Daniel Susskind. I'm an economist and a writer. My interest is really the impact of technology, and particularly AI, on work and society.
00:01:17
Speaker
I have been exploring this issue for the last 10 or 15 years, and I've written three main books on it. Back in 2015, a book called The Future of the Professions, which looked at the impact of technology and AI on white-collar workers in particular.
00:01:42
Speaker
Then in 2020, a book called A World Without Work, which looked more generally at the impact of technology on the world of work. And then last year, a book called Growth: A Reckoning, which was a broader look at the sorts of technologies we develop in society, and the tension between the fact that technological progress and growth are associated with almost every measure of human flourishing, and yet also seemingly responsible for many of our greatest challenges.

AI's Economic Impact: Economists vs AI Researchers

00:02:10
Speaker
And then you have an upcoming book about the future of work for our children. Maybe say a little bit about that. The observation I make is that this work has taken me all around the world; I've spoken to thousands of organizations and hundreds of thousands of people. And yet wherever I am and whoever I'm talking to, the question I get asked the most is always the same, which is: what on earth should my children do?
00:02:37
Speaker
And there's always a kind of frustration, because you get asked that question and you have a couple of minutes to answer it. I think everyone leaves that interaction feeling a bit disappointed. The person who asked the question feels the answer was a bit shallow; I leave feeling I had so much more to say. And so I wanted to write a book exploring exactly this. The new book is exactly that: what should my children do?
00:03:07
Speaker
How to flourish in the age of AI, drawing on all the thinking, conversations, and experience of the last decade and a half. That's great. And we're going to talk about that book in this conversation as well.
00:03:19
Speaker
But I want to start in a different place. On this podcast, I interview a lot of AI researchers, and they have a perspective on AI and its economic impact that differs from how economists tend to think about it. How do you think economists and AI researchers disagree on the future of AI and its economic impact?
00:03:42
Speaker
I think it's changed a lot in the last decade or so.
00:03:48
Speaker
The issue I really cut my intellectual teeth on at the beginning was a sense that economists were systematically underestimating the capabilities of technology. This was the observation I was making back in 2010 or 2011: the governing idea in the economic literature was that machines can perform routine tasks and activities, but they can't perform non-routine ones, things that require faculties like creativity and judgment and empathy.
00:04:20
Speaker
And yet what we could see, even back then, was that gradually but pretty relentlessly, more and more non-routine tasks were being taken on by the latest technologies, and eventually by AI. And I was interested in why economists were making this systematic mistake in thinking about the capabilities of technology.
00:04:45
Speaker
What emerged from that was a realization that economists were using quite an old-fashioned conception of how machines and technology work.
00:05:01
Speaker
A view that if you wanted to automate a task, you had to sit down with a human being, get them to articulate how they perform the task, and then write a set of instructions for a machine to follow. And that was true decades ago,
00:05:17
Speaker
when people in AI were working on expert systems and it was all very top-down. And it's true that back then, if a human being couldn't explain the particular rules or reasoning processes they went through, it was very hard to see how you might automate a task. But of course, what's happened in the decades since is that machines don't have to follow explicit instructions articulated from the top down by human beings. We make medical diagnoses now through AI not by trying to copy the particular rules a doctor follows, set down for the system to follow, but by learning from the bottom up, through lots

AI in the Workforce: Complement or Substitute?

00:06:00
Speaker
of data.
00:06:01
Speaker
And so I think this was the mistake economists were making: not quite appreciating how these new technologies were working, and what that meant for the traditional boundaries they'd drawn between what tasks machines could and could not do.
00:06:27
Speaker
For computer scientists, I think the mistake was
00:06:33
Speaker
in the way in which they talked about the impact of technology on work. You still see this to some extent today, but less than you did. Ten years ago, many people in AI and computer science were
00:06:48
Speaker
neglecting a fundamental realization that economists had made in thinking about the impact of technology on work: that technology can have two very different impacts on work.
00:06:59
Speaker
On the one hand, it can substitute for human workers, displacing them from particular tasks and activities. Those are the sorts of examples that capture our imagination and capture the headlines in the popular press: the moment a machine outperforms a doctor or outperforms a human driver, wherever it might be. So there's that harmful effect of technology on work, and I think that's what many technologists were focused on. But there was also a second, far more helpful effect of technology on work, which is that it could complement workers as well. It could increase the demand for human beings to do tasks that hadn't yet been automated.
00:07:40
Speaker
And the way in which that process worked was far more subtle. The actual impact of technology on work depended upon the battle between these two forces: a harmful substituting force and a helpful complementing force. A decade or so ago, computer scientists, AI researchers, technologists in general were quite bad at recognizing the dual nature of the impact of technology on work. So in short, I think economists have learned a lot in the last few years by understanding far more about how these technologies work and what that means for their capabilities. Similarly, computer scientists have learned a lot about the different ways these technologies can actually affect the work that people do, and as a result, the kind of
00:08:31
Speaker
indeterminate, uncertain aggregate effect that these technologies can have on work. It's not as straightforward as focusing on the cinematic substitution effects. One complaint I hear from economists is also that computer scientists and AI researchers in general underestimate how much time and effort is involved in implementing these technologies in existing companies and workflows, and how long it takes for these technologies to diffuse through the economy in general.
00:09:03
Speaker
So demonstrating something in a controlled lab setting or a test environment is very different from it being implemented in the economy. Do you think that's true? Because you can also make the opposite case: something like coding, for example, is ready to be implemented basically the moment it's created.
00:09:27
Speaker
So I think what you're describing is really important. It's important for explaining one of the big puzzles about technology and economics, which is that, anecdotally, we appear to be surrounded almost every day by stories of technologies taking on tasks that we thought only human beings could do. Remarkable stories. And yet when you look at the productivity statistics, there's that famous line economists have repeated over the decades: you see technology everywhere apart from in the productivity statistics. And one of the really compelling explanations for that is exactly, as you say, the lag
00:10:08
Speaker
between the invention, the innovation, and the amount of time it takes for these technologies to actually find practical use and then diffuse through the economy.
00:10:23
Speaker
And you saw that, say, in the Industrial Revolution. You have this wave of extraordinary technologies around 1780: the spinning jenny, the roller spinner, the power loom, all of these. And yet it takes many decades for the effects of those technologies to start to appear in the productivity statistics.
00:10:40
Speaker
And is productivity the most important metric? Or

Tracking AI's Economic Influence

00:10:43
Speaker
what should we be measuring? What should we be noticing? Is it GDP? Is it productivity? Is it the unemployment rate? What is most important for us to notice when we want to keep track of AI?
00:10:55
Speaker
Thinking about Britain today, sitting here as I look around: public services backlogged and broken; average real wages that haven't really risen for 16 or 17 years, the worst run since the Napoleonic Wars; worklessness rising. There are very few problems in the British economy that would not be solved by more productivity growth.
00:11:23
Speaker
And it's true more generally that almost every measure of human flourishing is associated with a growing economy, and a growing economy requires productivity growth. So in thinking about the benefits of technological progress,
00:11:44
Speaker
productivity is really, really important. That's one measure we ought to be paying attention to,
00:12:00
Speaker
and we should also try to understand the limits of the ways in which we measure productivity, and this productivity paradox: that we see all these technologies around us, and yet they don't seem to be making a difference in the way we traditionally measure them, which is the productivity statistics.
00:12:19
Speaker
But I also think you're exactly right to point to unemployment. A running theme through my work is that we're just not taking the impact of these technologies on the work that we do seriously enough.
00:12:36
Speaker
And as a result, this idea that I write a lot about, technological unemployment, is one of the things I'm paying particularly close attention to as well.
00:12:52
Speaker
It's worth saying, though, and it's quite important, that particularly in the shorter run, it's unlikely we're going to see mass pools of unemployed people due to technological change. It's far more likely that what we'll see, and to some extent I think we already do, is technology affecting not the quantity of work but the quality of work; not the number of jobs but the nature of the jobs that are out there.
00:13:14
Speaker
And in many respects, whether it's the pay of the work that's available or any of the other dimensions along which the quality of work might run, the way those are being either improved or degraded by technological progress is really important.
00:13:32
Speaker
Is there a way for us to capture the productivity without getting the unemployment? That is, is there a way for us to steer towards technologies, especially AI, that complement labor as opposed to replacing workers?
00:13:48
Speaker
Yeah, this is one of the big hopes, and I think the general philosophy is right. Very often when policymakers and politicians talk about technological progress, the metaphor they have in mind is that they're like a train driver sitting in a train: they can push down on the throttle and speed up and get more technological progress, or pull back on the throttle and slow down and get
00:14:16
Speaker
less technological progress, but their direction of technological travel is fixed by the rails set down for them to trundle along.
00:14:28
Speaker
I just don't think that's the right metaphor. On that view, in a sense, the only question that matters is whether we want more or less technological progress, and that's not right. This is a big argument of my most recent book:
00:14:41
Speaker
that actually, a far better metaphor is a nautical one. It's far better to think of ourselves at sea on a little boat: we can pull up our sail and go faster, or put it down and go slower. But we also have a huge amount of discretion over the direction of technological progress as well, the nature of technological progress.
00:15:00
Speaker
And I think that's true of the impact of technology on work as well. Just to go back to that earlier distinction: technology can have very different effects on work. It can complement, it can substitute; it can reduce the demand for the work that human beings do, or it can increase it. And that characteristic of technology isn't fixed.
00:15:23
Speaker
It can be shaped by the incentives we create in the economy. Just one example: it's really interesting that in every year since 1981, the effective US tax rate on hiring a human worker has essentially been higher than on using a machine.
00:15:46
Speaker
In other words, at the margin, the US tax system seems to incentivize replacing a worker with a machine; it seems to incentivize the development of technologies that substitute for workers. And so you might say that's the sort of
00:16:01
Speaker
incentive that you might actively intervene to change. So there are really interesting and, I think, important conversations at the moment about how we might steer technological progress exactly as you say: away from technologies that substitute for workers, towards those that complement them.
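To make that marginal tax incentive concrete, here is a minimal sketch with made-up numbers; the wage, machine cost, and tax rates are illustrative assumptions, not actual US tax figures:

```python
# Illustrative only: invented numbers showing how a tax wedge can tilt
# the automation decision at the margin (not actual US tax rates).

output_value = 100.0     # value produced per period by a worker or a machine

wage = 80.0              # pre-tax wage paid to the worker
payroll_tax_rate = 0.15  # employer-side taxes on labor

machine_cost = 80.0      # per-period cost of the machine (same as the wage)
capital_tax_rate = 0.05  # lower effective rate on capital, e.g. via
                         # deductions such as accelerated depreciation

labor_cost = wage * (1 + payroll_tax_rate)            # 92.0
capital_cost = machine_cost * (1 + capital_tax_rate)  # 84.0

print(f"Profit with worker:  {output_value - labor_cost:.1f}")    # 8.0
print(f"Profit with machine: {output_value - capital_cost:.1f}")  # 16.0

# Identical pre-tax costs, but the tax wedge makes the machine cheaper
# after tax, nudging firms toward substitution over complementarity.
```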
00:16:19
Speaker
But do you think that's ultimately possible? I'm guessing there will be strong economic incentives for developing technology that substitutes for human workers as opposed to complementing them, just because if you fully substitute a worker, you can cut out the middleman, so to speak, and get the productivity
00:16:42
Speaker
without having to pay a human worker. Yeah. So I do think there is a lot we can do here, but I do think there are limits. Not the limits you're describing, but one of the big limits is just practical: ex ante, it's quite difficult to know the impact a technology is going to have on the labor market. If you say to a computer scientist, well, look, is this going to substitute for a worker or complement them?

Future Human Roles in an AI World

00:17:11
Speaker
They'll say: you know, I don't know. I need to see it in the market; I need to see how people use it. So it's quite difficult to anticipate. On that point, actually, what's the best research we have for predicting whether some technology will substitute for human workers or complement them?
00:17:31
Speaker
I don't really think that literature exists. And just to explain why, think about something like the automatic teller machine, the ATM.
00:17:42
Speaker
When that was released, intuitively it feels like this is going to be a disaster for people working in banks. Their job is to hand out money over a desk. Oh my gosh, the ATM automates this.
00:18:03
Speaker
This is going to be the decimation of the world of bank employees. And yet, if you look at what happened in the United States once the ATM was rolled out, the opposite happened: you had a surge in bank tellers. And the reason why was, in retrospect, entirely understandable, but ex ante quite difficult to anticipate, which is that, in part,
00:18:26
Speaker
the nature of what it meant to work in finance changed. People weren't handing out cash anymore; they were offering financial advice or offering types of financial products and so on. So the nature of the work changed. But also, the US economy grew as well. And so
00:18:45
Speaker
there was new demand, new types of financial services. So the point, again, is that this helpful complementing force works in ways which are quite subtle and quite hard to anticipate in advance. That's part of it, the ex ante problem: it's just very difficult to anticipate what impact a technology is going to have on work.
00:19:06
Speaker
There's also the added complication that ex post, the impact these technologies have on work can change as well. Think of something like a GPS system in a car. In a world with human drivers in the seat, a GPS system complements human drivers, right? It makes them more productive at the wheel; it allows them to navigate unfamiliar roads. It's a complement. But in a world where we have driverless cars, these GPS systems just complement the driverless car instead, making it better. So over time, the effect that the same technology has on the demand for work can change.
00:19:48
Speaker
There's also, I think, a deeper question, which is not a technical one about whether
00:19:58
Speaker
technologies are going to complement or substitute, but a moral one. The implicit assumption in redirecting technology away from technologies that substitute towards those that complement is that there is something important about work that we want to protect.
00:20:18
Speaker
And there are two obvious reactions. One is that work is an important source of income. It's the main way that we share our income in society. For most people, their job is their main, if not their only, source of income.
00:20:31
Speaker
It's also a way of allocating meaning and purpose as well, of course. Work isn't simply a source of income; it's also a source of direction, fulfillment, and structure. And so you might say that because of those things,
00:20:46
Speaker
there are good reasons to want to redirect technological progress away from technologies that substitute towards those that complement. But as you'll know, there are many people who argue: well, look, there are other ways of sharing out income in society than through the work that people do.
00:21:02
Speaker
Work is not the only way of solving the distribution problem of how you share out prosperity in society. And hold on a sec: when you think about work and meaning, lots of people really don't like the work that they do. And lots of people
00:21:16
Speaker
would, if they could, find meaning and purpose outside of the world of work. So actually, it's not entirely obvious that we want to be steering technology away from substituting towards complementing. Perhaps we ought to be doing the opposite: a world with less work, where people get their income from non-work opportunities and find meaning and purpose outside the labor market, is perhaps a world that we ought to welcome. And there are lots of thinkers and writers who argue exactly that. So there are both important technical and important moral reasons to wonder whether the project of redirecting technological progress is as straightforward as some of its advocates suggest today.
00:22:03
Speaker
I think there is some merit to it, but I don't think it's the panacea that many hold it out as. Do you think these decisions we make on a societal level will play a large enough role that they show up in the economic metrics and statistics? What I'm asking here is, for example: do you think we'll simply decide that there are some jobs we don't want to automate or substitute?
00:22:32
Speaker
There are sort of two questions here. One is that we live at a time when the leaders of all the large AI companies say: we are going to build a system within a decade that can outperform human beings at every cognitive task that they do.
00:22:54
Speaker
And there are good reasons not to take those claims entirely at face value, not least the extraordinary financial incentives these companies have to talk up the capabilities of their technologies. That said, there are very few examples of technical problems in which we have invested as much finance and as much human capital as the pursuit of AGI.
00:23:27
Speaker
Stuart Russell, the computer scientist, estimated that we've currently invested something like ten times what we did during the entire Manhattan Project in the pursuit of AGI. It's enormous. And so I think asking the question:
00:23:43
Speaker
what if we succeed? What if we build these systems? What work might remain for human beings to do, even in a world where these systems and machines could do everything more productively than us, or at least every economically useful task more productively than us? I think that's an important question. That's a technical question about what work might remain, from a technical point of view, even in a world in which machines can do everything. And I think the answer to that question isn't obviously nothing. I think there are quite interesting reasons to think there is still work that human beings will do, even in a world in which machines could do everything better than us.
00:24:23
Speaker
But then there's also the question that you're asking, which is less a technical one and more a moral one: might there be certain roles, certain jobs, that we want to protect from automation? It might be that these technologies could do them, but we collectively decide that they shouldn't, from a moral point of view. And I think there are some of those as well.
00:24:49
Speaker
Yeah. On the first point, it sounds like a contradiction to say that AIs will be able to do everything, but there will still be work for humans to do. What could be some reasons for that?
00:25:00
Speaker
There's a few reasons.

Human Preferences vs AI Capabilities

00:25:01
Speaker
I mean, one is the economic reasoning that comes from the world of international trade, which is the idea of comparative advantage. Think about what happens when countries trade.
00:25:17
Speaker
Take a simplified story with, say, just the US and Vietnam, and imagine there are just two goods that are produced: robots and rice.
00:25:28
Speaker
The United States, in theory, has the absolute advantage in both of those things: it could produce robots more productively than Vietnam, and it could also produce rice more productively than Vietnam. But it doesn't make sense, from an efficiency point of view, for the United States to do everything. It makes sense for the countries not to follow their absolute advantages, on which the US has an absolute advantage in everything, but to follow their comparative advantage: what are they relatively better at?
00:25:57
Speaker
And the US is likely to be relatively better at producing robots than rice, and Vietnam is likely to be relatively better at producing rice than robots. So each country specializes in producing one of the things, and then they trade, and in that way the collective pie is greater.
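To make the arithmetic of comparative advantage concrete, here is a minimal sketch with made-up productivity figures; the numbers are illustrative assumptions, not figures from the conversation:

```python
# Comparative advantage with invented numbers: the US is absolutely
# better at both goods, yet each side gains by specializing where its
# opportunity cost is lower.

productivity = {
    # units of output per worker per period
    "US":      {"robots": 10, "rice": 20},
    "Vietnam": {"robots": 1,  "rice": 8},
}

for country, out in productivity.items():
    # Opportunity cost of one robot = rice forgone to make one robot.
    robot_cost_in_rice = out["rice"] / out["robots"]
    print(f"{country}: 1 robot costs {robot_cost_in_rice} units of rice")

# US: 1 robot costs 2.0 units of rice; Vietnam: 8.0 units of rice.
# Robots are relatively cheaper for the US and rice for Vietnam, so the
# US specializes in robots, Vietnam in rice, and trade enlarges the
# joint pie even though the US holds the absolute advantage in both.
```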
00:26:16
Speaker
And the logic of comparative advantage applies equally well, it seems to me, not just when we're thinking about the US and Vietnam, but when we're thinking about people and AI. Even in a world in which AI could do everything more productively than human beings, it doesn't necessarily make sense, from an efficiency point of view, for AI to do everything.
00:26:36
Speaker
Human beings are a productive economic resource. They should do what's in their comparative advantage, and similarly, AI should do what's in its comparative advantage. Now,
00:26:50
Speaker
that's not to say, though, that there will be enough of that work. It's one thing to say that there might be tasks and activities in which labor retains a comparative advantage; it's another thing to say that there's going to be enough demand for those residual activities to keep everyone in well-paid work doing only those.
00:27:09
Speaker
Those are two different observations. But there are economic reasons to think that even in a world in which these systems and machines can do everything, labor will still have a comparative advantage in certain things. Now, as these systems become relentlessly more capable,
00:27:29
Speaker
that comparative advantage might shrivel, and the demand for those residual activities might shrink even further. Again, it's not a necessity that there's going to be enough work, but it is a possibility.
00:27:44
Speaker
But then, alongside these economic thoughts around comparative advantage, there are also,
00:28:00
Speaker
I think, reasons to think that there are certain tasks and activities where we have a taste or a preference for how they are performed, not simply for how well they're performed. In other words, the process, not just the outcome.
00:28:21
Speaker
Yeah. This is what you discuss in "What Will Remain for People to Do?", a recent paper. And you separate these preference motives into aesthetics, achievement, and empathy. Maybe we could talk about those in turn.
00:28:35
Speaker
Yeah, sure. Just to frame the project: I don't think the challenge in the world of work right now, because of AI, is that there aren't enough jobs for people to do. The challenge is that there is work, and for various reasons, people can't do that work.
00:28:51
Speaker
I call this frictional technological unemployment, and there are various reasons for it. People might not have the right skills to do the work that's available. They might not live in the place where that work is being created.
00:29:03
Speaker
They might have a particular conception of themselves that's at odds with the available work, and want to stay out of work to protect that identity. I think those are the sorts of challenges we face now. But in light of the technological developments taking place, and in light of what those at the vanguard are telling us, we also need to ask the question: what if they succeed?
00:29:27
Speaker
What if we build AGI? What does that mean for the world of work? That's where this paper sits. It was published by Columbia a few weeks ago,
00:29:41
Speaker
and it takes the premise of AGI as given and then asks: well, what will remain for human beings to do? There are the economic reasons around comparative advantage, but there are then also these preference reasons, preference limits.
00:29:58
Speaker
And these are all different cases in which we seem to value how a particular task is done, not simply how well it's done. The aesthetic limits are
00:30:09
Speaker
really interesting. When you walk into the Sistine Chapel and look at the ceiling, you think: gosh, isn't that beautiful? But you also think: isn't it remarkable that a human being did that? In other words, we value the fact that that painting was done by a human being, not simply that it's extraordinarily beautiful.
00:30:28
Speaker
And you might say that so long as we value the process through which a task is done for these sorts of aesthetic reasons,
00:30:37
Speaker
then by definition, these are the sorts of things these technologies will struggle to do, because they are not human. And there are more prosaic examples of it. We value the fact that a suit is handcrafted by a tailor, or the fact that a chocolate is hand-molded by a human artisan chocolatier. Or, more prosaically, there's a great story about a Michelin-starred restaurant in the UK that sold
00:31:13
Speaker
automated coffee capsules as their coffee. I don't know what machine it was, but they were selling automated coffee capsules for their coffee without telling the customers in the restaurant. And when the customers found out, they were completely furious. The reason for their fury is interesting: in blind taste tests, people often struggle to tell the difference between an automated coffee and one handcrafted by a great barista. But what they were complaining about wasn't actually the taste of the coffee. It was the fact that they were paying, they thought, for a Michelin-starred process.
00:31:53
Speaker
They wanted the artist craftsman making their beautiful cup of coffee, and they were instead getting a machine. Yeah. How should we think about this, then? Is this simply an instance of the labor theory of value re-emerging, where people think that the labor that went into creating something is the value of that thing?
00:32:13
Speaker
Or is there something deeper here? Is it that the labor that went into something becomes, in some sense, part of the story, part of the product, part of what you're buying? Yeah, I don't think we need to go Marxist on it. I just think people have tastes and preferences not simply for outcomes, but also for processes.
00:32:36
Speaker
Let me give you an example. So that's the aesthetic one. There are also, I think, interesting reasons of achievement as well: achievement limits, too.
00:32:48
Speaker
Anybody interested in AI will have followed, over the decades, the progress in chess-playing machines. Today, the very best chess machines can beat the very best human beings at a game of chess.
00:33:02
Speaker
And yet, when the very best human beings sit down to play each other at chess, there is huge demand to watch them duel.
00:33:16
Speaker
And the reason is that we don't simply care about the efficiency with which the pieces are moved around the board; we also care about who, or what, is doing the moving. We like to see human beings outperforming relative to some standard benchmark of achievement. And so there are lots of areas of our lives where, again,
00:33:43
Speaker
for reasons of achievement, we value not simply how efficiently a task is done, but also how it's done. And that is an interesting role for human beings as well, I think.
00:33:55
Speaker
There's also, I think, a final type of preference limit: the empathetic or emotional ones. Actually, before we get to the empathetic one, on the achievement one: which jobs do you imagine would exist in the future based on our preference for human achievement?
00:34:16
Speaker
Because we can't all be Magnus Carlsen, or all be Usain Bolt, or someone who pushes the limits of what humans can achieve. So how will this give rise to jobs?
00:34:29
Speaker
Yeah, I think you're exactly right. And again, I don't want to slide from the narrow claim, that there are some tasks and activities that will remain for people to do, to a broader claim, that there's going to be enough demand for those tasks and activities to provide everyone who wants it with well-paid work.
00:34:46
Speaker
Those are two very different observations. More generally, I think it's a mistake people make when they think about the longer-term future of work: it's one thing to say this is a task that human beings will always do, and it's another thing to say there's going to be enough demand for that task to provide everyone with a job doing it.
00:35:08
Speaker
I think anything involving a degree of competition, of sport, of rivalrous interaction, these sorts of reasons might bear out.
00:35:23
Speaker
Whether it's competition on the sports field, or intellectual competition, or anything that's about achievement relative to some standard human benchmark.
00:35:37
Speaker
But by definition, that's exclusive, because you are valuing the exceptional relative to the average, and in a world in which everyone's exceptional, no one's exceptional. So it's not something that is necessarily going to provide everyone with well-paid work.
00:36:02
Speaker
Where there's a broader possibility, I think, is around the emotional or empathetic aspect, where we have a taste
00:36:16
Speaker
for a human being for emotional reasons. Say, end-of-life care. It matters, the very fact that a human being is sitting there in those last moments of your life. That is the thing that matters.
00:36:32
Speaker
That it's a human being, a fellow person, sitting with you, understanding your thoughts, feeling your feelings.
00:36:45
Speaker
Again, you might say that if that's the thing you value, then it's difficult to see how a machine could ever do that. I do think, though, that in all these different cases there are limits to the limits.
00:36:56
Speaker
And this is another thing I try to do in the paper: ask how robust these limits are as these systems and machines become gradually but relentlessly more capable.
00:37:11
Speaker
For instance, think about the aesthetic limits. It might be the case that these systems in the future are able to compose music that is so extraordinarily moving, or a painting such that you see it and burst into emotional fervor, or a piece of text that somehow captures just so perfectly something you were thinking or feeling. It's possible that these systems could achieve aesthetic outcomes which so
00:37:49
Speaker
dwarf what human beings are capable of doing that our attachment to the fact that a human being was responsible for an aesthetic outcome might just seem tired and antiquated. And you don't have to be high and mighty about it. Think about the tailored suit. Yes,
00:38:17
Speaker
it's lovely to have a suit handcrafted by a human being, but if one arm is shorter than the other, and it pulls in the back, and it's a bit tight on the bum, and actually the automated suit is just so extraordinarily comfy, then our

Ethical Considerations in AI Task Allocation

00:38:36
Speaker
attachment to the human craft
00:38:40
Speaker
might fade away. So I think there are limits to these limits as well, and one of the things I spend quite a lot of time doing is thinking about what those limits to the limits might be. But there's also a third category, it's worth saying. There are the general equilibrium limits, due to this idea of comparative advantage: that even in a world in which machines could do everything more productively, it still might make sense to employ labor doing what's in its comparative advantage.
00:39:06
Speaker
Then there are these preference limits, where people have a taste or a preference for how a particular task or activity is done, and I think there are various ways that might work. But then there are also moral limits, where it's not simply that we have a taste or preference for a human being performing a particular task or activity, but we believe that a human being ought to do it, from a moral point of view.
00:39:34
Speaker
That even if a system or machine could do the task or activity, it shouldn't. There's something important about a human being being involved. An example here might be a judge deciding a legal case, or we could imagine a parole officer, or perhaps the person responsible for making sure that military operations are in compliance with international law.
00:40:04
Speaker
Exactly right. I think there are lots of micro examples of tasks or activities that have this moral flavor to them, where we want to keep a human in the loop, essentially. So there are all these task-specific moral limits.
00:40:23
Speaker
There's also, though, the broader issue of AI alignment, and the values that direct the AIs we're building more generally. You might also think that that is an activity human beings ought to be involved in as well.
00:40:45
Speaker
There are moves, of course, to automate aspects of alignment, to remove human beings from those sorts of judgments. You might say that's a mistake. But again, I think there are limits even to those moral limits.
00:41:00
Speaker
Yeah. And the legal one is very interesting.
00:41:04
Speaker
It might be the case that an automated judge is able to reach such a well-honed, sophisticated, well-designed piece of legal judgment that
00:41:23
Speaker
it becomes morally indefensible to use the flawed human alternative: the human judge who, famously, gets hungry before lunch and changes their sentencing decisions, becomes stricter. Or think about the battlefield. Given the increasingly dynamic and
00:41:54
Speaker
quick nature of conflict, and the fact that decisions need to be made so quickly all the time given the sorts of technologies being used, the idea that you might send a decision down the chain of command to a human being and then up again,
00:42:16
Speaker
perhaps in the process losing an edge, losing a person, might come to seem morally questionable. So I think there are interesting limits even to the moral limits. And the argument I make in the paper is that what you think about whether there are limits to those moral limits depends in part upon whether the moral theory you have in mind is process-based or outcome-based.
00:42:47
Speaker
When you think about the role of AI in the legal system, and whether it is morally acceptable to use an AI or not, is your moral theory appealing only to outcomes?
00:43:05
Speaker
In other words: does this system do a better, more effective job of making a sentencing decision? If that's the nature of your moral theory, that it's purely based on outcomes, then clearly there are limits to those moral limits, because there might come a time when these systems are just so extraordinarily capable that the outcomes they deliver are better, in some sense, than the outcomes human legal reasoning is able to reach. And so there are limits to that sort of moral objection.
00:43:39
Speaker
But if your moral theory appeals to the process in some way, then that might put a brake on some of those limits. In the extreme, your view might be that it is important for a human being to be making these sentencing decisions however good the automated alternative becomes; that there is something fundamentally important about it; that only a fellow human being ought to be able to make the decision whether or not to lock someone up for the rest of their life.
00:44:14
Speaker
If that's your view, and it's independent of outcomes, if you're completely attached to process, then maybe the moral limit is robust there, because no matter how good outcomes become, your attachment to the human process is going to stand in the way. Now, for those who hold those sorts of purely process-based moral theories about the moral limits of AI, I wonder whether, as these technologies become more and more capable and the outcomes they deliver become better and better across lots of domains, that attachment to purely process-based moral reasoning is going to hold up.
00:44:58
Speaker
Yeah, it does seem to me that we will probably react to commercial pressures and competitive pressures, between governments and between companies and so on.

Transforming Education with AI

00:45:10
Speaker
These pressures will mean there are strong arguments, or at least we will see strong arguments, for implementing these systems even in the cases where we might imagine there are moral limits.
00:45:23
Speaker
But in the process of doing so, in the process of accepting these arguments, we will disempower ourselves, right? If we take ourselves out of the loop in critical decisions, that's something that's difficult to take back.
00:45:37
Speaker
Because if you have the automated judge or the automated military system, then you're competitive. And what's the reason for reintroducing humans back into that process?
00:45:54
Speaker
But in some sense, perhaps we need the moral arguments, given that we might be entering a world in which AIs are simply better than humans. Do you think that, in the end, it is the moral arguments that will make the biggest difference to what we end up doing?
00:46:13
Speaker
In my view, not just in the end: I think already today those moral arguments, the moral limits to automation, are almost more important than the technical limits. There are many things that we could use these
00:46:24
Speaker
technologies to do that we don't do, not for technical reasons, but often just for moral ones. Or even, as a cultural point: even without appealing to some sophisticated piece of moral reasoning, we just feel socially and culturally uncomfortable using these technologies in certain settings. As economists, we spend a lot of time thinking about what the technical limits of these technologies are, what they can and cannot do from a technical point of view. But I think these moral, cultural, and social constraints on technology already bind us a great deal.
00:47:07
Speaker
Yeah. My son is in daycare right now, and he's going to be 20 years old in 2044. How old is he now? He's 15 months old.
00:47:19
Speaker
Okay. And so when I think of the pace of AI progress, when I think of how much better these models have gotten over the last five years, say,
00:47:32
Speaker
it makes me wonder what will be left for him to do. What do you think the role of 20-year-old people in 2044 is going to be? Having spent the last decade and a half or so
00:47:53
Speaker
observing and writing and thinking about the impact of technology on work, I think one of the biggest mistakes that we have made collectively is to think that we are clever enough to predict which jobs are going to need to be done,
00:48:11
Speaker
and as a result, what skills and capabilities are going to be most valuable in the future. And there are so many examples of this, contemporary and historical. Who would have imagined, if you'd gone back to the late 18th century, to the start of the Industrial Revolution, and
00:48:34
Speaker
whispered to somebody that in a few hundred years' time, a national health service in Britain would employ more people than there were men working on farms across Britain?
00:48:46
Speaker
It just wouldn't have made sense. There barely was healthcare in the spirit that we have it today, and there certainly wasn't public provision. The NHS is the fifth-largest employer in the world, or something like that. It just would have been unimaginable.
00:49:04
Speaker
The way in which life transformed in the centuries since, the rise of healthcare, the rise of leisure: just incomprehensible. But again,
00:49:15
Speaker
you don't have to go back to the Industrial Revolution. Think of the start of the internet era. If you had whispered to somebody that in ten years' time people would be finding work as search engine optimizers, it wouldn't have meant anything.
00:49:28
Speaker
Or even in 2019, if you had said:
00:49:33
Speaker
you're going to grow up and be a prompt engineer, it just wouldn't have meant anything. Because the technologies that transformed our lives, and the way in which our economies changed, weren't simply hard to imagine; they were almost unimaginable. We just didn't have the concepts.
00:49:50
Speaker
And I think this is just as true today. If we think about the future, given the pace of change you're talking about, the idea that we can predict jobs in a couple of decades' time just seems to me incredibly hubristic. And so one of the running themes of my new book is the immense uncertainty that we face.
00:50:20
Speaker
And that is the challenge: we have to find a way to respond to that uncertainty. We are setting young people up to fail if we say, these are the jobs that are going to be available for you to do,
00:50:33
Speaker
and these are the skills and capabilities that you must learn in order to do them. Then perhaps one suggestion that appears in people's minds is that we should educate our kids in a more general sense: we should teach them how to learn things, we should train their general reasoning skills,
00:50:55
Speaker
give them very general skills that can then help them adapt to many different future states of the world. Do you think that's plausible? Or is this a form of cope, this idea that we will be able to adapt?
00:51:10
Speaker
I think the most important thing is that we teach people how to use AI effectively. I think we need to be spending something like
00:51:23
Speaker
a third of our time in school and university learning how to use AI effectively. That is the most important thing we can do.
00:51:36
Speaker
And when I say use AI effectively, I don't simply mean how to write prompts and get things out of the systems, although I think that's important. I also think it's important that we teach people the history of these technologies, where they came from, and the way in which
00:51:51
Speaker
we need to think about problems in order to use them effectively. The limits of these systems: the fact that they hallucinate, and I expect they're going to hallucinate for some time; the fact that they make mistakes, and that when they do, we're able to understand why and interrogate them.
00:52:08
Speaker
And then also the ethical and moral issues around their use. I think there is a whole AI curriculum, and it's a really exciting project that we need to be writing together, crafting now.
00:52:19
Speaker
And I think that's one of the most important things we could do. It's not something we can just tack on to the existing curriculum. It needs to be fundamental and it needs to be substantial. So I think something like a third.
00:52:37
Speaker
And that's not just plucked out of thin air. I spent a long time teaching mathematics and economics to undergraduates at Oxford, when I was a fellow at Balliol College. I taught economics across lots of different subjects there: to people studying politics, philosophy and economics, or history and economics, or economics and management. I was teaching in the college, so every year I had a small group of around 20 students. And what I did in that first year was this:
00:53:21
Speaker
I took a third of their time, when they were learning economics, just to teach them mathematical methods, so that they could then take those tools into all the different domains of economics they'd be working in, whether it was labor economics or industrial economics or macroeconomics or micro, whatever it was. There was a sense in which these mathematical skills were fundamental, and they needed to spend, right at the start,
00:53:55
Speaker
a big chunk of time learning them so that they could then deploy them in all these other settings. And that is exactly how I think about AI today: it's a technology where we need to be asking how we use it in every discipline, and that requires a big chunk of students' time to be dedicated to learning how to use these technologies effectively.
00:54:25
Speaker
And that's one path, leaning more into AI. Some opposition I've heard to that is this: you're a professor yourself,
00:54:36
Speaker
and you must have encountered some homework that seemed to you AI-generated, and then perhaps wondered whether your students are actually learning anything, or whether they are simply using AI to get through the homework, to write the essay, to solve the math, and just handing it in without learning much.
00:54:57
Speaker
So another direction you could go in is to become almost Luddite with regard to AI and go back to pen and paper, in order to ensure that students are actually learning something.
00:55:09
Speaker
And perhaps then later, when they have basic skills, when they are good at reasoning, writing, speaking, and doing math, reintroduce AI and teach them how to use it.
00:55:23
Speaker
I think what you're touching on is one of the fundamental challenges that we face. And it really is one of the big problems that I'm setting out to solve in the new book, What Should My Children Do?
00:55:35
Speaker
I think the challenge for all educators in response is not to go backwards, but to go forwards. To ask: okay, how do I make what I teach deeper?
00:55:47
Speaker
How do I make it harder? How do I enable these kids to use these technologies to understand ideas, to solve problems, to make discoveries that would have been unimaginable before these technologies came around? That seems to me to be what we ought to be doing,
00:56:08
Speaker
rather than clinging to old-fashioned, traditional ways of teaching and educating. So I think the challenge is how we, as teachers and educators, make the substance of what we're teaching harder, more challenging.
00:56:29
Speaker
The burden, the ball, is in our collective court, because it's just, I think, unrealistic to think that everyone isn't going to be using these technologies in years to come.
00:56:44
Speaker
And more practically, it's also just unfair, because if we strip the environment of technology, we are teaching young people in an environment that simply does not reflect the world that they're going to enter into.
00:57:06
Speaker
We are setting them up to fail.

Guidance for Parents on AI and Social Media

00:57:09
Speaker
I think we've got to be asking how we can make it harder, more challenging, more difficult. Do we risk losing some of the weaker students if we do that? If you set a task that is difficult enough that the average student needs AI to be able to solve it, is that a risk for the weaker students?
00:57:31
Speaker
No, I don't think so. I mean, no more so than the challenge of weaker students in a world before AI. On the contrary, I think one of the promises of these technologies is that they are able to tailor what is being taught, and how it's being taught, to the particular strengths and weaknesses of different students. One of the big challenges of the traditional educational model is that it's not particularly tailored. We know that one-to-one tuition with a human being is incredibly effective.
00:58:07
Speaker
We were lucky to offer it at Oxford in the tutorial system, and I saw how effective it can be. Extraordinarily effective. But most institutions can't afford to have one tutor for every one, two, or three students. And the promise of these technologies is that they can replicate the kind of interaction you might have with a human tutor, but do so at a far lower cost. And so that's one of the things I think is very exciting.
00:58:34
Speaker
And, you know, people talk about personalized learning and things like that, but we've really barely scratched the surface. There's a lot more for us to do and explore.
00:58:46
Speaker
Do you think there are lessons for parents here about how, or whether, to steer your child in a certain direction? For example, 10 years ago, it seemed like learning how to program was the perfect path forward.
00:59:05
Speaker
And now it turns out that it's exactly what these systems are good at doing. Yeah, now it's unclear, because reasoning models are exceptionally good at programming, at mathematics, and so on.
00:59:16
Speaker
So you emphasized, you spoke about, the uncertainty of the future. How should parents react to that? I think the biggest risk among parents stems from the legitimate concerns about the impacts of smartphones and social media on young people. And I think, you know, there are real issues around social media, particularly around the mental health of young people.
00:59:43
Speaker
But what I really worry about is that legitimate concerns about those technologies kind of seep into how parents think about AI as well. AI is not the same thing as social media.
00:59:54
Speaker
And if we bundle technology into a kind of monolithic, indivisible lump of bad stuff, parents are going to let down their kids in preparing them to use these technologies.
01:00:10
Speaker
So that is the biggest warning I have for parents at the moment, which is, you know, I share many of the concerns that are out there about smartphones and social media, but AI is a very different beast.
01:00:24
Speaker
And it's a mistake to conflate them, to conflate the concerns we have about the former with the extraordinary opportunities of the latter. And that's part of the philosophy of the book.
01:00:37
Speaker
As a final question here: there are groups of people, and here I'm thinking of children, perhaps the elderly, and so on, that have more trouble than other people adapting to change.
01:00:52
Speaker
Kids tend to like structure, elderly people tend to dislike change and so on. How do you think about those groups in society when you're thinking about a more radically uncertain world, a world that's changing faster?
01:01:05
Speaker
What should we do on a societal level to help those groups thrive? The new book is focused exactly on that first group, because I think you're exactly right.
01:01:17
Speaker
I think we live in an age of anxiety, where almost every day we are told stories of the existential challenges that we face and reminded of our incapacity for dealing with them. And I think there is a hope that has driven many parents before me: that if they work hard, and if they love and look after their families, their children's future is going to be better than their past.
01:01:55
Speaker
And yet I think now, for the first time in some time, there's uncertainty about that. I think many parents are not sure that if they keep their head down, work hard, and look after their families, their children's future is going to be better than their own.
01:02:18
Speaker
And that's quite a dangerous thing, when people lose faith in the future. And that is, I think, a situation that many parents find themselves in. And one of the
01:02:32
Speaker
reasons I'm, in contrast, hopeful about the future is the possibilities of AI,
01:02:45
Speaker
particularly in the educational setting. I think we can, if we get it right, use it to do really extraordinary things. I think the traditional education system is broken.
01:02:56
Speaker
Not enough people have access to a good enough education. Not enough people are adequately prepared for the world that exists beyond the artificial environment of school and university. And the possibilities of these technologies, in rethinking what we teach, how we teach, and when we teach, are really extraordinary. And so that's why I'm spending a lot of my time at the moment trying to gather all these thoughts together so that, exactly as you say, we can help this group of people, young people, really flourish in the world to come.
01:03:32
Speaker
Great. Daniel, thanks for chatting with me. It's been a real pleasure. Such a pleasure. Thanks so much, Gus.