
Anton Korinek on Automating Work and the Economics of an Intelligence Explosion

Future of Life Institute Podcast

Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com  

Timestamps: 

00:00 Automation and wages 

14:32 Complexity for people and machines 

20:31 Moravec's paradox 

26:15 Can people switch careers?  

30:57 Intelligence explosion economics 

44:08 The lump of labor fallacy  

51:40 An industry for nostalgia?  

57:16 Universal basic income  

01:09:28 Market structure in AI

Transcript

Introduction to AI Economics

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker, and I'm here with Anton Korinek. Anton, welcome to the podcast. Hi, Gus. Thank you for having me. Do you want to introduce yourself to our listeners? Hi, I'm Anton Korinek. I'm a professor of economics at the University of Virginia and the economics of AI lead at the Center for the Governance of AI. So then you're probably the perfect person to ask how automation and wages affect each other. Ah, yeah, that's the billion-dollar question that economists have been debating for at least the past 200 years.

Is Automation Good or Bad for Wages?

00:00:40
Speaker
So if we go back to the beginning of the Industrial Revolution, there was this big debate between the Luddites on the one hand and the emerging economics profession, and I guess the entrepreneurial class, on the other hand, about whether automation is good or bad for wages.
00:00:58
Speaker
From a big-picture perspective, economists have been arguing for the past 200 years that automation is good because it is what ultimately makes us as a society much wealthier. Yet from the perspective of an individual worker who gets automated, I think the answer has been unambiguously obvious: automation is bad for them. The big question is how we can reconcile these two opposing perspectives. How can we make sense of that? And for the past 200 years, I guess economists have argued: well, automation is painful for the individual who gets automated.
00:01:44
Speaker
But it allows our economy to produce more with less. It makes the economy more efficient. And ultimately, after some adjustment period, it is also good for the workers that are experiencing automation because they can switch to more productive jobs and jobs that will ultimately generate higher income for them.

Public Concerns Over AI and Job Automation

00:02:07
Speaker
So now we are facing the age of AI. And I think job automation is one of the greatest public concerns when people play around with tools like ChatGPT and they see that these large language models
00:02:25
Speaker
can perform more and more intelligent tasks. So economists have jumped to their natural reaction, to what we've been preaching for the past 200 years: well, we need some automation, and we certainly need technological progress for our economy to grow and for workers to ultimately be better off. But the big question, and that's a question that I'm taking very seriously, is whether this time is different. And you have a model of how to think about artificial intelligence and wages, where one of the things you conclude is that human wages depend on the pace of automation, on how fast different jobs and different tasks are automated.

Impact of Automation Pace on Wages

00:03:15
Speaker
Why is it that wages will rise if automation happens slowly?
00:03:21
Speaker
Yeah, that's one of the really surprising findings. So I should mention this paper was motivated by this notion that a lot of people in Silicon Valley have: that at some point in the future, machines will be able to do literally everything that humans can do. And if that's the case, then it is easy to see from an economic perspective that the wages of workers are going to be at the same level as the cost of the machines that can do the same things. So in some ways, machine costs will be a ceiling on workers' wages. Now, that may be true in some
00:04:05
Speaker
future, depending on whom we listen to, maybe in five years or 10 years or 20 years, or much longer.

Economic Transition to Automation

00:04:13
Speaker
But the big question is what our economy will look like during the transition to that future, as we progressively automate more and more and more. The interesting thing is, if we have a little bit of automation, then that by itself is again painful for the automated workers. But as long as we accumulate enough capital, it is actually beneficial for the rest of the workers, because it makes their contribution to the economy comparatively more valuable.
00:04:49
Speaker
So if we automate a little bit, then we can suddenly produce this sliver of automated goods much more cheaply. And that means all the other goods rise in value, and the labor producing all those other goods and services rises in value. So there is this balance between automation raising the value of what is left unautomated, and automation displacing the workers in the goods or services that, let's say, the AI or the machines can suddenly perform.

Historical vs. Future Automation Scenarios

00:05:28
Speaker
And this race between essentially automation and what is still left for humans is what determines the level of wages.
00:05:39
Speaker
This is the situation we've seen historically: people become more productive when you have more machinery and more automation of tasks that traditionally required a lot of human labor. And the question is then: can we automate, say, 90 or 99% of tasks and then have a lot of humans move into that last 10 or 1% before we reach full automation? What do you think of that? Yeah, I think that's actually almost what has happened since the start of the Industrial Revolution. If you look at
00:06:22
Speaker
the kinds of societies that we had 250 years ago: the vast majority of people worked in agriculture.

Automating Jobs: Lessons from the Industrial Revolution

00:06:31
Speaker
It was very hard work, and it was basically necessary for everybody's survival. And today, in a country like the US, less than 2% of the population still works in agriculture, and the rest have all moved to what our economic models would call more complex, more advanced jobs. Of course, the jobs remaining in agriculture are also much more complex and advanced, because they involve operating sophisticated machinery. But in some sense, you could say, well, we have already automated 98%
00:07:08
Speaker
of those tasks that people worked on 250 years ago, because now only 2% of workers can produce the kind of agricultural output that we all need as the basis of our survival. So we are at 98% automation compared to pre-industrial times, and we are automating more. And I think there is still quite some way to go where humans can focus on their remaining tasks, so to say, before we are in the kind of world that we mentioned before, where everything would potentially be automated.
00:07:49
Speaker
If we then imagine that today, 2024, is year zero, do you think the economy could do the same again, so that we could automate 98% of all jobs that exist in today's economy and end up in a similar situation?

Machines Performing All Human Tasks

00:08:06
Speaker
Gosh, it's so hard to imagine, right? Yeah, it is for me too. In principle, there is no reason why that shouldn't happen. I should also mention, though, that it is highly plausible that at some point in maybe 10 years or maybe 20 years, we will have machines that can literally do everything that a human can do, including inventing new tasks and then performing those new tasks that we humans or that the machines have invented.
00:08:41
Speaker
That's something we've explored in depth on this podcast. So yes, that's definitely a possibility. You describe wages as a race between automation and capital accumulation. How do these variables fit together? Yeah, so how does automation help us with our wages? That's the force that I described before. If you automate something, you can suddenly produce the automated goods or services much more cheaply. And that means the value of the remaining goods rises, and the value of the labor producing the remaining goods rises.
00:09:20
Speaker
Now, however,

Rapid Automation and Wage Reduction

00:09:22
Speaker
to cheaply produce those automated goods and services, we need the requisite capital. We need, for example, now in the age of generative AI and large language models, the server farms to operate the language models. Or, let's say, in earlier times we needed the excavators to automate, say, creating new buildings or building roads. And if we don't have that capital, then the benefits from automation don't really materialize. So let's say
00:09:59
Speaker
we develop the technology to build excavators, but we actually just use a hundred of them all around the world. Then you won't really see a macroeconomic productivity impact. You need tens or hundreds of thousands, probably millions of excavators working all around the world in order to deliver the productivity benefits from the technological innovation of inventing excavators. And that means you need this capital accumulation, the accumulation of the machines that can do things cheaply, in order for human labor to benefit from performing the remaining tasks.

Gradual AI Implementation Benefits

00:10:42
Speaker
So when I speak of this race between automation and capital accumulation,
00:10:48
Speaker
what it means is this: if we automate very quickly and displace what humans can do, but we haven't actually accumulated the machines that can produce the automated things, then the economy does not grow very much, but labor can already become devalued and displaced. And if that's the case, then wages are likely to decline. On the other hand, if we have sufficient capital accumulation, if we produce lots of machines that can perform the automated tasks cheaply, then the value of human labor in the remaining tasks will go up.
00:11:29
Speaker
So human wages, or we humans, are better off in a situation in which we gradually implement AI technology, and we are worse off, and our wages are lower, in a situation in which we quite suddenly jump to human-level intelligence across the board.

Relevance of Labor in Advanced Automation

00:11:48
Speaker
Yeah, that's a really interesting question. And so let's say our labor incomes will definitely be lower if we have very rapid automation and capital accumulation has not kept up yet.
00:12:06
Speaker
However, if we, and this is maybe a bit utopian now, but if we manage to share the benefits of these really rapidly advancing machines a little bit more broadly, then we would actually all be better off, because these machines can produce so much more than we humans by ourselves can produce. So the big question for the future, and I focus on that in several of my papers, and this is what I think is the most fascinating question in this context,
00:12:45
Speaker
is: in a future in which machines can do more and more and will at some level be the equivalent of humans in terms of their capabilities, do we actually want to maintain a role for labor, because that's the way our societies have been organized for the past few hundred years? Or do we want to find a better way of distributing income, without all having to work on something that the machines can do better than us?
00:13:15
Speaker
So you describe the effects of automation as first increasing wages and then decreasing wages. Why would the effect be that way? So if we are in an economy where we have very little automation, then there is a lot of low-hanging fruit, so to say. If we automate just a little bit, we'll have very high productivity gains from that, and those productivity gains will ultimately benefit all the workers. On the other hand, if we are in an economy where there's just very little left for humans, if we displace even more of what is left,
00:13:58
Speaker
that makes it likely that the displacing force of automation will predominate and the productivity gains will not filter through to the

Human Task Complexity vs. AI Capabilities

00:14:09
Speaker
workers. So there's this hump-shaped relationship between automation and wages. If we have just a little bit of automation, then more automation helps workers. If we already have a lot of automation, and in particular as we approach the very last tasks left for humans, then automation hurts workers.
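A minimal numerical sketch of this hump shape, assuming a stylized CES task model (the setup and all parameter values here are illustrative choices for the example, not necessarily the exact model in Korinek's paper): a fraction beta of tasks is produced with capital K, the rest with labor L, and tasks are complements.

```python
# A stylized CES task model of automation and wages (illustrative only).
# A fraction beta of tasks is produced with capital K, the remaining
# 1 - beta with labor L; tasks are complements (sigma = 0.5, rho = -1).
import numpy as np

def wage(beta, K, L=1.0, rho=-1.0):
    """Marginal product of labor when a fraction beta of tasks is automated.

    Output: Y = [beta*(K/beta)**rho + (1-beta)*(L/(1-beta))**rho]**(1/rho)
    Wage:   w = dY/dL = Y**(1-rho) * (L/(1-beta))**(rho-1)
    """
    Y = (beta * (K / beta) ** rho + (1 - beta) * (L / (1 - beta)) ** rho) ** (1 / rho)
    return Y ** (1 - rho) * (L / (1 - beta)) ** (rho - 1)

betas = np.linspace(0.05, 0.95, 19)
for K in (1.0, 10.0):
    w = np.array([wage(b, K) for b in betas])
    print(f"K={K}: wage peaks at beta={betas[w.argmax()]:.2f}, "
          f"wage at beta=0.95 is {w[-1]:.3f}")
```

With K = 1 the wage peaks around beta of 0.3 and collapses as automation nears completion; with K = 10 it peaks later (around beta of 0.7) and higher. That is the race between automation and capital accumulation in miniature.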
00:14:32
Speaker
And all of this assumes that there is a ceiling of complexity that human workers cannot reach beyond. So there is a ceiling on how complex a task can be for humans to still be able to solve it. Is there a debate around whether humans are limited in the complexity of the tasks we can solve? Are there some people arguing that we can perhaps solve tasks of increasing complexity without bound? Yeah, you are hitting the nail on the head in some ways.
00:15:03
Speaker
This question of how much humans can ultimately do versus how much machines can ultimately do is at the center of this conversation. And again, I want to start with the perspective from the past 200 years. For the past 200 years, there was no question that whenever we automated something simple, let's say spinning and weaving, we humans on average focused our attention on more complex things and created more value in those complex things. And in the narrative that economists have been telling the public, to push back against Luddism, against the lump of labor fallacy, and against a basically oversimplified understanding of how labor markets work,
00:15:57
Speaker
economists have always focused on this new task creation. And I think for the past 200, 250 years, that description has been spot on. That's exactly what has happened. And as a result, we are so much wealthier today than we were 200 years ago, and wages are about 20 times what they were back then. Now, the big question is how far we can extrapolate that into the future. When you speak to neuroscientists, for example, or perhaps information theorists, they will observe: well, the brain is, at some level, you could say, a biological machine. The brain is an information processing device. And it's a really amazing one.
00:16:52
Speaker
And I love all the human brains that we have, because right now they are the best information processing devices around.

AI Surpassing Human Brain Complexity

00:17:01
Speaker
But at the same time, they have bounded complexity. They can do only so much. They have 85 billion neurons or something like that, and we can't really transcend that limitation very easily. On the other hand, computers can basically
00:17:23
Speaker
grow in size: the neural networks that we are training can have many more neurons, and can ultimately have more connections, than we have in our human brains. And that means, from a pure information processing perspective, I think it is fair to say that human brains face much greater limitations than neural networks, than AI systems, than, I want to say, artificial brains. And so, bringing this back to our discussion of labor: throughout our history, we have never had machines that were anywhere near the complexity of our human brains.
00:18:08
Speaker
But now we are suddenly on the cusp of that. And that may fundamentally change what we have experienced in the past. So in the past, we have always automated some things, and then our brains have been able to do more complex things. And I want to add a footnote here: that also had to do with getting more and more education, which is something that we have to question going forward. But once these machines clearly surpass our
00:18:40
Speaker
cognitive and intellectual capabilities, it is by no means obvious that we humans would be able to invent or execute new tasks that are not also amenable to the machines. Is it the case that complexity is the same for machines and for humans? So how do we measure the complexity of a task that humans are performing? And how do we compare it to the same task if that task is performed by a computer? Yeah, so the ultimate measure of complexity that's relevant for machines is computational complexity.

Improving Machine Algorithms

00:19:17
Speaker
How many computations, how many floating point operations do we need to execute in order to produce some results?
00:19:27
Speaker
And now I should say this is kind of a moving target for the machines, because we are figuring out better and better algorithms every month. For the past 15 years, for example, there are results suggesting that the algorithms we have developed have become twice as efficient, or two and a half times as efficient, every single year. So that means that to perform the same task, we need to execute fewer computations. Now, for us humans, the computational complexity of something is much harder to wrap our minds around, no pun intended, in some ways. And this is what, for example, Moravec's paradox hints at. Some things seem very easy to us because we can do them. We can, for example, walk without really thinking hard about it.
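Taking those efficiency estimates at face value (a constant yearly improvement rate, which is an assumption), the compounding is steep: the compute needed for a fixed task shrinks as

$$C(t) = C_0 \cdot 2^{-t} \quad\Rightarrow\quad C(15) = \frac{C_0}{2^{15}} \approx \frac{C_0}{33{,}000},$$

and at the higher estimate of 2.5x per year, the 15-year factor is about $2.5^{15} \approx 10^{6}$.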
00:20:21
Speaker
And then other things that are very simple for computers, like, let's say, adding up two 10-digit numbers, are very hard for our brains, or maybe even multiplying them. Perhaps we should explain more of Moravec's paradox. This is a paradox about how what intuitively seems easy to us humans might not be easy for machines. Perhaps you could explain a bit what the paradox is about.

Moravec's Paradox and Task Complexity

00:20:45
Speaker
Yeah, that's exactly right. And the paradox ultimately comes from an observation about how easy it is to automate robotic tasks.
00:20:55
Speaker
Because in robotics, there are lots of things that we humans can perform very easily that are still a challenge for machines. And I should say, these days we're making really rapid advances in robotics, so any example I mention is probably something that will soon be easily performed by robots. But an example that was a really good one just a couple of years ago was folding laundry. Folding laundry is something that we humans can do relatively easily. I can teach my six-year-old how to fold laundry. But for a robot it was almost impossible until just a couple of years ago, because there are so many minute details, so many minute movements, so much coordination between hands and eyes, robotic hands and eyes,

Order of Job Automation: Cognitive vs. Physical

00:21:49
Speaker
that were really difficult to grasp. And I think the way to understand the paradox is that evolution has basically endowed us with a lot of computational hardware that is particularly well suited for performing those specific operations, for walking, for manipulating things with our fingers. We have dedicated brain parts that allow us to
00:22:23
Speaker
perceive our visual environment, that allow us to steer our fingers in minute ways, and so on. And we should also not underestimate how much of this we learn as we grow up. We spend the first years of our lives basically perfecting those kinds of skills. It takes us a year to learn how to walk, and I guess even then we are still toddlers. When we are adults, we take it for granted that it is easy to walk, but it is something that actually took us a long time to learn, and by extension it is also something that takes a long time to learn for robots.
00:23:02
Speaker
Yeah. And so this leads to interesting conclusions if we want to guess which jobs could be automated, and in which order. So for example, if we compare something like being a professor of mathematics to being a dance instructor, Moravec's paradox would suggest that the job of a professor of mathematics might be automatable before you could automate the job of a dance instructor. I guess the point here is that we cannot easily compare complexity between humans and machines, but of course we have a strict measure of complexity for machines, which is computational complexity, and that's pretty useful to have, I think.
00:23:47
Speaker
Yeah, I think we cannot compare it one for one. But I think it is still fair to say, and in some ways we are learning more and more about this as we automate more, because we are finding out how much compute it takes for a machine to perform certain human capabilities, and then we are making it more efficient to perform those capabilities. So if you have, let's say, a brain that was devised by evolution to survive in, I don't know, the savanna, then that brain is not necessarily very adept at performing math. Whereas from a pure information-theoretic perspective, performing, let's say, arithmetic,
00:24:38
Speaker
for example, is something that takes very little computation. So in some ways you can say the human brain has a disadvantage at certain computations, like for example performing mathematical operations, and it is highly optimized for others. Now, with machines, we have found that we can optimize them for whatever we need them to be optimized for. In the 2010s, we trained vision systems and made them more and more efficient at performing vision tasks, like what the part of our brain that interprets the nerve signals coming in from our eyes does.
00:25:22
Speaker
In the 2020s, language models took off, which also, I guess, perform one function that our brains are pretty good at, although probably in a slightly different way, and not in as nuanced a way as our brains perform it. So with machines, we can fine-tune them and make them particularly efficient at whatever specific task we need them to be efficient at. With our brains, we can't do that, because evolution has created them the way that they are.
00:25:57
Speaker
There's the evolutionary part, and there's also our life histories. It might not always be easy, say, if you're in your 60s, to just switch to an entirely new career. So the human brain isn't as, you could say, fungible as our computers are, in that we can't just easily move from job to job. The billion-dollar question is: to what extent can we switch careers? To what extent can people move into the remaining jobs that are not yet automated?

White-Collar Redundancy in Cognitive Automation

00:26:26
Speaker
Yeah, that's going to be a very important question in the upcoming years. As you said before, maybe doing advanced math will be automated before dance instructors are automated. And then the big question is: will all those math professors, or we economics professors, we aren't that different from the math professors, will we all suddenly become dance instructors? And I
00:26:55
Speaker
guess I would bet probably not, for a whole range of reasons. But as you say, part of the reason is that we humans spend many years, even decades, accumulating human capital for a specific career. I went to primary school, middle school, high school, college, then I did a PhD. I spent a good 20 years accumulating human capital to become an economics professor.
00:27:28
Speaker
If you want to become a good dance instructor, you certainly can't do that overnight either. You probably want to start out pretty young and become a good dancer, and that will also take decades or longer. And that means it's kind of hard to switch between those jobs that take a long time to train for. And if we really see rapid automation in the next few years, if we see, for example, machines that can perform all the purely cognitive tasks, all the things that we just need our brains for without moving our bodies, then there's going to be a whole class of white-collar workers who are in some ways redundant.
00:28:15
Speaker
You write about this issue in a way where you talk mostly about switching from human labor to machine labor. You also write about what happens if you introduce fixed factors into your models.

Scarce Resources in an Automated Economy

00:28:29
Speaker
These are factors like land, for example. How do these fixed factors affect wages? Yeah, before, we were talking about this race between automating labor and accumulating capital, right? And the important observation there was that if we can perform lots and lots of things really cheaply using the abundant machines that we have,
00:28:53
Speaker
then the labor, which is relatively scarce, will earn large returns based on that. In some ways, that was the story of the past 200 years. We had more and more machines that became ever cheaper, and we humans were the bottleneck. Everything we produced ultimately required us either to press a button for the machines or to perform higher-level complex reasoning, and so on. So for the past 200 years, it was a good simplified description of the economy to focus only on labor and capital. But the question is: if the role of labor really declines, are those still going to be the two most relevant factors? And I would bet probably not. I would assume
00:29:44
Speaker
that we will find out that there are other factors in scarce supply, and maybe for some time that's going to be minerals or rare earths or energy, which are going to become really important bottlenecks in this process of letting more and more be produced by machines.

AI and the Process of Innovation

00:30:05
Speaker
If you are a bottleneck in the production process, if you are a scarce factor, then you get the returns that the economy generates. So for the past 200 years, we humans have been the scarce factor. We've been the bottleneck. And we have reaped ample returns from that, and our wages have risen so much. But if, let's say, in an AGI-powered future, energy is the bottleneck,
00:30:34
Speaker
or minerals or who knows what, then those scarce factors, those bottleneck factors, are going to gobble up a significant part of the returns to economic growth. You also write about the automation of innovation itself. As I see things, this is the crowning achievement of human productivity. In a simple model, you could say that we do innovation, we create new technology, and we implement that technology, and so we get increases in the productivity of our workers. But if AI can begin encroaching on that innovation, meaning that AI can begin innovating itself, perhaps even
00:31:16
Speaker
improving AI itself, so you have AI-driven AI research, what happens then? Is this one of these things where things can move fast and suddenly move in an extreme direction? Yeah, this is what von Neumann called the singularity, right? Or what I. J. Good called an intelligence explosion. And it's also one of those things that are so hard to tangibly imagine. But in some ways, I think we can really see the writing on the wall already: it is possible for that to happen, maybe later this decade or early next. And the way that we economists have always thought about the economy is, as you say, that there are the factors of production, labor and capital, and then there is our level of technology. And economists always thought that
00:32:13
Speaker
improving our technology is really the critical part of making us grow, of becoming a wealthier society. And we thought of the activities that improve our technology as science and innovation, and also as rolling out those innovations throughout the economy, meaning technological diffusion. Now, if it turns out that AI can do all of these things, then technological progress is bound to accelerate. It's bound to happen much faster, because if, let's say, our biological minds are no longer the bottleneck for improving science, we may see
00:33:03
Speaker
a whole bunch of breakthroughs that look like low-hanging fruit to the AI minds, but that are just beyond our capacity to reach. And so I think it's very plausible, if we have these human-level and even just slightly superhuman-level AIs, that we will see a takeoff in technological progress. Now, would I view this as a singularity? I think as an economist,
00:33:33
Speaker
there's always some scarcity. And I think we will see a takeoff. And then at some level, the machines may also run into problems that are hard for them, even though they are superhuman. And that means after an initial takeoff, maybe things are going to level off at a higher rate of growth than what we are currently experiencing.

Energy and Regulation Constraints on AI

00:33:58
Speaker
And I would find that plausible. Maybe there are going to be several waves of that: takeoffs and leveling-offs, then takeoffs again when you make another breakthrough, and so on. I think there are many possible scenarios. Ultimately, we also believe, according to our best understanding right now, that the universe, and especially our event horizon, is finite. So growth can't go on forever. It will have to end at some point.
00:34:32
Speaker
There are ultimate physical limits to how much growth we can get, but we are nowhere near those limits right now. When I think of an intelligence explosion, I often think of the bottlenecks that could prevent the feedback loop from getting going. So I imagine a takeoff scenario in which AI begins to improve AI itself, and those gains then help the next cycle in the feedback loop. But there are just so many things that have to go right for that to work. I'm thinking, for example, you mentioned energy as a constraint. It could also be regulations, in terms of how these systems are allowed to be implemented, whether they're allowed to improve themselves.
00:35:14
Speaker
There are many things that have to go right before AI can improve itself.

Economists' Skepticism About AI Growth

00:35:18
Speaker
But it's interesting to me to hear from you, an economist, that this is actually somewhat plausible, because it's often physicists and computer scientists who buy this intelligence explosion idea, and I often get pushback from economists. So perhaps you could lay out why other economists might be more skeptical, and why you might find this idea plausible. Yeah, I think there are two reasons. The first one is, you'll not be surprised to hear, that among a lot of economists, the possibility that AI may reach human levels of intelligence is still something that's more science fiction than a plausible real-world scenario.
00:36:05
Speaker
So there are some fundamental beliefs that people have against such a scenario. But then the second question, and that's where it gets really interesting to me as an economist, is: imagine we do have this kind of intelligence takeoff, I want to call it, not singularity. What would that actually imply for economic growth in the way that we classically think about it? And I can see the following possibility, which in some ways actually seems like a pretty plausible scenario to me. I can see that the machines take off and become much more advanced really quite quickly, and that our human world does not change as much as, let's say, the machine world advances.
00:37:00
Speaker
And I would almost say we humans, we don't want our world to be turned upside down in a very short amount of time. I think a very fast takeoff that changes the face of the earth would probably be a misaligned takeoff. It would not be in human interests. Humans want things to evolve more slowly at a pace that they can at least somewhat wrap their heads around. And I think if we do see an intelligence explosion that is somewhat aligned with human interests, then we will see lots of things that will become a lot better, let's say, in medicine, and let's say, in the world of work. But we will probably also want to keep some things pretty similar to the way they are right now.
00:37:55
Speaker
And then ultimately the question is a question of measurement. How would we value and count the advances that we are experiencing? Would that be reflected in something like the human concept of GDP or would it not?

Machine Economy vs. Human Economy

00:38:14
Speaker
So in one of my papers a couple of years ago, I tried to go down the rabbit hole of this measurement question. And I realized that the way we currently measure GDP would not really be able to do justice to an intelligence explosion, and that if you were to measure GDP from the perspective of the machines, you would come to quite different numbers. And why is that? Why doesn't GDP accurately capture the growth that the machines are creating? And why, from the machine perspective, yeah, why is that?
00:38:52
Speaker
So the way that we define GDP is that it is the value of goods and services for final consumption by humans, plus the value of accumulated capital that has a lifespan of more than one year, plus, of course, government spending and net exports, but let's keep those two out for the moment. Now, it turns out a lot of the things that the AI would, quote unquote, consume would not count as capital under our conception of GDP.
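Written as the standard expenditure identity that this verbal definition tracks:

$$\mathrm{GDP} = C + I + G + (X - M),$$

where $C$ is final consumption by humans, $I$ is investment in capital lasting more than one year, $G$ is government spending, and $X - M$ is net exports. The measurement gap being described is that machine "consumption" appears in none of these buckets unless it happens to qualify as $I$.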
00:39:30
Speaker
And that means they would not show up in our GDP, but they would show up if we wanted to define the equivalent of a machine GDP. So let's make it really tangible. This is, of course, at some level a ridiculous scenario, but let's say you have a billion robots that are happily operating and living lives like you and I. And they are not producing a lot of human goods, because they are living happily for each other. That means they wouldn't show up in human GDP. Yet if you asked them, they would say: well, of course we are quite wealthy, and we have a lot of resources, and our output is growing very fast.
00:40:19
Speaker
And you can almost think of it as two separate worlds, two separate islands that are growing at very different rates. You could have this kind of AI island growing at double-digit rates, and at the same time the human economy may just get a tiny little boost from all that growth, without being that radically changed. How could that be the case in the same world? Are you measuring world GDP in both scenarios? And if so, wouldn't the fact that the robot world is experiencing double-digit economic growth affect the human world more? For example, if they begin consuming lots of energy, if they begin creating large chip factories and all of this, wouldn't this affect the human world more?
00:41:09
Speaker
Yeah, so let's push that thought experiment a little bit, because it's a useful way of thinking about it, not because it's a very realistic experiment. Ultimately, human GDP revolves around human consumption, and what I would call the machine GDP would revolve around machine consumption. And if the machines consume a lot of energy and we don't get all that much in return, that would actually subtract from our GDP, because we have less left to consume. Now, I hope it won't be that bad. I hope we will get some benefits. Let's say we will get a lot better health care, a lot better entertainment, and so on and so forth. And that would show up in our
00:41:56
Speaker
GDP. But there are also going to be some aspects of our GDP that won't be affected all that much.

AI Alignment and Economic Benefits

00:42:04
Speaker
Like, let's say, I'm looking around at my house right now: the construction of real estate is a really big chunk of GDP. I don't know how much that's going to be affected by a human-level or superhuman-level AI in the near future. And there are several buckets of GDP that I think will not be fundamentally revolutionized, in part because people don't want things to change too quickly. And so that's the reason why robot activity and robot GDP don't affect human GDP that much in this thought experiment.
00:42:48
Speaker
Yeah, let's make the thought experiment slightly less stark than a robot island versus a human island. Yeah. So more broadly, I do think that there's going to be a significant boost to human GDP as well. The extent of that boost depends on how well aligned the AIs are going to be, and in particular on what the distribution of income is going to look like. So let's say a lot of humans become impoverished because machines can do their jobs at a much cheaper rate. Then
00:43:25
Speaker
human GDP may actually decline, even though the machines are taking off at the same time. If we live in a utopian world where we manage to share the gains from technological progress very broadly, then every human's consumption is going to go up, and the human GDP would also grow much more significantly. But I would still think it won't grow quite as fast as the machine GDP.

Lump of Labor Fallacy in AI Context

00:43:56
Speaker
Yeah, makes sense. Okay, Anton, let's take a break, and when we come back, we can talk about this kind of intermediate situation and what happens to human wages in the run-up to an AI takeoff. What is the lump of labor fallacy? And how does it relate to thinking about automation and the future of human wages?
00:44:19
Speaker
Yeah, the lump of labor fallacy was something that I think people first got confused about at the beginning of the Industrial Revolution, and it's closely associated with the thinking of the Luddites. So the lump of labor idea is essentially that there is a fixed amount of labor in the economy, and if we automate one chunk of it, then the jobs in that chunk will be missing, and there will permanently be unemployment because we have engaged in some automation. And the reason why it's false is, of course, that our economies are highly adaptable systems. When we automate something, then, and this is the big if that I think we didn't have to mention in the past two centuries, but we do have to mention looking forward, if there's still something that only humans can do,
00:45:18
Speaker
then humans will switch into that, and ultimately the economy will generate full employment, or something close to full employment, again. And why is it that whenever you talk out loud and worry about AI automation and the risks associated with humans losing jobs, why isn't it a knockdown argument to just say: well, that commits the lump of labor fallacy? What might be different with AI? You can say that, yeah. So the first thing is, economists
00:45:54
Speaker
don't necessarily think about the labor market as the number of jobs; we think about it as an equilibrium that consists of how many jobs there are and what wages the workers are being paid. And so my first concern is actually not so much that we will see lots and lots of unemployment. We may see that too, if AI automates things really quickly. But my first concern is that wages will face a lot of downward pressure, and that means a lot of workers will be made worse off, and that can lead to social turmoil. So that's the first point. And then the second point, which builds on something we have discussed earlier: if a machine can perform a worker's job,
00:46:48
Speaker
then the worker's wage is capped at the cost of the machine in a competitive market. So if a machine does it cheaper, your wage has to go down, or you'll have to stop doing it. In the past, it was very easy to identify new things that these workers could switch into. But if AI comes closer and closer to AGI, closer and closer to the level where it can do everything,

Human Competition with AI in Labor Market

00:47:17
Speaker
and, I should add, if we also have corresponding advances in robotics, then there may just be nothing to switch into. There may be nothing else left.
00:47:29
Speaker
And that means we either have lots of workers with really low wages, or workers who say: well, those wages are so ridiculously low, it's not worth it anymore. Or, in the worst case, those wages are too low for me to survive. And that would be a really big problem. Could humans still compete with AI systems even if humans are worse at a given task, and then perhaps just receive a lower wage? So say that an AI system can write a paper in 50 minutes and I can write a paper in 50 days or 500 days. Will I still be able to compete with the AI system and just get a lower wage, but still be able to earn some wage?
00:48:13
Speaker
With physical tasks, this is plausible to some extent. So let's say the machine can cut and sew shirts at a cost of $10, and it takes you three hours to cut and sew a shirt. Then you would earn only $10 every three hours, because if you compete with the machine, the competitive market would say you should have the same earnings for the same product. Now, if the machine gets twice as efficient every two years, which would be in line with Moore's law, then two years later you'll get only $5 for the same shirt, and then $2.50.
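The shirt arithmetic compounds geometrically: with machine costs halving every two years, the competitive piece rate after $t$ years is

$$p(t) = \$10 \cdot 2^{-t/2}, \qquad p(0) = \$10,\; p(2) = \$5,\; p(4) = \$2.50,\; p(10) \approx \$0.31.$$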
00:49:04
Speaker
And that's where you see the problem may arise. So right now, there are lots of tasks that machines could in principle perform, but they're just too expensive to be cost-competitive, and that's why they're done by humans. What would be an example of such a task in today's economy? Yeah, in some ways, in every complex production process, you have this mix of human tasks and robot tasks.

AI Cost Efficiency in Cognitive Tasks

00:49:27
Speaker
So let's say you go into a factory. There are lots of things being done by robots, by industrial robots. But then, for example, a human ferries a big pile of product from one machine to the next one. And in principle, you could design a machine for that. It just won't be cost-competitive. What about in more white-collar jobs? What would be an example of something that could perhaps be automated at a certain cost, but right now is too expensive? Yeah, in the cognitive sphere,
00:49:59
Speaker
it's much more challenging to think about that. And I think that's one of the reasons we have suddenly had these large language models emerge: they can do whatever they are doing at a cost that is ridiculously low compared to what the equivalent human cost would be. So let's say when it comes to writing an essay, and we understand that right now machines can't write an essay of the same quality as a subject matter expert. But take some essay for a clickbait website or something like that, which they can do perfectly well. It would take a human maybe, I don't know, two hours. It takes a system like GPT-4o maybe 20 seconds, or
00:50:51
Speaker
let's make it a little bit longer, half a minute, and the token costs associated with it will come out at somewhere less than a dollar. And for the human cost of spending two or three hours on that same essay, traditionally speaking, we would expect them to be paid more than 50 bucks, probably. So there is a factor of a hundred between those two costs. And that makes it very hard to imagine a human saying: well, all right, from now on I will write my essays and earn only 50 cents for each one of them. And by the way, next year it will not be 50 cents, it will be 12 cents, and you get the idea.
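In the round numbers of this example, the gap is

$$\frac{\text{human cost}}{\text{machine cost}} \approx \frac{\$50}{\$0.50} = 100,$$

and the machine side of the ratio keeps falling as token costs decline, which is why matching the machine's price quickly stops being worth the human's time.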

Human Preference in Goods and Services

00:51:39
Speaker
yeah yeah Could it be the case that there are some aspects of the human economy ah where we simply prefer ah buying goods and services made by humans, perhaps because we have some form of attachment to something made by other humans, Perhaps we feel nostalgic, perhaps we have an ethical commitment to buying human products. How plausible is it that factors like that will play a role in a kind of large scale economic transformation? Yeah, those reasons that are all very plausible. And I'll add a fourth one, perhaps, which is maybe we humans want to be in control of certain processes for alignment reasons. yeah So maybe our our future is that we are all going to be
00:52:27
Speaker
in alignment jobs, or jobs of the sort that you described. So I think there's going to be a distribution of preferences among people, and what people ultimately go for in their demand will very much depend on the quality at which an AI can perform certain things, and on the relative cost. So let's say with the clickbait essays, there's no question that we don't really care about having a human clickbait writer. We don't really want clickbait that much anyway. Let's take another example. Let's take doctors. I can imagine a future where
00:53:12
Speaker
AI performs a lot of the tasks that human doctors perform equally well or better, but some humans are still going to say: well, I want to see the human doctor. There's probably going to be some sorting by age. I think it's plausible that older folks are likely to stick with human providers much longer in examples like that. Ultimately, if the AI just becomes really significantly better, more and more humans are going to say: well, ultimately this is medicine, my life is on the line, and I'll just go with the best system available. Or, alternatively, there are also going to be cost pressures: the machine doctor is equally good, but costs just a tenth of the human doctor.
00:54:04
Speaker
And a lot of people will move to the machine service provider for that kind of reason. You don't think that these preferences, say a nostalgic preference for humans, or some form of emotional, ethically based preference for humans, you don't think such preferences can withstand, say, a 10x differential in cost, where the machine doctor or the machine lawyer costs only 10% of the human doctor or lawyer? Yeah, it's going to depend a lot on the context. For a lot of humans, especially if they have the requisite income, I can imagine that even a 10x difference in the medical sector will still leave some demand for human medical professionals. In competitive fields, like for example law, where it really matters if you have the slightly better argument,

Ethical Constraints in Automation

00:55:00
Speaker
I think the competitive pressures are going to erode human jobs much more quickly.
00:55:06
Speaker
On the other hand, also in law, I think it is very plausible that there will be ethical constraints on automating roles like, for example, judges or lawmakers. They are all going to use AI as an aid, but I think we will probably want humans to be in the loop in those kinds of functions for a lot longer, even though from a pure intellectual perspective they are no longer that necessary. You have a paper in which you imagine that human labor becomes economically redundant, by which you mean that humans can no longer earn enough in wages to sustain themselves. And then you engage in informed speculation about what would happen in such a situation. Why do you worry about political instability resulting from that situation?
00:56:04
Speaker
Yeah, so there's the good and the bad if that were to really happen. The good is that the greatest bottleneck in our economy right now, which is the availability of labor, would suddenly be lifted, and our economies could just grow significantly faster. But the bad is that labor is not only a factor of production. It is also what most of us receive our income from, and what most of us receive a significant part of our meaning and life satisfaction from.

UBI and the Future of Work

00:56:44
Speaker
And so from a pure production perspective, it would be good if we can automate labor and we can produce a lot more. But if we undermine people's incomes and if we undermine people's
00:57:00
Speaker
meaning, that would create really fundamental challenges for our societies, challenges that honestly I am very concerned about, challenges that I think may result in significant political turmoil. And do you think something like universal basic income is a possible solution here? If you're worried about people not being able to sustain themselves, it seems like a UBI would be a good solution, and perhaps we can discuss that. But if you're worried about a loss of meaning, it doesn't seem like a UBI would be able to replace the meaning lost by not having a job.
00:57:41
Speaker
Yeah, that's exactly right. So from a pure income perspective, a UBI could fix the problem, so to say. And even so, we have to remember it would still be politically very difficult to implement, because we would ultimately be speaking about very large-scale transfers. And throughout our history, we have never done something as ambitious as a UBI. So given our current political environment, let's say, I'm worried that we may not be able to pull it off. But of course, the alternative is even worse. Now, when it comes to the meaning challenge, you're absolutely right: a UBI would not address it in any way. Now, the question is:
00:58:31
Speaker
would it still give people meaning if they were to work, knowing that a machine can do the same work much cheaper, much faster, and much better? So if I put myself in that situation and I know, let's say, a machine can write much better economics papers, can devise better lectures, can teach better than I can, would I still want to do that? Would I derive meaning and life satisfaction out of that? And honestly, my gut reaction is no, I would not. I have the same gut reaction, by the way. Yeah. Now, the other question is: should we fool people into thinking that they can do something better than the machines, even though it's not actually true? And I'm not very comfortable with that notion either,
00:59:25
Speaker
even though it would solve the meaning challenge, and it may solve the income challenge; it leaves me with a bad taste as well. So, in other words, there don't seem to be any super appealing, easy solutions on the table, aside from the fact that it's going to be a very heavy lift to ensure that we can redistribute enough income to make sure that people are not worse off from artificial general intelligence. So what would be my best bet? Because I do believe there is a very high likelihood that this is all coming at us, and that it is coming probably sooner than most people realize. I should say first: there are all these what I call nostalgic jobs in the paper, and there are lots of areas of the economy that won't be immediately affected. For example, if you are a government worker,
01:00:23
Speaker
even if a machine can do your job better, it doesn't mean that you will be displaced tomorrow. On the other hand, if you work in a highly competitive sector, let's say you are a consultant at one of the leading consultancies of the world, and the machine can do your job better, then you may be displaced a lot faster. So what is our best bet? Our best bet is that there will be a transition that takes some time, one that gives us a little bit of a runway to devise solutions for the income distribution problem. In some of my work, I proposed something that I call a seed UBI,
01:01:04
Speaker
which is basically the idea that we should introduce a small UBI as soon as possible, because it will take significant time to set up the infrastructure, especially in countries like the US, where we have never done something like that at the national level. My estimate is that it will take probably about two years just to set up the infrastructure for a UBI. And it can be really small as long as we don't see major disruption. But then it should be set on an autopilot in case significant parts of the economy get disrupted: if, let's say, the share of labor in total GDP declines significantly,
01:01:51
Speaker
the UBI should automatically ramp up to compensate workers for that. Does the economic case for UBI depend on strong economic growth? So does it depend on an economy that's much more productive and much larger than the one we have today? I'm asking because I think in the current economy, a UBI would be very expensive to implement. So is it the case that we would have to see strong economic growth for a UBI to make economic sense? Yeah, so to pay out a UBI at a reasonable level, we would need our economy to grow. For the kind of seed UBI that I'm proposing, you know, I'm not suggesting anything very ambitious.
01:02:41
Speaker
I just want the infrastructure in place. It could be 10 bucks a month, just to have the system operating. And then if job displacement on a massive scale happens, we have it, and it could hopefully scale up, because growth would also take off. And I think the good message about these kinds of AGI scenarios is that the growth takeoff would always come hand in hand with the labor displacement. So we only need to compensate workers in scenarios where growth would also be significantly higher. And I think that would make it affordable to pay out the UBI that makes sure that workers are not worse off materially.
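A sketch of what such an autopilot could look like, purely as a hypothetical illustration: the trigger variable (the labor share of GDP), the baseline share, and all payment levels below are assumptions made for the example, not figures from Korinek's proposal.

```python
# Hypothetical "seed UBI" autopilot: a token payment that scales up
# automatically as the labor share of GDP erodes. All numbers are
# illustrative assumptions, not figures from Korinek's proposal.

def seed_ubi_monthly(labor_share: float,
                     baseline_share: float = 0.60,  # assumed pre-AI labor share
                     seed_payment: float = 10.0,    # "10 bucks a month"
                     full_payment: float = 1500.0) -> float:
    """Monthly UBI per adult, as a function of the labor share of GDP."""
    if labor_share >= baseline_share:
        return seed_payment  # no disruption yet: just keep the system running
    # Ramp linearly toward the full payment as labor's share of GDP erodes.
    erosion = (baseline_share - labor_share) / baseline_share
    return seed_payment + erosion * (full_payment - seed_payment)

print(seed_ubi_monthly(0.60))  # 10.0   (seed level while labor holds up)
print(seed_ubi_monthly(0.30))  # 755.0  (half of labor's share displaced)
print(seed_ubi_monthly(0.00))  # 1500.0 (labor fully displaced)
```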
01:03:30
Speaker
And it still leaves, as you observed, the meaning challenge. Why a UBI in the first place? Why not give the money, for example, only to the people who have lost their jobs? So what is the advantage of a UBI over something more like a conventional means-tested benefits program? Yeah, that's a very fair question. And I think at some level, we probably want a little bit of both. So the way that, for example, our systems of unemployment insurance work around the world is that if you lose your job and can't find an equivalent one, which is very likely in these kinds of AGI scenarios, because, let's say, if AI automates all cognitive labor and you used to be a cognitive worker, then
01:04:25
Speaker
everybody at the same time is going to be looking for something. Then the unemployment insurance replaces a fraction of your income, which depends on whether you're in a more generous or less generous welfare state, for a limited amount of time. And that limited amount of time traditionally was meant to allow you to look for something new, to potentially re-skill a little bit, and so on and so forth. So first, that will be very useful as a first safety net. However, if one of these AGI scenarios materializes, that limited amount of time is going to be over at some point.
01:05:08
Speaker
And two things are going to happen at that point. First, people will still need some sort of support. And again, depending on which welfare system you live in and how generous it is, that could be a little bit more, or it could be very bare-bones. And secondly, a lot of the inequality that we currently experience in our world would suddenly have to dissipate. So let's say you are a highly paid software engineer. If you are in the right area, you can easily bring in half a million or a million a year these days. Your unemployment insurance is going to be a fraction of that. But let's say
01:05:55
Speaker
once that's over, there is really no ethical reason to pay that software engineer significantly more than, let's say, a taxi driver who gets displaced from their job. There's going to be a lot of leveling. We will have to accept that, to use a slightly more pejorative term, we're all going to be equally useless in the labor market. And the kind of policy instrument that would embody this notion that we are all equal, that we are all universally the same, would be a UBI.

Taxation in an AI-Driven Economy

01:06:35
Speaker
How would taxation have to change in this scenario, if a lot of people don't have jobs anymore and our
01:06:44
Speaker
traditional taxation system depends on taxing income from labor? Well, I guess we're in a situation in which we're losing tax revenue and we want to give a whole bunch more money away in a UBI. So what should we be taxing in a situation in which we have increasingly powerful machine intelligence? Yeah, I think you're absolutely right. The gap between how much we bring in and how high our spending needs are is going to grow, because right now most taxation ultimately derives from labor. So I guess, in a nutshell, taxation by necessity will have to be based more and more on capital. And what's the capital
01:07:33
Speaker
that is going to be most relevant in an AGI-powered world? For cognitive and physical tasks respectively, that's compute and robots. So if we tax capital in an AGI-powered world, that's essentially a compute tax and a robot tax. The positive part is that if our economies grow really fast, and if compute and robot capabilities grow really fast, those tax rates won't necessarily have to be very high to afford what we want to spend. But we will still need to have them. And I think it is true that taxes on these two resources are going to impose some distortion. They're going to make the economies grow marginally less fast.
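As a rough illustration of why fast growth keeps the required rates low, consider the following sketch. All the magnitudes, the capital shares, the GDP multiple, and the spending need, are assumptions made up for the example, not forecasts.

```python
# Rough illustration: the flat tax rate on compute and robot capital needed
# to fund a given spending level. All magnitudes are assumed for the example.

def required_tax_rate(spending: float, capital_income: float) -> float:
    """Flat rate on capital income that exactly covers the spending need."""
    return spending / capital_income

gdp_today = 100.0   # normalize today's GDP to 100
spending = 10.0     # spending need: 10% of today's GDP

# Today-ish economy: capital earns ~30% of GDP, so the required rate is steep.
print(f"today:                {required_tax_rate(spending, 0.30 * gdp_today):.0%}")

# AGI scenario: GDP is 5x higher and 90% of income accrues to compute and
# robots (both assumed). The same absolute spending needs only a tiny rate.
gdp_agi = 5 * gdp_today
print(f"AGI, same spending:   {required_tax_rate(spending, 0.90 * gdp_agi):.1%}")

# Even if spending scales up with GDP, the required rate stays moderate.
print(f"AGI, scaled spending: {required_tax_rate(0.10 * gdp_agi, 0.90 * gdp_agi):.1%}")
```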
01:08:25
Speaker
And there's going to be a lot of contention about that. And let's say the AIs, because at that point humans are not going to be as relevant, are going to argue: well, we can't pay such high taxes because we're going to fall behind, we're going to fall behind in national and international competition, and we really should skimp a bit more on how much we pay to those humans.

AI Market Structures and Scaling Laws

01:08:53
Speaker
So that may be one of the big distributive conflicts going forward. But at the same time, I would view this as a question of alignment. If we want those AIs to be aligned, part of that alignment is to take care of the basic material needs of humans, and ideally a little bit more than just the basic needs. We want those humans, including you and me, to live well, to have a decent livelihood, and to enjoy some of the fruits of the technological progress. Yeah, I agree. We should talk about your paper called Concentrating Intelligence: Scaling Laws and Market Structure in Generative AI. So what is market structure, and what is the market structure of generative AI foundation models specifically?
01:09:45
Speaker
Yeah, Concentrating Intelligence. That was a title proposed by Claude. I think it's a very nice pun, because the markets in which these companies are operating are becoming more and more concentrated. What is market structure? Market structure basically looks at how many players there are in a given market, how they are related to each other, whether they are, for example, vertically integrated between suppliers and producers, and how much competition there is between the players.
01:10:21
Speaker
And as AI is playing a growing role in our economies, the market structure of that market is also going to become more and more important. And I should say it's not only the market structure for the AI systems themselves, but also the market structure for compute, for chip production. And that's really where we see the biggest monopolies right now. What's the role of economies of scale and scope in this market structure? And perhaps you can explain those concepts. Yeah, so economies of scale means that as you produce more, your unit cost of production goes down. In a lot of
01:11:08
Speaker
economic areas, that's not the case. For example, if you are a hairdresser and you want to cut 10 heads, it's going to take 10 times as long as cutting just one head. On the other hand, in the context of AI, we face these massive training costs. And then the more you operate the AI systems, the more inference you engage in, the more you can amortize that massive training cost over a larger number of output tokens, so to say. And that means the cheaper per unit the all-in cost of operating the AI becomes. So in the context
01:11:58
Speaker
of our current foundation models, people are estimating that the cutting-edge systems are soon going to be in the range of billions of dollars. And that means only very few players will be able to afford to participate in that market. So there's going to be a small number of players, the market will be very concentrated, and those players therefore have some market power, which allows them to charge more than they would be able to charge in a situation of perfect competition. And economies of scope here would mean that after you've trained a large foundation model, you can serve that model to billions of people in the best-case scenario, let's say, and you benefit economically from that.
01:12:53
Speaker
Yes. So economies of scope means that there are essentially positive spillovers between producing things in multiple different markets. Let's say you want to offer haircuts and cleaning services: there are not a lot of positive spillovers between those two. And that means if you are a company that offers both, you won't be able to do those two things more cheaply, and you won't be able to undercut rivals because of that. On the other hand, there are some economic areas where there are very strong economies of scope. So let's say you create a language model that acts as a call center agent,
01:13:45
Speaker
and it turns out the best way of doing that is actually to create a very general system. That system may also be really good at providing you with medical diagnoses. So it's two separate markets, but you can spread your cost of producing the system over both markets. Or, I guess in the case of generative AI, especially as we come closer to AGI, you can spread the cost of serving all these different markets across the whole system. I see. So there's economic pressure to build a general model that can work in many different sectors. And you only have to train that model once. So you only incur the training costs once, but then you can deploy it in a lot of different sectors. And that's economies of scope. That's exactly right. Yeah. And that means you can serve multiple different markets at a lower cost.
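A minimal numerical sketch of both forces, with made-up round numbers for the training and inference costs, might look like this:

```python
# Toy model of economies of scale and scope in foundation models.
# All cost figures are made-up round numbers for illustration.

TRAIN_COST = 1e9        # one-off training cost: $1B (assumed)
INFERENCE_COST = 0.001  # marginal cost per query in dollars (assumed)

def cost_per_query(queries: float) -> float:
    """All-in unit cost: training cost amortized over total query volume."""
    return TRAIN_COST / queries + INFERENCE_COST

# Economies of scale: the amortized unit cost falls as volume grows.
for q in (1e6, 1e9, 1e12):
    print(f"{q:.0e} queries -> ${cost_per_query(q):.4f} per query")

# Economies of scope: one general model serving two markets spreads the
# fixed cost once; two specialized models pay it twice.
call_center_q, medical_q = 4e11, 1e11
general = TRAIN_COST + INFERENCE_COST * (call_center_q + medical_q)
specialized = 2 * TRAIN_COST + INFERENCE_COST * (call_center_q + medical_q)
print(f"general model total cost: ${general:,.0f}")
print(f"two specialized models:   ${specialized:,.0f}")
```

Scale shows up in the loop, where the unit cost collapses as volume grows; scope shows up in the comparison at the end, where one general model pays the fixed training cost once while two specialized models pay it twice.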

AI Development and Organizational Structures

01:14:37
Speaker
And you can see how that force is very strong in the context of generative AI and of foundation models, right? Yeah, there's pressure to create more general models. And I guess there's a debate around whether the future, or the approach that makes the most economic sense, is to have one general model and use that model for everything, or to have a bunch of specialized models. I've heard some arguments that specialized models would be able to have lower costs, because you wouldn't have to use this giant model to do very simple tasks. I don't know what you think of that. What is the overall market pressure? Would you say that the overall market pressure is in the direction of more and more general models?
01:15:27
Speaker
Part of that discussion is highly technical, so I'm not really qualified to speak to the technical parts, but I'll add what an economist would think about it. As economists, we would say everything should be produced as cheaply as possible, right? And that would suggest something like a complex human hierarchical organization: you would have lots of things that can be decided by workers at the bottom of the hierarchy, then some things need to be escalated to the managers, and then ultimately the toughest decisions will be escalated to the CEO. And I could imagine
01:16:11
Speaker
that we will soon have AI systems that operate in a similar way: simple functions are farmed out to simple and cheap sub-modules, and then some things are going to be escalated to much more compute-intensive and more expensive, let's say, general intelligences that would deal with those more complicated problems. Yeah, you mentioned vertical integration earlier. Perhaps you could explain that concept and then explain whether it's an advantage for AI companies to be more vertically integrated. Yeah, so vertical integration is when a company performs multiple steps of a production process in-house. So to make it tangible, here is what the
01:17:02
Speaker
AI value chain looks like right now. You first have the compute value chain. You have companies like ASML in the Netherlands that produce the machines to make chips. Then you have TSMC in Taiwan, which actually produces the chips based on specifications from Nvidia, which is really good at chip design. And then you have AI companies like OpenAI or Anthropic that buy or rent the chips and actually run their AI systems on them. So vertical integration would be if a company
01:17:43
Speaker
basically encompasses multiple steps within that chain. So for example, Google DeepMind designs its own chips and then also uses them to create AI systems. That would be vertical integration. There's one more step to this, which is that there are the foundation model companies like, let's say, OpenAI and Anthropic, and then there are office suites like Microsoft Office, or systems like Gmail, that employ generative AI systems as increasingly woven-in parts of their applications. So you could also view those as separate steps of the production process. And again, in the case of Google, they would be vertically integrated.
01:18:32
Speaker
In the case of OpenAI and Microsoft, you can still argue those are two separate organizations that perform these steps separately. Yeah. So in your paper, you describe it as upstream vertical integration, meaning integration with the suppliers of inputs such as chips, data, or energy, and then downstream vertical integration, which would be integration with distribution, so,

Vertical Integration in AI Market

01:18:58
Speaker
as you mentioned, Gmail or other Google products that get the AI features into the hands of consumers.
01:19:06
Speaker
Yeah, so you were also asking about the benefits and disadvantages of vertical integration. And I would say the main benefit is that if you have integration, then you don't have multiple steps of the production process that each charge their own profit margin. Instead, you have one player that decides what is the best profit margin to charge overall, and that reduces monopoly distortions. But the main downside is that you'll have less competition in such a market. So, for example, let's take an extreme case. Imagine Nvidia owned OpenAI.
01:19:49
Speaker
And then it used all its chips only for OpenAI, and nobody else had the chance to train AI systems that are anywhere near as capable. You would have much less competition in such a market. And as a result, all the benefits from competition, which are not only pressure to lower prices but also the innovation that comes from competition, would be much lower. So is there a tension here between what's beneficial for a company, which would be to be maximally vertically integrated, and what would be good for society or the consumers, which would be to have competition in the market? Yeah, this tension is very much there. From the perspective of consumers, we don't want monopolists, because they will charge excessive prices and they tend to slow down innovation.
01:20:50
Speaker
And from the perspective of companies, being a monopolist is wonderful, because you can charge incredible margins that you could otherwise never charge. If we look, for example, at the income statement of a company like Nvidia, more than half of every dollar in revenue that they make goes straight into profits. Yeah, that's extreme. Okay, what do we know about how the top AI players right now are competing with each other? What do we know about the different strategies of OpenAI, Google DeepMind, Anthropic, and so on? Yeah, I should say I don't know too much detail about the cost structure of these companies, but it does seem that the competition is really quite fierce. It seems all the top players are in a race to gobble up market share.
01:21:43
Speaker
I assume they are probably charging just at their variable cost, meaning they take the training cost on the chin and then just charge the user however much compute it costs to run inference, without really earning much of a margin. And as a result, the market is very competitive right now. But the problem is the following: we have seen this a number of times in the past. We have seen this with, let's say, operating systems, where players at first sell their systems at a very low cost, and then once they have locked in the market, prices suddenly go up. And so one of the risks in the market for,
01:22:34
Speaker
let's call it, large language models is that people become somewhat locked into a player, and then that player would suddenly have much greater market power and the ability to raise prices significantly. Now the question is, and that's again a technological question, how easy would it be to lock in users, versus how easy would it be for users to just switch to another provider if, let's say, the current provider raises prices too much? And I guess at some level, it's an open question. I think
01:23:14
Speaker
people always underestimate how slow consumers are to move, especially when they have to get used to a specific interface, get used to how to write their applications around a specific API, and so on and so forth. But I think there will still definitely be some competition. Now, I'd say one other thing that's really interesting about the competition we are seeing right now: it's not so much about price, but about who has a slightly better model. And as a user, I both relate very much to that, because I always switch to whatever is the best model at any given moment, and I also see that right now users are deriving a lot of benefit from this type of competition.
01:24:08
Speaker
Yeah, but you worry that once users are locked in, companies will begin charging them higher and higher prices, and it'll be difficult for users to switch at that point. Yeah, that would essentially be the worry. There's something to that, because one issue here might be that if you have a bunch of your personal data in, say, Google services, and only Google's AI has access to that data, it might be very difficult to switch to another AI, because then you lose all of this context that
01:24:39
Speaker
could perhaps make an AI very useful in your life as a personal assistant, say. To what extent is this a kind of self-solving problem? Because if Google begins charging me exorbitant prices, at some point it'll become too much, and I will switch, right? So at some point, competitors will be ready to take my business from Google.

Regulatory Measures for AI Market Concentration

01:25:01
Speaker
To what extent will this problem solve itself? And to what extent do we need regulation to solve it? I think ultimately it's an open question. I think you're absolutely right that the force of competition is there to some extent. But let's say you have used, for example, Gmail for the past decade or longer. Which is true, by the way. You have used Google Calendar for the past decade or longer.
01:25:27
Speaker
Moving all that to a different platform would entail a significant cost. So Google would essentially have the market power to raise their prices up to the point where you find: oh, now it's really worth it for me to switch to Microsoft Office, and I trust that Microsoft won't do the same thing two years down the line.
01:25:52
Speaker
What regulation might mitigate this problem of increasing concentration in the AI market? The first thing is we need to appreciate that in some ways these systems have a tendency towards natural monopoly. They are so expensive that it is not actually desirable for us as a society to train 100 competing foundation models at a cost of a billion dollars each, because that would be a needless duplication of effort.
01:26:28
Speaker
So given this kind of cost structure with very high fixed costs, it is actually socially desirable to have a relatively small number of players. Having said that, we usually still want some competition, for all the reasons that we have just gone through. So that means in the near term, it's probably most desirable to have a small number of players, but more than one, that compete with each other without duplicating efforts too much.
01:27:09
Speaker
And so one angle where I think regulators and competition authorities should pay close attention is this issue of vertical integration. As we discussed in our example of the Google AI system, if you are vertically integrated, it becomes much, much harder for consumers or businesses to switch. And that means integrated players have more lock-in, more pricing power, and more ability to extract monopoly rents if they choose to do so in the future. And competition authorities can lean against that in two ways. First of all, by looking very carefully
01:27:55
Speaker
at takeovers and vertical investments. We don't want, let's say, a small number of tech companies to gobble up all the startups that are producing related generative AI services. And secondly, they can also prescribe that big tech companies need to support a certain amount of open standards. Take the example of an AI assistant that can read your calendar and give you advice, or write emails for you because it also integrates with your email. If startups have the ability to integrate with your calendar and your email in the same way that big corporations do with their proprietary systems, then this danger of vertical integration and its anti-competitive effects would be much mitigated.
01:28:54
Speaker
Okay, as the last question, how does market concentration relate to power concentration? Here I'm thinking: if a player in the AI space becomes enormously profitable and takes a large fraction of the market, will they also be able to influence the regulation of their industry to an uncomfortable degree? Will they be able to engage in regulatory capture, for example? Yeah, I think that's a very difficult question. There's a first, somewhat more straightforward answer, which you have hinted at: we have experienced throughout our history that whenever there are very large corporations, they can essentially bend the rules a little bit in their favor by engaging in lots of lobbying, in regulatory capture, and so on and so forth.
01:29:51
Speaker
Now, I will say that in the case of AI, the risk of power concentration is probably different and much more significant than with any other technology in the past. Because our intelligence is ultimately what made humanity the most successful species on the planet. And if we develop AI systems that are more intelligent than us humans, then it is reasonable to expect that the power they embody will also be greater than the power of humanity. And this is why AI alignment, the quest to make AI act in human interests, is such an important question. So I think
01:30:48
Speaker
once we come close to or get to AGI, the question of power concentration with AI will take on a new dimension, a dimension that we have never seen before when it comes to power concentration among companies. And maybe at that stage, I as an economist should also observe what the best utopian answer to such a situation would be. And that would probably be to have an AI that is created not by competing companies, but through an all-in kind of moonshot effort that is advanced
01:31:37
Speaker
by either the leading nation state in this area or, ideally, a group of nation states, in a similar way to, for example, CERN: to create AI systems that we all collectively invest in, that we will hopefully be able to align with human interests, and that operate in a way that we can coexist with, such that we will all benefit and can experience a level of flourishing that is perhaps unimaginable looking at the current world. Well said. Anton, thanks for talking with me. Thank you for having me, Gus.