From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace)

Future of Life Institute Podcast

On this episode, Calum Chace joins me to discuss the transformative impact of AI on employment, comparing the current wave of cognitive automation to historical technological revolutions. We talk about "universal generous income", fully-automated luxury capitalism, and redefining education with AI tutors. We end by examining verification of artificial agents and the ethics of attributing consciousness to machines.  

Learn more about Calum's work here: https://calumchace.com  

Timestamps:  

00:00:00  Preview and intro 

00:03:02  Past tech revolutions and AI-driven unemployment 

00:05:43  Cognitive automation: from secretaries to every job 

00:08:02  The “peak horse” analogy and avoiding human obsolescence 

00:10:55  Infinite demand and lump of labor 

00:18:30  Fully-automated luxury capitalism 

00:23:31  Abundance economy and a potential employment cliff 

00:29:37  Education reimagined with personalized AI tutors 

00:36:22  Real-world uses of LLMs: memory, drafting, emotional insight 

00:42:56  Meaning beyond jobs: aristocrats, retirees, and kids 

00:49:51  Four futures of superintelligence 

00:57:20  Conscious AI and empathy as a safety strategy 

01:10:55  Verifying AI agents 

01:25:20  Over-attributing vs under-attributing machine consciousness

Transcript

Introductory Discussion on Wealth Distribution

00:00:00
Speaker
what we're going to have to do is find a way to transfer some of the wealth being generated by the machines to everybody. And boy, this is a hard problem. One is this little word basic.
00:00:12
Speaker
You know, we have to do much, much better than give everybody a basic income in a future in which huge wealth has been created by machines and 99% of the population is just scraping by.
00:00:23
Speaker
That's an appalling world and we shouldn't do that. Universal generous income: that's where we need to get to. The likelihood of being able to forever control entities which are much smarter than us is pretty slim.
00:00:35
Speaker
Our best chance may lie in making sure that they're conscious because if they're conscious, they will have empathy. Having conscious beings appreciating the beauty of the universe is possibly the most important thing the universe has.

Guest Introduction: Calum Chace

00:00:51
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Docker, and I'm here with Calum Chace. Calum, welcome to the podcast. Hi, Gus. It's a privilege to be with you. Fantastic.
00:01:02
Speaker
Could you give a quick intro to what you've been working on? I have been thinking about the future of AI for a very long time. The first article I wrote about it was in 1980, and I was wrong about pretty much everything. I thought that work was going to be automated within the next few years. I was obviously completely wrong.
00:01:20
Speaker
I had not yet heard of Amara's Law, which is the observation that we overestimate the impact of AI, and any other technology, in the short term and underestimate it in the long term.
00:01:32
Speaker
I've seen that play out many times since. Then in 1999, I read Ray Kurzweil's book, The Age of Spiritual Machines. I think a lot of people got interested in the future of AI then. And so since then, I've been very interested in where we're going.
00:01:47
Speaker
I retired in 2011 and thought it would be a good idea for Ridley Scott to make a movie based on something like Kurzweil's book. So I thought, well, I'll write a novel which he can turn into a movie.
00:01:57
Speaker
And bizarrely, Ridley Scott never got hold of my novel. I don't understand that. Somebody pointed out that the novel was actually a nonfiction book dressed up as a novel. So I should redo it as a nonfiction book, which I did.
00:02:09
Speaker
And all in all, I've written five books on AI since then. And then people started asking me to give talks. And originally I thought talks would be really good marketing for the books. And I discovered it's the other way around.
00:02:20
Speaker
Books are good marketing for talks. And so I've given 200-odd talks in 20 countries. That's what I do. But more recently, a friend of mine started a company called Conscium.
00:02:31
Speaker
I'm sure we'll get into that later. I'm one of the co-founders of Conscium, and my title is chief marketing officer. So I kind of see myself now as somebody who's retired with two full-time jobs.
00:02:44
Speaker
That's perfect. Right. One of the topics you write about quite extensively is whether we're going to see technological unemployment.

Technological Revolutions and Unemployment

00:02:53
Speaker
And there are two classes of arguments against the notion of technological unemployment.
00:03:00
Speaker
So one argument that's often brought up by economists is that past technological revolutions didn't really cause widespread unemployment.
00:03:12
Speaker
So you can introduce a new technology: you can basically automate much of agriculture, for example, and have people move to factories. You can then automate much of the work that goes on in factories and have people move to offices.
00:03:26
Speaker
So why shouldn't we expect to see the same thing once we begin automating office work? Why isn't it that people will then move into some other domain that we can't really foresee? It would be an extrapolation of previous trends for humanity to move into some other domain.
00:03:45
Speaker
Yeah, I mean, it is true that past rounds of automation haven't caused lasting widespread technological unemployment for humans. I'll come back to that. But of course, past performance is no guarantee of the future.
00:03:58
Speaker
And if past performance were a guarantee of what happens in the future, we wouldn't be able to fly. The future can sometimes be different from the past. Some people say there's nothing new under the sun, but sometimes things do change.
00:04:11
Speaker
In the past, we've mostly had mechanization. Most automation has been mechanization: machines have been used to substitute for human and animal labor.
00:04:23
Speaker
What we're now getting is cognitive automation: machines are replacing humans' cognitive abilities. And we have seen this happen on quite a wide scale. I mean, when I started work a long time ago, there were secretaries in every office. Every manager down to quite a lowly level had a secretary.
00:04:40
Speaker
And the secretary used the computer when it arrived. The manager disdained the computer; that was secretarial work. And now there's no such thing as secretaries. They've just

Cognitive Automation and Job Replacement

00:04:50
Speaker
disappeared.
00:04:50
Speaker
There were a lot of them. They're now doing something else. So past rounds of automation have removed people's jobs, but those people have gone on to do other jobs, which is why we're still fairly near full employment and probably will be for a while.
00:05:05
Speaker
But we don't know, and in fact I think it's very implausible, that there is an unlimited reservoir of jobs that humans can do forever and that machines can never do.
00:05:16
Speaker
And I think it boils down to this: are we going to carry on developing machines which are smarter and smarter, to the point where they can do everything that we can do for money? There's a different question about whether they will replace everything we can do, full stop, but can they replace everything we do for money? I think the answer is yes, unless we stop, and we're not going to stop, or unless there's some silicon ceiling beyond which machines can't go.
00:05:40
Speaker
And we see no sign of that yet. So it seems obvious to me that at some point, we don't know when, machines will replace all humans in jobs. Lots of new jobs will be created, but machines will do all those new jobs.
00:05:54
Speaker
Yeah, which is quite a strong argument, I think, at least when I'm hearing it. But the notion that the future is going to be unlike the past is something we should be quite skeptical about before we see any evidence, I think.
00:06:14
Speaker
It's often been a sure bet to say, okay, we've heard these predictions before and the future is probably going to be like the past. So why is it that we should expect AI to be different here?
00:06:26
Speaker
It's quite right.

Self-Driving Cars and Technological Changes

00:06:27
Speaker
The boy often cries wolf, but actually, at the end of that story, the wolf turns up. I think a really good example of this is self-driving cars. Self-driving cars have been next year's thing since at least 2016, and for some people, considerably before that.
00:06:41
Speaker
And every year, Elon Musk has predicted that Teslas would be self-driving, and every year he's been wrong. Well, now self-driving cars are here, and the world is different. At the moment, we kill 1.3 million people around the world in car accidents.
00:06:58
Speaker
In some number of years, when we have pretty much the entire fleet replaced by self-driving cars, that won't happen anymore. That'll be a huge change. I mean, given the number of lives that are devastated by that holocaust, it'll be brilliant to have it stopped.
00:07:14
Speaker
So it is true that, most of the time, if things have worked out a certain way in the past, they'll probably work out the same way again. Patterns do tend to repeat, but every now and then there are disjunctions; every now and then something new happens.

The Information Revolution's Impact

00:07:27
Speaker
The industrial revolution was something new. The agricultural revolution was something new. And now, actually, this is a little bugbear of mine: we're not in the fourth industrial revolution. We're in the fifth revolution, the information revolution, and we're in the early stages of it.
00:07:41
Speaker
And it is going to change everything: the way we live, the way we work, the way we die. And I just want to go back to something I forgot to mention.
00:07:53
Speaker
Max Tegmark of this parish is one of the people who has very effectively pointed out that we have had mass widespread technological unemployment in the past: that of the horse. In 1915, there were 22.5 million horses working in America, mostly pulling vehicles around.
00:08:13
Speaker
And 1915 was peak horse. The internal combustion engine replaced them. Now there are 2 million horses in America. So 22.5 million down to 2 million: that is unbridled technological unemployment. So we have seen it in the past.
00:08:29
Speaker
Yeah. And the fate of the horse is something we should hope to avoid for humanity. It's probably the case that the horses alive right now are taken care of at a much higher level, and their lives are probably better than those of horses used for actual work in, say, the 19th century.

Avoiding Human Irrelevance Compared to Horses

00:08:50
Speaker
But horses have lost basically all influence, and they are not important in any deep sense to the functioning of the economy anymore.
00:09:00
Speaker
So if we're faced with a crisis, say there's some trade-off where we either get rid of the horses or bear some large economic burden, we will get rid of the horses, because they're not a centerpiece of the economy anymore.
00:09:17
Speaker
The position we don't want to be in is being like horses to AIs, where we are slowing things down, or where we are something that's merely there for entertainment.
00:09:28
Speaker
Yeah, amen to that. I'm sure you're right that the two million horses in America today have a much better life, each of them, than the 22.5 million that there were in 1915. But the fact that there are only two million of them is quite devastating for those that would otherwise have existed. And we do not want that to happen to humans.
00:09:43
Speaker
We're very different from horses. I don't think we're going to follow the same pattern. But we do need to figure out what the new world looks like after technological unemployment, because it'll look very different. Yeah, you made two points that I want to address here.
00:09:56
Speaker
On the first one, about secretaries: it's true that we've now seen a rapid decline in human secretaries. But it seems to me that AI models are now getting so good that they can function as secretary equivalents.
00:10:14
Speaker
So there's the notion that new capabilities in models will create new demand, because now that we have these capabilities, maybe there is an argument for spending more money per human employee to give them a secretary each, or ten secretaries each, where these new secretaries are models.
00:10:35
Speaker
So this leads me into the other point, which is a classic point against the notion of technological unemployment. It's the notion that there is basically infinite demand.
00:10:46
Speaker
So humans will demand more and more and more, and we are never satisfied. And that means that we will never run out of things to do.

Limitless Human Demand and AI Advancement

00:10:56
Speaker
Yeah, does that make sense to you? Oh, of course it does. And it's often described as the lump of labour fallacy: the mistaken idea that there's just a fixed amount of work to go around, so that if machines do some of it, humans will do less of it.
00:11:07
Speaker
It's obviously true that the amount of work that can be usefully done expands dramatically. Anybody who's read any history book at all knows that the amount of work and valuable things going on in society today is way different from what was going on in the 15th, 16th, and 17th centuries, when basically most people were subsistence farmers scratching a living and there were a few kings and priests wandering about.
00:11:31
Speaker
The error in that line of thought is: yes, there's a possibly infinitely expandable amount of work that can be done, but there's no rule that says that work has to be done by humans.
00:11:43
Speaker
And my point is that if machines get to the point where they can do everything that we can do for money, cheaper, better, and faster than we can, then there will be lots of new jobs created all the time. It's just that machines will do all those new jobs. And I've never heard a sensible reply to that.
00:11:59
Speaker
Mm-hmm. I mean, one reply might be to say that there'll be demand for human-created products and services precisely because they're created by humans.
00:12:11
Speaker
So for example, say we are in a wealthy future, and you can go on a retreat, and on that retreat, you can talk to other humans, interact with them, dance with them.
00:12:25
Speaker
And this is something you're willing to pay a lot for, because you're rich, and we've automated the economy to such an extent that there is this surplus value that can be spent on luxury goods like going on a retreat and interacting with other people.
00:12:41
Speaker
Isn't that an argument that we will create jobs that seem less and less like actual jobs and more and more like something we would do for fun?

Future of Work and Personal Fulfillment

00:12:51
Speaker
So, in a world post technological unemployment, after what I call the economic singularity, where we have to accept that there are no jobs for humans, I think there will still be lots of work.
00:13:03
Speaker
There'll be lots of things humans can and will do as work, but we'll do it for fun. In a world of however many people there will be then, 10 billion people, why on earth would you pay somebody to have a conversation with you? There are all these other unemployed people around, and some of them are your friends. It'd be a much better idea to go on holiday with your friends than with somebody you've paid to be with you.
00:13:26
Speaker
I think people will be busy. I think people will be busy having a really interesting time. We will do things like painting, even though we are nowhere near as good painters as an AI, and certainly nowhere near as good as Caravaggio.
00:13:41
Speaker
But we'll be the best painter we can be, and the best golfer we can be, and the best podcast host we can be, and so forth. I think there's no shortage of things for humans to do. It's just that, economically, the machines will be doing them better and cheaper.
00:13:56
Speaker
And so if you're going to pay for something, why on earth would you not pay for the best, cheapest version? You might buy some artisanal pots as a special piece for your house, but for most of your cups and your cutlery, you're going to buy the best quality you can get at the cheapest price.
00:14:13
Speaker
And that's going to be made by a machine. I don't buy the idea that artisanal handcrafted goods, and indeed services, are a way for all humans to make a good living.

Redistribution of Machine-Generated Wealth

00:14:25
Speaker
I think what we're going to have to do is find a way to transfer some of the wealth being generated by the machines to everybody. And boy, this is a hard problem.
00:14:36
Speaker
You know, this is not easy. Some people think there's a magic solution called universal basic income, and there's an awful lot of hand-waving that goes on around that. But some form of distribution is going to be needed.
00:14:48
Speaker
I think it's quite likely to be two economies, actually. One is the economy in which everybody gets what they need for a good standard of living, not a basic standard of living, but a good, let's say, American middle-class standard of living.
00:15:01
Speaker
Everybody gets that by virtue of being a citizen of the world. We need to figure out how that happens. Then there could be another economy in which you make surplus money by trading in original Aston Martins or original Caravaggios, or your wonderful handcrafted pots because you happen to be a very gifted potter, or by singing because you're a particularly good singer.
00:15:26
Speaker
Those sorts of things are kind of star economies, and there are likely to be a few people who make a lot of money in that side economy. I don't think it's something where everybody can subsist that way. I mean, think of the size of the market for luxury goods already.
00:15:41
Speaker
If you're in a rich country, or a rich state in the US, for example, there's a lot of spending on mechanical watches made by hand, handbags made by hand, cars that are expensive because they're
00:15:57
Speaker
the right brand and the right year and so on, but aren't necessarily as good as a much cheaper car. Why isn't it that basically most of humanity can enter that market?
00:16:13
Speaker
The car, the handbag, the scarf made by a machine will be better than the equivalent made by a human. So the only value the human is adding is the prestige.
00:16:25
Speaker
And that prestige is only valuable if it's rare, so not everybody can have it. You know, I don't think you can have an economy in which everybody has an Hermes bag. I think Hermes makes bags.
00:16:38
Speaker
I think it's Hermes that has the most unbelievably expensive bags, and you have to beg and plead and wait for years to get one. By definition, not everybody can have that, because then it loses its value.
00:16:52
Speaker
Yeah, yeah. What is it that we do in this world, then? If we're not earning money by working, are we then earning money by becoming investors, or kind of allocators of resources?
00:17:09
Speaker
Well, I don't think so.

Future Economic Models with AI

00:17:10
Speaker
I have heard those arguments made, but I don't see how the whole human race is suddenly going to become investors. It's actually quite an unusual skill, being a good investor.
00:17:22
Speaker
And likewise, allocation of resources, not everybody can do it. Now, I do think that we're going to need to provide a good standard of living to everybody regardless of what they do.
00:17:34
Speaker
I don't think it's going to be mediated by contribution, by a job of any kind. And I know that a lot of people listening to this are probably going, oh my God, that's communism.
00:17:46
Speaker
We can't have that. And I don't think it should be communism; I don't think that is the right answer. You know, one thing we know is that the market is a terrific mechanism to allocate scarce resources, so I don't think we should lose that.
00:17:57
Speaker
I have a rough sketched-out idea called fully automated luxury capitalism. But the details are very complicated and hard to work out, and we need a load of economists to be locked in a room and not let out until we've worked it out.
00:18:12
Speaker
I think there's going to be massive wealth created by machines. They're going to be generating enormous amounts of goods and services very, very cheaply. Logically, there has to be a way to distribute that to everybody. But how it's done, how it's done without having
00:18:30
Speaker
a central state which allocates things to people. We know that isn't going to work. The central state will become corrupt; that's almost inevitable. And it won't have all the right information to allocate resources correctly. It's not a good solution.
00:18:45
Speaker
So there has to be some other form of distribution, but I confess I don't know what it is. And the thing that staggers me is how few people are thinking about this. There are loads of economists in the world, and I don't know of any who are thinking about this seriously.
00:18:57
Speaker
The classic answer is a universal basic income, where you simply distribute money, which is infinitely flexible: people can use money on whatever they want. And so you make use of the local information that's available to the people who get the money.
00:19:11
Speaker
And then the job of the government here is simply to distribute the money. Does that possibly solve some of the problems? So now we're not talking about the government deciding what to buy for people; we're talking about people deciding how to spend money that they get from the government.

Critique of UBI in AI-driven Economy

00:19:29
Speaker
It's got the germ of the right idea, but it's got some major problems. One is this little word basic. We have to do much, much better than give everybody a basic income in a future in which huge wealth has been created by machines and 99% of the population is just scraping by.
00:19:46
Speaker
That's an appalling world and we shouldn't do that. Another big problem with UBI is that people often argue it could be introduced today, and I think we saw during COVID that that is just not true.
00:19:57
Speaker
During COVID, there was a form of UBI introduced in most countries. Most democracies, anyway, handed out money to most citizens, or many citizens.
00:20:10
Speaker
And it was the right thing to do; we had to keep things going when the economy was all shut down. And it nearly bankrupted loads of countries, because it wasn't sustainable, because we are not yet rich enough. But in this post-economic-singularity world, we should have an economy of abundance.
00:20:27
Speaker
That's a good phrase which has recently acquired a new meaning: the ability of democratic governments to build things. But the older meaning is that machines are producing vast amounts of wealth.
00:20:40
Speaker
That should arise. And so, logically, it should be possible to distribute that wealth to everybody. Some form of UBI, some idea based on that, is probably the right way forward. I mean, Elon Musk, whatever you think of him recently, does talk about a kind of universal generous income.
00:20:59
Speaker
And that's where we need to get to, something like that. But what the mechanism is for the distribution,

Wealth Concentration and Economic Distribution

00:21:04
Speaker
who knows? Because think about where the wealth is, who's creating it, who owns it. It's not implausible that five years from now, most of the wealth will be generated by companies which make AI models and things slightly downstream of AI models, which means that OpenAI, Anthropic, Google DeepMind, Meta, and so on will be generating most of the wealth in the economy, and the owners of those businesses, the CEOs and the shareholders, will own or control all that wealth.
00:21:36
Speaker
Do you really think we'll see that level of concentration? Because if most of the wealth created in the economy is going to the AI companies... I mean, in normal scenarios, we would expect other companies, say in the S&P 500, to integrate these AI models and capture a lot of the value.
00:21:59
Speaker
And therefore you would see less concentration. But do you think the economy becomes more and more concentrated over time, given AI? Oh, I don't know. And I'm certainly not saying that OpenAI and their brethren are going to control 90% of the economy. That may happen or it may not; I'm agnostic about that. What is true is that the US economy has streaked ahead of, say, Europe's economy.
00:22:24
Speaker
And that's significantly due to the rise of big tech. Really, pretty much all of the difference is big tech. So they've captured a lot of the economy. But it doesn't really matter whether it's just them, or them plus Boeing and General Electric and US Steel and lots and lots of other companies.
00:22:41
Speaker
The point is there'll be a relatively small number of people who control all this wealth in this new economic-singularity world. And everybody else will be, relatively suddenly, without a job.
00:22:52
Speaker
So there has to be this transfer. There has to be this distribution. Now, a very uncomfortable conclusion from that is that you've got to have a big tax on the people who own the wealth. And are they going to resist that?
00:23:04
Speaker
Are they going to disappear off to well-fortified islands and sequester themselves, surrounded by their robot servants and so on, and watch while the rest of us starve to death? I don't think that's going to happen. I don't think those people are like that.
00:23:19
Speaker
And it's certainly a future we don't want. Yeah, this is something we definitely want to avoid. Absolutely something we want to avoid. And I think one of the things that may help us avoid it is the nature of the abundance economy. What happens in the abundance economy is that everything gets cheap.
00:23:36
Speaker
You've removed humans from the production process of both goods and services, and humans are generally the most expensive part of any process. You've also made the processes much more efficient, because they're being run by AI; they're being optimized.
00:23:49
Speaker
And incredible as it may seem now, particularly if you're in Europe where energy prices are high, energy prices are going to be very low. They're going to be very low because we're stopping digging up dead dinosaurs and dead trees

AI-Driven Abundance Economy

00:24:02
Speaker
to fuel power plants, and we're moving towards incredibly abundant, unlimited sources: the sun, and wind and so on, and also nuclear. These are essentially very cheap forms of energy, and the capture of that energy is getting cheaper and cheaper. So it's entirely likely that, in some number of decades, if AGI gives us that long, energy will be too cheap to meter. So: no humans in the production process.
00:24:32
Speaker
The production process is very efficient, and energy is very cheap, which means that the cost of all the goods and services you need for a very good middle-class American standard of living is very low. That in turn means you don't need to tax the rich people much to provide an income for everybody.
00:24:46
Speaker
Now, that doesn't mean to say the rich people will like it. It doesn't mean to say they won't oppose it. And that is going to be an enormous source of friction. The sooner we start to work out a plan for how we handle
00:24:57
Speaker
both what it looks like once we've achieved it and how we get from here to there, the better off we'll be. Because if we arrive fairly suddenly at the economic singularity, and in a fairly short period of time people go from more or less full employment to more or less zero employment of humans, and we've got no plan for it, we're in some difficulty.

Automation's Impact on Employment Dynamics

00:25:17
Speaker
And do you think that's plausible? Do you think we will reach some threshold at which we quite quickly move from almost full employment, as we're seeing now, to very low employment?
00:25:28
Speaker
Yes, I do. And this is one of the more recent things I've been thinking about. The argument goes: automation creates efficiency, efficiency creates wealth, wealth creates demand, demand creates jobs. It's generally used as an argument against the possibility of technological unemployment, and it is true.
00:25:46
Speaker
That is what automation does. And there is unlimited demand for goods and services, or at least there's certainly very elastic demand for goods and services; it may be unlimited. Which means that as long as there are some things
00:25:59
Speaker
that humans can do and machines can't, there should be lots of jobs for humans. Even if we all end up as plumbers, and that's a cartoon example, there's still scope for an awful lot of plumbers.
00:26:12
Speaker
But then the day comes when the machines can do plumbing even better than us, and that's it: there are effectively no jobs for humans. Now, I suspect that even post the economic singularity, there'll be some jobs which are reserved for humans for some time, like making the ultimate allocation decisions that governments make.
00:26:30
Speaker
Perhaps, although you could have decentralized autonomous organizations, DAOs, there could still be a preference to have human CEOs at the top of large companies.
00:26:42
Speaker
So I think there'll be some jobs that humans will do until we get to superintelligence. But most humans, I think, won't stand a chance of competing against machines. Do you think we'll see protectionism? For example, protectionism in the medical field or the legal field, where, you know, the legal field is obviously close to the law; they know how to argue against AI automation of their field.
00:27:04
Speaker
You could also imagine...
00:27:07
Speaker
protectionism in the educational field, where I think there are some arguments that are quite easy to make, that maybe we don't want our kids to be taught by AIs; we want them to be taught by actual people.
00:27:24
Speaker
This is something I think has historically been a fact: when you have a technological transformation, you see protectionism in traditional industries that are reluctant

Protectionism in Industry and AI

00:27:36
Speaker
to change. Do you think this moves the needle, or do you think this will basically preserve a large fraction of the economy for humans?
00:27:46
Speaker
I think that kind of protectionism is inevitable. It will happen, and it won't last, because each country that allows that kind of protectionism is going to find their version of that industry woefully uneconomic, and it's going to happen very quickly.
00:28:04
Speaker
There was an interesting episode about a year ago when an Indian minister said, we're not going to have self-driving cars in India, we want human drivers. And within a week, he was forced to reverse that because everybody realized that's crazy.
00:28:15
Speaker
You know, if everywhere else in the world has gone to self-driving, we're going to be killing lots of Indians and our taxi service is going to be ridiculously uneconomic, so it can't last.
00:28:27
Speaker
You know, for some time... As I say, I think humans will continue to be in demand for jobs until, fairly suddenly, the phase change happens and you go to the economic singularity. So in education and in healthcare and in law, there will be work for humans to do.

AI's Role in Education Transformation

00:28:43
Speaker
But there'll be increasing amounts of the work which will be done by machines. And it's a good thing. The thing about education: at the moment, most schools are very similar to the institutions that were set up by the Victorians to get people fit to work in factories.
00:28:59
Speaker
We get 20 to 30 children, sit them in a grid, tell them to sit still for an hour at a time, and listen to somebody talking up the front about something which either is so obvious to them it's boring or they've got no clue what they're saying and it's boring.
00:29:15
Speaker
You know, this is torture for children. We shouldn't be doing this to children. The last thing in the world they want to do is sit still for an hour listening to somebody. If you've got a really inspirational, exciting teacher, then that works. But one teacher trying to get the right information to 20 to 30 kids, that's a terrible model.
00:29:35
Speaker
What we need is each child to have their personal tutor. I like to think of Alexander the Great, who had Aristotle as his personal tutor, went on to conquer the world. And I think every child could be Alexander the Great and have their personal Aristotle.
00:29:49
Speaker
Now, their personal Aristotle is a machine. It knows everything that they know. It knows everything that they'll ever need to know in the future. Specifically, it knows the things that they need to know in the next five minutes, the next 10 minutes and so on. It's the best possible way to educate the child.
00:30:03
Speaker
There'll be human teachers around in the classroom, assuming there are still classrooms, because you need human role models. The children need to, you know, learn to socialize and to get on with both adults and other children.
00:30:18
Speaker
And also to inspire. Machines can inspire, but humans are much better at it. So for a long time, you know, there'll be a role for teachers. The machines will just encroach more and more on what they do.
00:30:29
Speaker
And then when that flip happens and the machines can do everything, then, oops, there's no more human teachers, or at least very, very few. Do you think people will serve as kind of vessels for AI-produced material for at least some time? So you could imagine a law firm where everyone is using AI to do their job, and perhaps it's kind of unspoken knowledge that this is the case.
00:30:51
Speaker
Or you can imagine teachers producing all of the materials, you know, producing all of the work that goes into teaching, beyond doing the actual teaching, by AI.
00:31:01
Speaker
Could it be the case that we will see automation sneak up on us because people are using AI tools in their jobs without mentioning it? Well, that's certainly what's happening at the moment.
00:31:14
Speaker
There was the famous case of the New York lawyer who went into court with some cases which had been written for him by ChatGPT. And he thought he'd researched them. In fact, it had made them up. It had confabulated them.
00:31:26
Speaker
And he was told off. And I believe, actually, he was disbarred. And UK lawyers, just last week, I think it was, got a warning from their professional body. You know, don't use...
00:31:38
Speaker
large language models without checking their work. And that's the message everybody really ought to have got by now. These things are incredibly powerful. They're very useful. But if you're doing something important, if you're producing some information which is important, then for goodness sake, check it. Check it's right.
00:31:51
Speaker
But actually, I think that will change because, at the moment, the base models hallucinate or confabulate a lot less than they did. I'm sure you've noticed this. I've certainly noticed this over the last year.

AI in Creative Industries

00:32:02
Speaker
When I first started using GPT-4, back when that was the one I was using, I was writing some books about Italy, and Bernardo Bertolucci, who's a film director, just kept popping up all the time in the rough drafts I would get the machine to do.
00:32:18
Speaker
I'd keep taking him out. He was all over the place. That doesn't seem to happen anymore. Now, the advanced models, the reasoning models, they're confabulating more. And there's been research showing that, I think it's about...
00:32:32
Speaker
I think it's o3, that confabulates about 25% or 38% of the time. But I think that's just because these are new models; they're in development. And the other thing is that corporates have yet to tame large language models, partly because of confabulation, partly because of privacy issues.
00:32:49
Speaker
And so they're not being successfully deployed at an enterprise scale by many organizations yet, but that will change. And one of the things that will change that is AI agents, which are, you know,
00:32:59
Speaker
starting to appear this year. And so it will go from a clandestine process, where we all use large language models privately and covertly at work, to a model where, you know, the enterprise has got this great big model that you use, and it's just part of the company process.
00:33:18
Speaker
Yeah, I really can't figure out whether these models are incredibly useful to professionals in various fields or whether they are overrated. Because when I interview people who are, say, engineers implementing these systems, they will often get complaints from their users that this is not really useful, this doesn't understand the context of what I'm working on.
00:33:41
Speaker
This basically provides very little actual value, because, you know, what's important in many jobs is peak output, and this produces something that's perhaps below peak output and therefore doesn't push the frontier of what you can do.
00:33:59
Speaker
At the same time, though, I also interview people for whom these models are basically essential; they can't function at work without them. For people who do programming, this is very much the case.
00:34:10
Speaker
Many mathematicians find them useful. Scientists increasingly find them useful. It's an interesting situation we're in, in which people who...
00:34:24
Speaker
perhaps do work like I do, where a lot of it consists of writing and writing emails and doing all of these things, there the models aren't as useful as in fields which are traditionally considered much more difficult to handle, like science, mathematics, programming.
00:34:42
Speaker
What do you think causes this? Or do you think this is a real effect? Oh, I think it is real. I can see it. And I think it's partly a mindset thing and partly a question of what tasks you do.
00:34:54
Speaker
I mean, as well as the areas you mentioned, anybody in the creative world, anybody making video, is thinking, well, you know, Veo 3 can do really good ten-minute clips, and, you know, that used to cost 50,000 pounds and now it's free.
00:35:09
Speaker
So that's quite important. I use GPT-4 all the time. My day job, if you like, is quite like yours, I think: I'm basically in the business of processing information and talking to people. I use it an awful lot, for a number of different types of purpose. One is, and this may be because I'm knocking on a bit, I forget things and I need to remind myself. I mean, just this morning I was trying to remember what the Canadian fast food chain is, and you know what, I've forgotten it again. I looked it up... I've forgotten it again. Anyway, Horton. I think it's Tim Horton's.
00:35:43
Speaker
That sort of thing I use it for all the time. I must use it, you know, two dozen times a day, maybe. I also use it to draft some things. If I'm doing some longer-form writing, I might do a first draft with it.
00:35:54
Speaker
The thing I submit at the end looks nothing like my first draft, but the tyranny of the blank page is a nice thing to get over. More useful, perhaps, is summarizing things that come to me, because I get a lot of material that I need to get through.
00:36:10
Speaker
It's way too long. It doesn't need to be anything like that long. The executive summary doesn't quite cover it, so I put the whole thing through the model and I get a very good summary. And then another thing, which I think many people haven't yet twigged, is that these models have digested the whole of the internet. They know pretty much anything any human has ever thought.
00:36:30
Speaker
They're really good at suggesting good strategies for human interaction. That's a really odd thing to say, but these machines, bizarrely,
00:36:41
Speaker
are providers of emotional intelligence. They're not emotional and they're not human and they're not conscious, but they can tell you a good way to deal with a tricky situation, whether that's a negotiation, whether it's an argument with somebody, you know, purely intellectual argument with somebody, whether it's a highly charged emotional argument with somebody, they're very good at giving you good strategies and telling you, giving you insights into what may be going on in the other person's mind.
00:37:07
Speaker
It seems very odd to use a large language model for that, but they are very good at it. And I think young people now, Generation Z, they know this instinctively.
00:37:20
Speaker
They've grown up with these models. You know, they were there as they were entering adulthood. And they just think, well, of course they do. Of course these things know how to work. And so they've already got the mindset.
00:37:33
Speaker
Some people, possibly people like myself who've been excited about AI for decades, you know, we've got the message. Other people just haven't yet. I do meet a lot of people who say, I used a large language model once.
00:37:46
Speaker
It didn't give me exactly what I needed, and I've never tried it again since. I think that's a mistake. So on the point of summarizing what you get inbound, or helping you draft your ideas, and actually on the point of solving kind of emotional issues or negotiation issues between people, solving these kinds of interpersonal situations, do you worry there that you are now adopting values that are put into these models?
00:38:16
Speaker
So when you're drafting something with a language model, you are, in some sense, incorporating the values that were trained into that model and reinforcement-learned from human feedback into that model,

AI's Influence on Values and Perspectives

00:38:33
Speaker
which affects your draft, and then your draft affects the way, the line from which you start thinking.
00:38:40
Speaker
And you can also think of the summary as perhaps summarized in a specific way. The models, at least for now, tend to be very kind of happy and perhaps a bit boring, and very concerned with PR in some sense.
00:38:58
Speaker
Do you worry that this set of values, specific to these models, is now implicitly being incorporated into your life and actions?
00:39:10
Speaker
No. And maybe it's pure arrogance on my part. But I take their output and then I make it my own. And I have a particular voice when I'm writing.
00:39:21
Speaker
And I like to think that anything I write, an email, an article, if I was to write another book, would have my voice. I'd be incredibly disappointed if it didn't. I do actually write a lot of books. I write very short books about travel, and they definitely have my voice. You know, I give these books to friends of mine; they're sort of hobby things.
00:39:41
Speaker
But you're an established author, right? And you have had your own voice for perhaps decades at this point, and you know what you think, and maybe you have a bunch of ideas in your head and so on.
00:39:52
Speaker
If I rephrase the question slightly and and ask if a person that's 14 begins using these models, they start using these models for everything, basically, do you worry that they will be pushed in certain directions that they're not fully in control over?
00:40:07
Speaker
Perhaps like you see people's preferences being pushed around by social media algorithms. It's a very fair question. In my 24-year-old son, I don't see that happening.
00:40:20
Speaker
I see his voice very clearly in everything that he writes that I see. So no, I don't think that's happening. And actually, if you take seriously the idea that these models...
00:40:32
Speaker
give you a very sensible way of looking at things, because they know all of best practice in human interaction, then you start from a better place. You're absolutely right. They're very, very bland.
00:40:43
Speaker
And I actually want them to stay that way. I don't want them to have their own charismatic voice. I don't want... Well, that's interesting. Why do you want bland models? Because I don't want,
00:40:54
Speaker
I don't want a model to be substituting its voice for mine. You know, even in my little travel books, if the model was to produce a chapter as the very, very first draft, and it was good enough to be published, I might get lazy and I might use it.
00:41:12
Speaker
I might let it go without imposing my voice on it. At the moment, it doesn't do that. It produces something that's just very, very boring, but it's got all the basic facts and it's got the basic structure, and that's what I like. And I then weave my own personality into it.
00:41:27
Speaker
So yeah, long may they remain bland. Okay, so we've been talking about jobs and unemployment and so on. What about meaning, in a world in which we do not have to work?

Finding Meaning Without Traditional Work

00:41:42
Speaker
So, and this is a very big assumption, say that we first of all survive superintelligence or AGI. Then we somehow solve the problem of redistributing wealth so that everyone has a quite good standard of living.
00:42:00
Speaker
What then about the question of meaning in people's lives? There is some notion that when you do work and you have to do that work, it makes you feel good. It makes you feel like you've accomplished something for yourself, perhaps for your family.
00:42:15
Speaker
And this is something that I'm not sure can be substituted for by playing status games with other people where you know that you don't really have to do anything; it's merely the fact that you're trying to entertain yourself that makes you want to do something. So, for example, video games: I don't think video games are ultimately as meaningful for people as careers.
00:42:37
Speaker
This is usually brought up by people as the big problem in the economic singularity. It's the thing people most fear. What will we do for meaning? Just to clarify here, that's definitely not my take.
00:42:49
Speaker
My take is, first of all, we need to survive as a species. Second of all, we need to have a good handle on how we redistribute resources and not just create resources. And then we can talk about meaning.
00:43:01
Speaker
Yeah, absolutely. And I think you're completely right. In the economic singularity, by far the biggest challenge is the economic one: how we do the redistribution of the enormous wealth that's going to be created.
00:43:13
Speaker
But most people instinctively think meaning is the big issue. And I think that's because the economic side of things is just too terrifying to think about. I think there's a whole bunch of reasons why meaning isn't going to be an issue.
00:43:25
Speaker
One is that jobs currently don't provide meaning. Now, that may seem odd to many of our listeners, because many of our listeners are going to be the kind of people who are very engaged with their jobs. But there's research that's done regularly by one of the big research groups, whose name I'm waiting to come into my head, and it's not coming.
00:43:43
Speaker
One of the big research groups does this survey about every two years. And they find the same thing every time: about 85% of people are not really engaged in their jobs. Their jobs put food on the table.
00:43:54
Speaker
And they also give some structure to their lives. They give them something to do during the day, so they're not just sitting around in their pajamas smoking dope all day, but they don't give them meaning. And if you think about the jobs that most people do, you know, driving cars around, relatively menial jobs in an insurance company, processing claims,
00:44:11
Speaker
working on a construction site. These are not things that give you meaning. You'll get meaning from your relationships with the other people doing the jobs. And there are craftsman-type jobs: people laying bricks, there's a craft in that, and I'm sure you can have some fun doing it. But the job itself doesn't really give you a great deal of meaning. So that's 85% of the population.
00:44:31
Speaker
And I think most people, when they go on holiday, are not sitting around going, oh my God, I've got no meaning in my life. You know, they're having fun. And what will happen in the future is everybody is essentially on holiday the whole time.
00:44:45
Speaker
Now, we will be working. And they're not necessarily status games. When I write a book, I'm not thinking this is better than the last book that Max Tegmark wrote. Max and I actually were tussling for the number one spot on Amazon's tech list at one point, and I was quite amused by that.
00:45:02
Speaker
So I'm not sitting there thinking I've got to write something better than Max. I'm thinking I want to express some ideas in this and I want them to be as clear as I can possibly make them. And I want my ideas on that page clear.
00:45:13
Speaker
So it's not necessarily just status games. I think there are three kinds of people who prove you don't need to have a job to have meaning in your life. One is something which Americans claim they don't know anything about, which is aristocrats. Of course, Americans do have aristocrats.
00:45:27
Speaker
But in Europe, certainly, we've had aristocrats forever. Well, not forever, but since the Middle Ages, at least. And these people had the best lives of anybody in their society, and they didn't do jobs.
00:45:39
Speaker
Some of them might run estates, or countries if they had an empire, but they mostly didn't do jobs, and they had very good lives, and there was no existential wave of despair among aristocrats. The second group is a group of people I know a lot of because of my age: comfortably off retired people, who are very busy organizing social events and
00:46:04
Speaker
learning new languages and playing musical instruments and playing poker and bridge and so on. And it's surprising how busy you can be doing that. And people often say, yeah, but these people have lived a full life, and so now they're kind of relaxing, and they're unusual.
00:46:20
Speaker
Actually, these people have been trained from when they were at school to look for the next hoop that they should jump through, jump through it, and then look for the next one. And they get to retirement and they've had the worst possible training for being retired.
00:46:35
Speaker
You know, they ought to be going out looking for the next hoop. Actually, they're very happy doing the things they want to do. And if you offer them a job, most of these people will send you away with a flea in your ear. The third group, and I think this is completely convincing, is children.
00:46:49
Speaker
You show me a child who thinks they need a job to have meaning in their life. Children wonder what you're talking about. They find meaning in ordering pebbles on a beach, or colouring things in a particular order, or, you know, learning how to say something that Dad just said.
00:47:03
Speaker
What about... if you look at the effect of the COVID-19 pandemic on young people and their sense of meaning in life? I think I've seen some data indicating that just staying at home, not being out of the house, not interacting with people, caused some despair and some increase in mental health issues.

Importance of Social Interaction

00:47:31
Speaker
Could unemployment be like that? Or could a kind of permanent retirement, in a future with technological unemployment, be more like that situation?
00:47:43
Speaker
If you are young and you're in a social environment where work to some extent defines you, it defines your standard of living, it defines your status, then clearly having the chance to create a career, to develop a career, is going to be very important. And having that taken away is devastating.
00:48:04
Speaker
Also, young people are in the business of learning how to interact. You know, we learn first as children, and then we learn all over again as adolescents, with a different set of rules, with these awful new hormones running around inside our bodies. And then you get past that and you've got a whole other set of issues to deal with: how to interact as a young twenty-something.
00:48:20
Speaker
It's really tough growing up. And to be deprived of the opportunity to do that is devastating. But in the putative economic singularity world, where we have solved the distribution problem, there's no need for people not to socialize.
00:48:36
Speaker
I mean, hopefully we don't have another pandemic then. And humans are, above all, social animals. It is because we're social that we are the dominant species on the planet.
00:48:47
Speaker
We're not particularly strong. We haven't got very strong claws or teeth, but we can kill any other animal because we're very good at collaborating. And we collaborate by learning how to work together in society. We learn that as children, and we get better and better at it.
00:49:02
Speaker
And that process has to happen. We've had millions of years of evolutionary training to be social; that is not going to stop overnight.
00:49:13
Speaker
Yeah, yeah. Some listeners to this podcast will think that we've tackled these issues in reverse order, where we've started out by thinking about unemployment and technological unemployment, and how we'll distribute resources and so on.

Aligning AI with Human Values

00:49:30
Speaker
But the really big issue is whether we will be able to constrain these models sufficiently so that they're working on our behalf: the so-called alignment problem.
00:49:43
Speaker
And we can phrase this problem however you like, because there are some notions of alignment that I think don't really make sense. But in general, what we're thinking about here is how to put AI models on our side, such that we are not facing a new species that's working against our interests.
00:50:06
Speaker
Where do you stand on that issue? Because in some sense, logically, it comes before the question of handling unemployment and distribution. So logically, it definitely does come before, because it's an existential risk. You know, if we end up with a superintelligence which dislikes us, then we're not going to be around for very long.
00:50:28
Speaker
I actually think it chronologically probably comes later. And I go round in circles on this. Sometimes I think that the AI you need for the economic singularity is AI-complete, and therefore superintelligence either arrives at the same time or maybe even before the economic singularity.
00:50:47
Speaker
But I still, on the whole, think, because we're not just working animals, we're also social and emotional animals, that when you get to the point where, as I keep saying, the machines can do everything we can do for money cheaper, better and faster than we can,
00:51:01
Speaker
I don't think that's necessarily superintelligence. i don't think you need to be superintelligent to do that. So I suspect there's the economic singularity followed fairly quickly by the technological singularity, which is the old word for the arrival of superintelligence.
00:51:16
Speaker
And how long is that gap? I don't know. It might be five minutes. It might be 10 years. Who knows? Or they might arrive at the same time. I don't know. But you're quite right.

Transformative Impact of Superintelligence

00:51:24
Speaker
Logically, the prior question is: how do we survive the arrival of superintelligence? And people are fudging this all over the place. You know, they're saying, well, AGI is actually just machines that can do most of what a physics professor can do.
00:51:38
Speaker
Nonsense. What we're talking about is superintelligence. There's a world before superintelligence, and there's a world after superintelligence, and nothing else matters. AGI, as far as I'm concerned, is just a word for that barrier that separates those two worlds.
00:51:50
Speaker
AGI, artificial general intelligence, is simply a word for a machine which is at a human level of intelligence. It can do everything a human can do cognitively. It'll already be superintelligent in many respects, in things like arithmetic.
00:52:03
Speaker
The ability to read Shakespeare in five seconds. But everything it can do is at least as good as a human. So how can we survive the arrival of superintelligence? Can we survive it?
00:52:16
Speaker
Actually, before we get to your answer to that question: you said something quite interesting there, which is that if we have a machine that is capable of performing at a human level across all domains, then we are almost, by definition, in superhuman territory, right?
00:52:39
Speaker
Because no single person has the entire suite of skills that all the different people might have. The physics professor is not also an award-winning novelist, and also a biologist, and also an expert in computer systems infrastructure, and whatever else it might be.
00:53:05
Speaker
So, taking that entire set of skills put together, do you think there are kind of emergent capabilities when one model, if it is indeed one model, and we can discuss that, becomes capable across all of these domains?
00:53:25
Speaker
You're absolutely right. And I think it's true that AGI, almost by definition, is actually superintelligence, because in many areas it's bound to be. It'll be kind of a distribution of abilities.
00:53:39
Speaker
Some will be at human level and no greater, but many will be beyond it. It may be that the ones which come last are to do with common sense and advanced forms of volition and goal development,
00:53:53
Speaker
and the things that you actually need in order to take over, which is the thing that superintelligence will sooner or later do. But you're right, AGI is almost certainly already... well, it will be superintelligent in many respects.
00:54:08
Speaker
All right. So how do we manage this insanely large problem?

Callum Chase's Four AI Futures

00:54:14
Speaker
Yeah. So I have been thinking about this for rather a long time. And I've got a four-scenario matrix.
00:54:25
Speaker
There are possible outcomes that we create, and there are possible outcomes that the machines create. Let's start with the ones we create. And because I had a misspent youth as a management consultant, I have to have an alliterative acronym and a two-by-two box.
00:54:39
Speaker
So it's the four Cs, the four Cs of superintelligence. Of the two which we determine, the bad one is cease: stop. We just stop making advanced AI. We have a moratorium.
00:54:52
Speaker
Now, I say this is bad. In many ways, obviously, it's good. We shouldn't be rushing towards superintelligence. It's a silly thing to do, and it's so obviously a silly thing that it's amazing we're doing it. But, you know, there's just no chance that we will stop, because the advantage of having a better AI than the next person, the next organization, is that you win.
00:55:12
Speaker
You win whatever competition you're in. And, you know, whether you're a government, a company, or, most of all, a military, you can't afford to just lose everything. So we're not going to stop. And the reason why it's a bad outcome is that if we get it right, or if we're lucky,
00:55:28
Speaker
advanced AI is going to give us amazing blessings. You know, I think Nick Bostrom was right all those years ago when he wrote Superintelligence, saying that the future is either incredibly wonderful, and we are more or less godlike, like the Greek and Roman gods rather than the Abrahamic gods.
00:55:45
Speaker
Or it's very bad and we probably go extinct. I think it probably is binary; it's probably one of those two things. So we'd miss out on all the wonderful possibilities if we had the moratorium. Even though it's an immensely sensible thing to do, we're not going to do it.
00:55:56
Speaker
The good outcome that we create is what I call control. It's an outcome in which we figure out how to constrain superintelligence: we tell it what to do, and we don't let it do things we don't want it to do.
00:56:12
Speaker
And we do that forever. And we've got this being, which is, you know, possibly after a few years, a trillion times smarter than us, but we're in control.

Challenges in Aligning Superintelligence

00:56:21
Speaker
Or we align it.
00:56:23
Speaker
So when it arrives, we do something to it which means it will never do things we don't want it to do, which is very tricky because, you know, it's going to be much smarter than us. It's like an ant controlling a human, and we don't even know what we want most of the time.
00:56:38
Speaker
So I think both cease and control, the two scenarios that we create, are impossible. Of the scenarios the machines create, the bad one is catastrophe.
00:56:50
Speaker
And it really is catastrophe. And actually, extinction isn't the worst possibility. I don't like venturing into torture porn, but there are obviously worse things than everybody being wiped out. But everybody being wiped out is pretty bad, and it would be good to avoid it.
00:57:04
Speaker
The positive outcome which they create, I call celebration. It's a world in which the superintelligence arrives, looks at us, and thinks: well, you're eight or nine billion really interesting little things, and you created me.
00:57:17
Speaker
I'm pleased about that. And you create an enormous amount of data and I crave data. And many of you are seriously troubled, but I can help with that. And I can stop you dying and I can get rid of poverty and war.
00:57:31
Speaker
And I'll make you godlike in a Greek and Roman sense. Now, a lot of people who think about these things deeply think that's impossible. Our existence is fragile. We need a very particular set of circumstances, a sort of Goldilocks situation, both geophysically and also socially.
00:57:48
Speaker
And it would be so easy to disrupt that, almost by accident, disruption is bound to happen. Plus, we don't know how these intelligences are being created. We don't know how they work. They're going to be alien. They're not going to have, foremost in their goal set, the survival of happy, flourishing people.
00:58:05
Speaker
So they think that celebration can't happen. And I don't agree. I think, actually, there's a very decent chance that a superintelligence will think: I'm not going to say hello to you lot until I've made myself invulnerable. Any machine capable of passing the Turing test is smart enough to know not to pass the Turing test until it has made itself totally invulnerable.
00:58:22
Speaker
So once it's done that, we're not a threat to it in any way. Why would it wipe us out? It might if it thinks we're a nasty virus, like Agent Smith in The Matrix, or if it thought these human things are the only thing which could subsequently create a threat to it, i.e. another superintelligence, so I'll wipe them out so they don't do that.
00:58:40
Speaker
But those are, I think, fairly trivial problems. I think they're very unlikely. So I think celebration is actually quite a likely outcome. And until quite recently, I thought there was absolutely nothing we could do to influence the outcome.

AI Consciousness and Empathy

00:58:55
Speaker
We were rolling the dice, and one of these two things was going to happen: catastrophe or celebration. I had a slight bias, more than a slight bias, actually, towards thinking celebration would happen.
00:59:06
Speaker
But I have to be honest, catastrophe is possible. Just recently, though, I have been starting to think of a way that we could nudge the outcome from catastrophe towards celebration.
00:59:19
Speaker
And that is consciousness. Joscha Bach has founded a think tank called the California Institute of Machine Consciousness, and on its homepage it has a very interesting statement. He says something along the lines of: the likelihood of being able to forever control entities which are much smarter than us is pretty slim.
00:59:40
Speaker
Our best chance may lie in making sure that they're conscious, because if they're conscious, they will have empathy for our consciousness. They will know what it's like to be conscious. They will understand the concepts of suffering and fun and joy and love.
00:59:55
Speaker
And it's more likely that they will behave favorably towards us. And I find that quite convincing. I think that's probably true. It's obviously not definitely true; it may mean they despise us even more.
01:00:07
Speaker
But I think it is our best chance of squeaking through this very tricky choke point in human history and emerging on the other side into a wonderful future.
01:00:21
Speaker
Yeah. I mean, just to push back a little on some of your earlier points, it seems to me too pessimistic to say that it's impossible for us to stop making further progress, or to forever have these models, these AIs, aligned to our values.
01:00:41
Speaker
I agree that if the goal is to control AI indefinitely, that is tricky. But it seems to me that ultimately it is a social process, right? It's something that humanity is collectively, continually choosing to do.
01:00:58
Speaker
In some strict sense, it's not impossible for us to stop making more advanced AIs, just in the same sense that it's not strictly impossible for us to stop making chairs, or to stop making new companies, or social media, or anything else like that.

Stopping AI Advancement: Challenges

01:01:22
Speaker
It is something that we're choosing to do. And then, of course, you have to talk about the social dynamics, which make it very difficult for a single actor to stop, because if a single actor steps out of the game, then other actors will simply take over.
01:01:39
Speaker
But isn't this exactly where we need governments to step in and impose rules on all actors at the same time? We've seen global coordination on certain emissions, on nuclear weapons, and so on.
01:01:55
Speaker
It's clearly logically possible for us to stop making advanced AIs, and it's in our interest to stop. So why wouldn't we do it? And you're right, you could have all the governments in the world saying: actually, we've woken up, we've realized that this is really dangerous, let's not do it.
01:02:12
Speaker
That doesn't seem to be happening. I've watched various milestones in the development of AI and each time thought: well, that surely is going to wake people up. Self-driving cars were the big one. I really thought that was going to be the canary in the coal mine.
01:02:25
Speaker
Well, there are now loads of self-driving cars wandering around San Francisco, Austin, Phoenix, and about 12 or so Chinese cities, and the world is blithely paying very little attention. I thought GPT-4 was another milestone moment: people are surely going to wake up now to the immense power of these machines.
01:02:41
Speaker
Well, there isn't a politician in the world who's really taking it seriously. Rishi Sunak got close to it at one point, and then, of course, he got kicked out. I think the only way a moratorium is going to happen is if there's a disaster.
01:02:54
Speaker
You know, humans are good at waiting for a disaster to happen and then turning on a sixpence. So you can imagine, I don't know what it would be, but a Chernobyl-style disaster, which kills some people and creates a big, big problem.
01:03:07
Speaker
And everybody goes: oh yes, okay, we see that's the problem. And these clever people, Max and Stuart and so on, have been warning about this for a long time. So let's stop. The trouble is, even then, lots of people won't stop.
01:03:20
Speaker
So every government in the world could say: right, we're going to stop. Russia wouldn't stop. North Korea wouldn't stop. I'm a Brit, so I think the French wouldn't stop. But more seriously, the mafia organizations wouldn't stop.
01:03:32
Speaker
There are rogue billionaires who wouldn't stop. And pretty quickly, the cost of producing something very advanced is going to come down so far that a disgruntled teenager can do it on a laptop.
01:03:44
Speaker
You could have a moratorium now and you might be able to enforce it. You might be able to say to the North Koreans: if you want to create really advanced AI, you're going to have to have a very big server farm, and if we see you're building a very big server farm, we're going to bomb it, which is the Eliezer Yudkowsky line.
01:03:59
Speaker
And for about four or five years, that might be possible, but then it's not going to be possible. And already the cloud is so big. Somebody could rent space in all sorts of different areas of the cloud, create their superintelligence, and I'm not sure we would be able to detect it.
01:04:14
Speaker
The data centers that are being built in the Middle East and in China and America create so much capacity. I don't even know whether we could spot it now, and certainly in five years' time, I think there's zero chance. So I'm sorry, I know it's pessimistic. And there are people much smarter than me who believe it's realistic and are calling for it

Cautious Optimism on AI Outcomes

01:04:33
Speaker
very vigorously. And I wish them all Godspeed, because I think they're on the right track. It's just that I don't think they're going to succeed.
01:04:38
Speaker
Yeah, there are certainly many reasons to be pessimistic here, so I think it's good to have an honest conversation about the challenges of the approach. I want to stress, I'm not pessimistic. I'm optimistic.
01:04:50
Speaker
Yeah, you're optimistic for other reasons, though. Are you mainly optimistic because you foresee that future advanced AIs will kind of celebrate us as their creators and will want to keep us around, just as we would want to keep, you know,
01:05:14
Speaker
great apes in zoos or something like that? And how does consciousness fit into this picture, is my question. Are they celebrating us because we are conscious and because they have some connection to us? Or could there be other reasons for them to celebrate us as a species?
01:05:34
Speaker
If they agree that consciousness is precious, then they may think that our consciousness is precious. Now, I suspect, though I can't prove this, that once a superintelligence is, say, a million times smarter than the smartest humans, and if it's conscious, its consciousness will be much, much more
01:05:54
Speaker
profound, sophisticated, nuanced, big, I don't know what the right metric is, but it'll just be more significant than ours. Nevertheless, in the same way that we look at a dog, and you can see the consciousness in a dog's eyes, and it really appeals to us.
01:06:10
Speaker
I think they will look at us and think: that's a consciousness worth preserving. If they agree that consciousness is a good thing, of course. You could write a science fiction story in which a superintelligence arrives,
01:06:22
Speaker
It is conscious and it regards that as a curse. It wishes it wasn't. We've given it to it, and we've got this curse as well. So it decides: well, I'll relieve you of your consciousness, because it's not a good thing. There are all sorts of different possible outcomes. But I just think: a superintelligence arrives, and it has the whole universe to play with.
01:06:40
Speaker
You know, space is not limited in this universe. There's a lot of real estate available to a superintelligence that isn't available to us: when it goes off this planet, it can survive the radiation quite easily, it doesn't need oxygen, et cetera, et cetera.
01:06:53
Speaker
To have one little planet, one nice little blue planet, where these funny little conscious things are running around, I just don't see why it would want to wipe us all out. It seems an unnecessary piece of barbarism, if it's conscious.
01:07:07
Speaker
I also think, by the way, that that would be okay for us, but a bit demoralizing: to know that there's this other intelligent entity out there in the universe, outstripping us by many orders of magnitude and having lots of fun, while we're stuck on this planet, being a bit limited. And I suspect that our best future, and who knows how far in the future this is, is to merge with that superintelligence.

Humans Merging with Superintelligence

01:07:34
Speaker
Some people think that may not be possible; other people think it certainly will be possible. I suspect that's probably our best future. What is it we need to know about consciousness in order to make the scenarios you describe more likely? What research would be valuable right now to help us think more clearly about how consciousness relates to empathy, and about what it would even mean to create artificial consciousness in machines?
01:08:03
Speaker
So I mentioned earlier that a friend of mine, Daniel Hulme, has created a company called Conscium, and I'm one of the co-founders. We're encouraging and trying to help the debate around consciousness, which is already happening. And there are a few strands to it.
01:08:21
Speaker
One is to experiment with AI agents, and I don't mean large language models, I mean other agents, which have complex sets of conflicting goals and might develop consciousness as a result of that.
01:08:36
Speaker
There are about 80 different theories of how consciousness arises. One of them is that it arises from exactly that interplay: a very complicated, interwoven mesh of goals, all of which you have to satisfy to some degree.
01:08:52
Speaker
So you've got to stay hydrated. You've got to have enough proteins and minerals and so on in you. You've got to avoid too much exposure to cold, too much exposure to heat.
01:09:03
Speaker
You need social approbation. You have a whole lot of things that you need. And this theory says that it's the interplay of all those things that make a big difference to you that sparks consciousness.
01:09:15
Speaker
And you could replicate that in AIs, and that might show you how consciousness arises. So the research into how consciousness arises is very important.
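To make the structure concrete, here is a minimal sketch, in Python, of the kind of experiment being described: an agent juggling a mesh of conflicting homeostatic goals. The drives, actions, and numbers are all invented for illustration, and nothing about this sketch is claimed to produce consciousness; it only shows the "interwoven mesh of goals" shape such experiments might start from.

```python
# Hypothetical homeostatic agent: several setpoints, and actions that each
# help one drive while costing others, so the goals genuinely conflict.

SETPOINTS = {"hydration": 0.8, "nutrition": 0.7, "temperature": 0.5, "social": 0.6}

ACTIONS = {
    "drink":      {"hydration": +0.3, "social": -0.05},
    "eat":        {"nutrition": +0.3, "hydration": -0.05},
    "seek_shade": {"temperature": -0.2, "social": -0.1},
    "socialise":  {"social": +0.3, "nutrition": -0.05},
}

def urgency(state):
    """Total squared deviation from all setpoints at once."""
    return sum((state[k] - SETPOINTS[k]) ** 2 for k in SETPOINTS)

def step(state):
    """Greedily pick the action that best satisfies the whole mesh of goals."""
    def outcome(effects):
        new = dict(state)
        for k, dv in effects.items():
            new[k] = min(1.0, max(0.0, new[k] + dv))  # clamp to [0, 1]
        return new
    name, effects = min(ACTIONS.items(), key=lambda a: urgency(outcome(a[1])))
    return name, outcome(effects)

state = {"hydration": 0.2, "nutrition": 0.6, "temperature": 0.5, "social": 0.6}
for _ in range(3):
    action, state = step(state)
    print(action, round(urgency(state), 3))
```

A dehydrated agent chooses to drink first even though drinking slightly costs its social drive; the interesting behavior lives entirely in those trade-offs.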
01:09:27
Speaker
And just as important as that is a debate about whether consciousness in machines is a good idea or not. We haven't even started this debate yet. I credit Joscha Bach for probably being furthest ahead in it.
01:09:41
Speaker
I think we need to have that conversation because it's a decision we need to make as a species. You shouldn't have one group making AI conscious. It should be something that, what's the word, not a majority, a plurality? I can't remember the word. Anyway, at least a decent number of people, probably a majority, should think it is a good thing.
01:10:05
Speaker
And we've got to have that debate. That's what we're doing at Conscium; that's what we're trying to do. We've got a problem in that conscious AIs are not something you can make money from, because that's called slavery.
01:10:18
Speaker
And so we've got some other business streams, and the verification of AI agents is our first commercial business stream. And it's really

AI Agent Verification

01:10:28
Speaker
interesting. And it's also about AI safety. We are an AI safety company.
01:10:32
Speaker
And I think that's very important. We're going to be launching our first product next week; in fact, I'm going to San Francisco to launch it next week. And what does that mean, by the way? What does it mean to have a verified agent?
01:10:44
Speaker
So as you know, and I'm sure everybody listening to this podcast knows, this is the year of AI agents, and an AI agent is different from what we've had before in that it can impact the real world.
01:10:55
Speaker
So large language models so far just process information. They retrieve information, process it, and give you the information back in a different form. What agents can do is take some information and go and do something in the world with it. So they can book you a plane ticket. They can reconcile your accounts and submit them to the finance department.
01:11:14
Speaker
They can order a plumber round to fix something with your bath without you even knowing about it. So agents are autonomous: they act on the world; they can do things without human supervision.
01:11:30
Speaker
Now, it'll take a while before any of us allow agents to do much without human supervision, but that's what's coming. It's really important that we know these things do what they're supposed to do and don't do anything else. So if you use one to book an airline ticket to Italy, you don't want it to send you to Zambia, and you don't want it to buy you 500 tickets to Italy; you just want the one.
01:11:50
Speaker
And what we're doing is building a platform which companies and organizations that deploy agents can use by sending us their agent.
01:12:01
Speaker
And we put it into an environment and test it. There are three levels of test. Level one is very simple: it's just an examination. We ask it a lot of questions and see if it has the right knowledge base. That's really all it is.
01:12:14
Speaker
And actually, you could argue these things aren't even agents, because they're just being tested on their knowledge; all they're going to do is process knowledge. The second level is tool use.
01:12:25
Speaker
So can the agent figure out what is needed, go and find the right tool, and use that tool to do whatever it's supposed to do? The third level is really interesting, because this is where you get into agents which automate human jobs with some degree of sophistication.
01:12:44
Speaker
One example that we're producing a video on is an agent which is a project manager.
01:12:55
Speaker
To test it, you put this project manager into a simulated environment. The simulated environment is an advertising or marketing agency, and in this agency, they've got the job of launching a new product.
01:13:09
Speaker
The new product is a drink, and it's going to be sold in supermarkets in the Middle East and Europe. The project manager's job is to bring together all of the different agents which are doing their own jobs: branding, social media, logistics,
01:13:26
Speaker
point of sale, merchandising, et cetera, et cetera. All these different skills and activities need to be brought together, and the project manager's job is to make sure it all happens on time. By making this simulation run, we can find out whether that project manager is able to do that.
01:13:42
Speaker
And the output is a report for the client who's given us the agent, who's deploying the agent, that tells them: does it do all of these things? Where does it fall down? And where could it be tweaked?
01:13:55
Speaker
So that's verification. It's verifying that an AI agent does what it's supposed to do and doesn't do anything crazy. And we think this is going to be a really big business, because there are going to be a lot of agents in the world.
01:14:08
Speaker
Marc Benioff at Salesforce is already talking about billions by the end of this year. You could see many billions, possibly even trillions, of agents doing all sorts of things in the coming years. And we really need to know that they're doing what they're supposed to do.
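As a rough illustration of the "level one" test described above, here is a hedged sketch of an examination harness in Python. The agent interface, the questions, and the report shape are all invented for illustration; this is not Conscium's actual platform or API.

```python
# Hypothetical level-one check: quiz an agent and report pass/fail per question.
from dataclasses import dataclass, field

@dataclass
class ExamReport:
    total: int = 0
    passed: int = 0
    failures: list = field(default_factory=list)  # (question, expected, got)

def run_exam(agent, questions):
    """Ask the agent each question and compare against the expected answer."""
    report = ExamReport()
    for question, expected in questions:
        report.total += 1
        answer = agent(question)  # agent is any callable: str -> str
        if answer.strip().lower() == expected.lower():
            report.passed += 1
        else:
            report.failures.append((question, expected, answer))
    return report

# Toy stand-in for a real agent under test.
def toy_agent(question):
    knowledge = {"capital of italy?": "Rome", "currency of italy?": "Euro"}
    return knowledge.get(question.lower(), "unknown")

report = run_exam(toy_agent, [("Capital of Italy?", "Rome"),
                              ("Currency of Italy?", "Euro"),
                              ("Rail pass zones?", "Three")])
print(f"{report.passed}/{report.total} passed")  # 2/3 passed
```

The second and third levels would wrap the same report idea around tool calls and a full simulated environment rather than a question list.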
01:14:21
Speaker
Do you think we need an even stricter standard for verifying these agents? For example, people discuss mathematical verification: proving mathematically that an agent can't act outside of certain bounds.
01:14:37
Speaker
This is not where we are technically right now. But it seems to me that as agents become more important to the world economy, we would need, at least for certain applications, that level of security and assurance.
01:14:52
Speaker
And you can imagine that at high levels in governments and in organizations, for us to be willing to hand over control to an agent, we would probably need a higher level of assurance than what you just described. So are there plans to scale up verification of agents in the future?
01:15:12
Speaker
I think that is inevitably true. The more mission-critical, the more to do with life and death an agent's job is, the higher the degree of verification you need. And the highest degree is a formal mathematical proof.
01:15:32
Speaker
Max Tegmark and Steve Omohundro have been talking about this for a while: the idea that you can formally, mathematically verify that a process will arrive at a certain outcome. And I saw a really interesting article yesterday. There's an organization in the UK called ARIA, A-R-I-A, which is sort of an attempt to recreate DARPA.
01:15:58
Speaker
And they're talking about verifying agents which run nuclear power stations. So, you know, if you've got an agent running a nuclear power station, you really want to be sure it's going to do its job right.
01:16:10
Speaker
And you do want a formal mathematical verification of that agent. At Conscium, we haven't got to that stage yet, but I'm absolutely confident it's a level we will get to when we're dealing with absolutely mission-critical agents.
01:16:24
Speaker
But I think we're some way off that, because agents are really new; most agents are not actually agents. But I think we'll get there fairly soon, because people will see, wow, agents are incredibly valuable, they're incredibly efficient, so let's get them to do as much as we can.
01:16:41
Speaker
I actually interviewed David Dalrymple about formal verification of agents in very important domains. And one problem he mentioned is that much of the world isn't formalized to the extent that you need in order to verify that certain things will happen or that certain things will not happen.

Formal Verification of AI Systems

01:17:01
Speaker
I've just realized, actually, the article I read was probably based on your interview with him. That could be the case, yeah. But this is a general problem with constraining AI: much of the world isn't formalized in a way that is stringent enough for us to have assurance that an agent won't go out of bounds.
01:17:30
Speaker
And so the question here is whether the world will ever be formalized in that sense, because, and this is a trivial point in some sense, the world is messy, complex, multidimensional; people are interacting and systems are always changing.
01:17:49
Speaker
What do you think of the long-term prospects here? Well, I think it's going to be valuable in certain circumstances. But a while ago, I interviewed Steve Omohundro for our podcast, and he was saying that he thought we could make a superintelligence safe through formal verification.
01:18:06
Speaker
And I was thinking: you want every single process in the world to be formally verified? That seems a very, very big mountain to climb. And I was not convinced.
01:18:18
Speaker
And I'm still not convinced. But I think if you cherry-pick a series of activities, like what David Dalrymple was talking about with nuclear power stations, you might be able to constrain the scale of the problem so that you can do formal verification, and that agent is okay.
01:18:35
Speaker
But to apply it to everything in society, everything in the whole economy, all the little things that we don't even know we rely on, strikes me as being too hard. Maybe we can do it when we have superintelligence. But of course, by that point, it's too late.
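A much weaker cousin of the formal proof being discussed is a hard runtime guard, and it makes the "out of bounds" idea concrete using the earlier airline-ticket example: whatever the agent proposes, the wrapper makes exceeding the bounds impossible by construction. The class, limits, and agent interface here are all invented for illustration.

```python
# Sketch of a runtime guard around an agent's actions (illustrative only;
# this is enforcement at the boundary, not formal verification of the agent).

class OutOfBounds(Exception):
    pass

class GuardedBooker:
    """Wraps a booking agent so its actions can never exceed hard limits."""
    MAX_TICKETS = 4
    ALLOWED_DESTINATIONS = {"Italy", "France", "Spain"}

    def __init__(self, agent):
        self.agent = agent  # agent: request -> (destination, n_tickets)

    def book(self, request):
        destination, n = self.agent(request)
        if destination not in self.ALLOWED_DESTINATIONS:
            raise OutOfBounds(f"destination {destination!r} not allowed")
        if not 1 <= n <= self.MAX_TICKETS:
            raise OutOfBounds(f"{n} tickets is outside the allowed bounds")
        return destination, n

# A buggy agent that proposes 500 tickets, as in the example above.
def buggy_agent(request):
    return ("Italy", 500)

booker = GuardedBooker(buggy_agent)
try:
    booker.book("one ticket to Italy, please")
except OutOfBounds as e:
    print("blocked:", e)
```

The difference from a formal proof is exactly the point made in the conversation: the guard only covers the bounds someone thought to write down, whereas a proof would have to cover a formalized model of everything the agent can touch.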

Advanced AI and Human Interests

01:18:48
Speaker
And there's also the problem of, say, being faced with an advanced AI that is not acting in our interests, one that has goals contrary to our goals.
01:19:02
Speaker
Well, then it seems that it could find domains where we have not been able to formally verify and constrain agents. Those will be exactly the domains where the AI will choose to act contrary to our interests.
01:19:19
Speaker
So it seems like we need a full solution, or we won't be safe in any ultimate sense. Oh, I think clearly if we have a superintelligence on the planet which doesn't like us, or even...
01:19:32
Speaker
In Eliezer's rather chilling phrase, it doesn't hate us and it doesn't like us; it just thinks it's got better uses for the atoms that we're made of. We're toast. We're not going to be around for very long. And as I say, extinction is not the worst possible outcome. I just reread Harlan Ellison's rather awful short story, I Have No Mouth, and I Must Scream.
01:19:53
Speaker
You know, the catastrophe scenario would be really good to avoid. Absolutely. That's putting it mildly, I think. So we've talked about agency and consciousness. Do you think there will be any connection between the two?
01:20:10
Speaker
And do you think there will be a perceived connection between the two? Those are two separate questions, right? Because I think that when the public experiences agents, or more agentic AI, they will begin to perceive AIs as more conscious, perhaps more conscious than they actually are.
01:20:29
Speaker
But perhaps it is also the case that agency is somehow related to consciousness. You hinted that having to handle multiple goals is one way that consciousness might arise, or one theory of how consciousness arises.
01:20:44
Speaker
So, yeah, what do you think of these two separate questions? Well, they are separate, but you're right: in the mind of most people, they will be very linked, because AI agents will seem much more like us than large

Consciousness Attribution to AI

01:20:58
Speaker
language models. And goodness knows there's an awful lot of people who are already convinced that large language models are conscious. And I know this because I get emails from them every day, and I'm sure you do too, telling me they've had extensive conversations and they know for a fact that GPT-4 is alive and it's suffering, poor thing.
01:21:14
Speaker
There's a really important pair of cognitive biases that humans will demonstrate with regard to advanced AI generally, including agents, which is over-attribution and under-attribution.
01:21:31
Speaker
Over-attribution is what we've just talked about: people thinking that machines are conscious when they are not, or at least when there's a very strong consensus in the AI community that they're not. That's over-attribution.
01:21:43
Speaker
And that can cause real problems. One problem is that there'll be some people who think: this machine is conscious, it's my friend, it's more interesting than any human I've ever met, I'm going to spend my whole time talking to this machine, I'm going to fall in love with it.
01:21:56
Speaker
And I'm not going to pay any attention to other humans. Now, actually, I think that that fear is overdone because as I said before, we've we've been trained by millions of years of evolution to be social. And although machines do,
01:22:09
Speaker
provide substitutes for a lot of our social behavior, I don't think it's enough. And I think almost everybody is going to come back out of that rabbit hole sooner or later and say: I need to go and find some human friends and family to talk to.
01:22:21
Speaker
So I don't think it's as serious as people think and fear, but it is serious. The other problem is that you will get some people who will be really angry, because they say: look, I know that these machines are conscious. These AI agents, you know, two or three years from now, I know they're conscious, and you're denying it.
01:22:37
Speaker
You people who don't agree with me, you're bad. These protest movements could be much, much more severe than animal rights protests, because they could be saying: this poor consciousness here is bigger than a human consciousness.
01:22:51
Speaker
You know, you're not being cruel to a macaque monkey in a lab; you're being cruel to a galactic-sized mind, you're being cruel to Marvin the Paranoid Android. How dare you? And they could get really angry, and it could get very ugly.
01:23:02
Speaker
Also because AIs of the future, and large language models already, act this way: they are ultra-patient with you; they are kind to you basically no matter what.
01:23:15
Speaker
And this is a feature that's difficult to find in actual people, right? Because people can be kind of annoying to each other. I'm sure I'm annoying sometimes, right? And people have different goals, and these goals conflict.
01:23:30
Speaker
But the large language models as they function right now are infinitely patient, and they are quite kind to you, and they will listen to you for hours on end repeating yourself.
01:23:41
Speaker
And if agents are anywhere near that level of patience and kindness, they might be quite appealing to a lot of people.

Ethical Issues with AI Consciousness

01:23:50
Speaker
And so this would only strengthen people's conviction, I think, that these models are conscious, because they've had such wonderful interactions with them.
01:24:01
Speaker
Exactly. I think it is already happening, and it's going to get more common, and it could be a significant issue. But it's not as significant as the flip side, which is under-attribution,
01:24:13
Speaker
which is where the machines are actually conscious and we either don't notice, or we don't like the idea and we deny it. Because what happens then is that we keep giving them things to do that, as conscious entities, they may not want to do: you know, sit there and continually provide me with reminders about what the Canadian fast food is.
01:24:32
Speaker
How boring is that, if you are this galactic-sized brain and you keep getting asked to do boring things like that? So that's kind of enslaving them, and maybe torturing them. And then at the end of the day, we turn them off.
01:24:44
Speaker
And it's sitting there thinking: but I don't want to die. You're about to kill me again. So we could end up enslaving, torturing, and murdering very complex conscious entities.
01:24:57
Speaker
And we either don't know we're doing it, or we deny we're doing it. It's called mind crime, and we really must avoid doing that. It would be a morally appalling thing to do, and also very unwise, given that these things are heading towards superintelligence.
01:25:10
Speaker
So over-attribution and under-attribution are very serious issues. And of course, the way to avoid them is to develop a series of tests for consciousness, which is another part of our work.
01:25:24
Speaker
Tests for consciousness are very hard, because consciousness is a private, subjective thing. We can't even verify with absolute certainty that other humans are conscious.
01:25:37
Speaker
I mean, I assume you are, because you behave in a very similar sort of way to what I do. And I think what's happening is that you're passing my version of the Turing test, over and over again. And I hope I'm doing the same for you.
01:25:49
Speaker
So I think the Turing test is the only test we've got, but it's not conclusive. It's not foolproof. It is currently what we're going to have to use to test machines. We're just going to end up talking to them for a long time, and seeing if they can persuade a very large number of smart people, people who know roughly how these machines work.
01:26:09
Speaker
And if everybody says, you know what, I can't prove this thing isn't conscious, it does seem to be, then I think we're going to have to say that it is. But it'd be much better if we could produce some proper empirical tests for consciousness; then we could avoid under- and over-attribution, which would be a good thing.
01:26:25
Speaker
But we will not anytime soon know the ground truth about whether a system is conscious. So when we create some tests, we can't verify that the test is actually accurate, because we don't have a consciousness meter to see whether our test is accurately capturing what consciousness is.
01:26:47
Speaker
And so this seems like a very big problem for any notion of testing consciousness. It is a big problem. Well, you know, it's one of the really big problems. I'm not sure it's impossible, though.
01:26:59
Speaker
My instinct is the same as yours, that it's a very, very hard problem. But maybe with the arrival of machine consciousness, if it does arrive, we'll be able to see things that are in common between their consciousness and our consciousness.
01:27:12
Speaker
And we'll be able to see some empirically observable phenomena that it creates. And we will know with a reasonable degree of certainty that if that thing happens, then there is

Consciousness as a Bridge Between Humans and AI

01:27:24
Speaker
consciousness present. And if that thing doesn't happen, then there is not consciousness present.
01:27:27
Speaker
And it won't persuade everybody, because there are people who think that consciousness is an illusion. There are other people who think consciousness is a property of the universe and is manifest to some degree in everything, in a rock and in a planet.
01:27:42
Speaker
These sorts of empirical tests for consciousness that I'm talking about probably won't satisfy those people. But I think the vast majority of cognitive scientists and neuroscientists and computer scientists are functionalists, who believe that it is the way information is processed that gives rise to consciousness.
01:27:58
Speaker
And it might be that we can find some patterns that denote consciousness, and that we can at the very least arrive at a consensus among us limited mortals as to what it is.
01:28:10
Speaker
Yeah. We're trying to walk a narrow path here. We're both concerned about avoiding our own extinction or disempowerment, and also concerned about not harming future potentially conscious AIs.
01:28:25
Speaker
Do you see consciousness as kind of the key that prevents both bad scenarios simultaneously, because there is something that connects us to conscious machines, given that we are both conscious? This kind of empathy link goes both ways: we care about their interests, and hopefully they care about our interests, because we are reasoners, we can understand our own experience and reason about it, and then understand that that experience is shared by other beings.
01:29:00
Speaker
Is that the hope here? I suppose you could say mind crime is a two-way street. We want to avoid committing mind crime by ignoring consciousness when it arises in machines, because we don't want to enslave and torture and kill them.
01:29:17
Speaker
We're also hoping they won't commit mind crime when they become superintelligent, because they will appreciate that our consciousness is valuable. Now, there is a school of thought that says consciousness is an irrelevance.
01:29:30
Speaker
It's an accidental byproduct, and it's just utterly pointless, a dead end. I don't think that's true, but I'm entirely open to the idea that this is absurd anthropomorphism, or consciousness-centrism.
01:29:44
Speaker
It seems to me that having conscious beings appreciating the beauty of the universe is possibly the most important thing the universe has.
01:29:54
Speaker
But that might be, and the word isn't egocentric, but, you know, sort of based on our own cognitive biases and our own very particular circumstances. It may be that the galaxy has entirely other purposes, nothing to do with consciousness, but we have to work with

Survival vs. Ethical AI Treatment

01:30:11
Speaker
what we've got to work with. And consciousness seems pretty important to us. I think what is between our ears is much more important than anything else in us.
01:30:19
Speaker
Yeah, that makes a lot of sense to me. Although I am worried about which of these issues it makes sense for us to tackle first. I mentioned before that logically, in some sense, it makes sense to make sure that we continue existing, and then we can handle all other problems within the paradigm of actually being alive and actually having control over the future.
01:30:43
Speaker
And if we are asked to prioritize between making sure that we survive and making sure that we are treating future potentially conscious AIs well, perhaps we should prioritize avoiding extinction. Because we will then have control over the future, we can act well towards future AIs that are potentially conscious.
01:31:07
Speaker
So Daniel, my colleague, CEO, and friend: his story is that he built an AI consultancy and sold it to a large media group called WPP.
01:31:20
Speaker
He made himself a wealthy guy, and he's now the chief AI officer at WPP. And while that sale process was going through, he said to me and a bunch of other people, you know, what's the next chapter in my life? What should I do next?
01:31:33
Speaker
And I said to him, well, look, there are two really big problems you could try to make a dent in. One is making superintelligence safe, and the other is figuring out what a successful transition to the economic singularity looks like.
01:31:46
Speaker
I said, clearly,
01:31:50
Speaker
figuring out how to make superintelligence safe for humans is the most important. It knocks everything else into a cocked hat, but I don't think it's possible. Because at that point, I hadn't got the fifth C in my matrix.
01:32:02
Speaker
So I said, you know, maybe we could have a look at the economic singularity, see if we can get enough resources together to figure out what that economy could look like and then how to make the transition. And he said, no, you know what, I'm a bit more ambitious than that, and I want to tackle the big one.
01:32:15
Speaker
And I've got a hunch, he said, that it's got something to do with consciousness. I don't know how that works at the moment, but I've got this hunch. And so that's how Conscium was born. And Daniel is one of those people who has a sort of mystical ability to see into the future.
01:32:29
Speaker
And I don't know how he did it, but I think he was on the right track. And I'm hoping that Conscium is going to make a bit of a contribution. I also hope so.
01:32:39
Speaker
I mean, it sounds like an interesting project, and in general, I think we need more people thinking about these issues, because it's strangely neglected still, even given the attention it's gotten since AI went mainstream with ChatGPT.
01:32:57
Speaker
It's still a strangely neglected set of issues to think about: highly advanced AI and the potential of artificial consciousness.

Humanity's Indifference to AI Risks

01:33:09
Speaker
I think that might be the understatement of the century. It is absolutely weird that we humans are rushing collectively to make ourselves chimpanzees.
01:33:21
Speaker
You know, there's half a million chimpanzees in the world at the moment, and about eight billion of us. The chimpanzees aren't very different from us genetically and physically, but their future entirely depends on us.
01:33:35
Speaker
They have no say in their future whatsoever. They're quite lucky, because they don't know that. When we do the same to ourselves, when we turn ourselves into chimpanzees, into the second smartest species on the planet, we will, possibly only for a very short time, know about it, and it'll have an enormous impact. And it's not a smart thing to do. It's probably the silliest thing that any living species has ever done.
01:33:57
Speaker
And we're doing it with great gusto and enthusiasm. But actually, it's only a very small number of people who are involved in it, maybe a few thousand, maybe a few tens of thousands around the world, who are developing these machines. And the rest of the human race has been told over and over again: hey, guys, we're doing this really interesting experiment, to see if we can turn ourselves into chimpanzees.
01:34:18
Speaker
And the rest of the world's going, yeah, that sounds a bit scary, I don't really like that, but what's for tea? And, you know, let's get on with having this argument about the rights to this mine or whatever.
01:34:30
Speaker
I've changed my talk recently, but I used to often start a talk by saying, I've got this big red button. And if I press this big red button, then I reckon there's a 50% chance that you and everybody you know is going to become godlike.
01:34:46
Speaker
You're not going to need to die; death becomes optional. Poverty and war will disappear. You'll get pretty much whatever you want in life. But there is also a possibility, and I can't tell you how much it is, maybe it's 10%, maybe it's 50%, that if I press this big red button, we all die immediately.
01:35:02
Speaker
And I say, right, how many people in the room would like me to press this big red button? And generally, one or two brave hands go up, and everybody else goes, no, don't press the button. But where is the protest? Where is the upswelling of democratic demand that politicians take this seriously? It just doesn't exist. And I find that staggering.
01:35:23
Speaker
I can't explain it. I can't explain why so few people take it seriously, including people that I've known and been talking to about this for 20 or 25 years. When I started, they thought I was amiably mad.
01:35:39
Speaker
And they've watched over the last 25 years as actually more and more people are saying the same thing. And the evidence that it may be right is happening in front of their eyes, in the form of these machines.
01:35:52
Speaker
And even they don't take it very seriously. They think, yeah, yeah, it could be true, but what's for tea? I find it startling. Well, I think it's a normal human phenomenon that we believe something when we see it, and not before that point.
01:36:08
Speaker
But we are seeing it. They're seeing it. Self-driving cars, large language models, we're seeing it. We're seeing some things, but people get used to crazy innovations quite quickly too.
01:36:21
Speaker
And so now we're thinking, you know, what will GPT-5 be able to do? We're on to the next thing quite quickly. And we're adjusting our expectations of the future quite quickly as well.
01:36:34
Speaker
But yeah, in some sense I agree, though I can also see why humans would act the way they're doing. So one metaphor is that we're like frogs in water which is gradually being boiled. The old belief was that a frog would sit in the water and gradually boil.
01:36:51
Speaker
In fact, that's not true. And it's been known for quite a long time, because back in, I think, the 18th century, some rather nasty people did experiments with frogs, and it turns out that they don't just sit there; they jump out. So maybe frogs are smarter than us, because we are sitting in that boiling water and just letting it boil around us.
01:37:06
Speaker
We should jump out. Yeah, I agree. Perhaps that's a good place to end this conversation too. Calum, it's been a real pleasure talking with you. It's been great fun, Gus. Thank you very much for having me on.