
11| AI, Risk, Fairness & Responsibility — John Zerilli

S1 E11 · MULTIVERSES

AI is already changing the world. It's tempting to assume that AI will be so transformative that we'll inevitably fail to harness it correctly, succumbing to its Promethean flames.

While caution is due, it's instructive to note that in many respects AI does not create entirely new challenges but rather exacerbates or uncovers existing ones. This is one of the key themes that emerge in this discussion with John Zerilli. John is a philosopher specializing in AI, Data, and the Rule of Law at the University of Edinburgh, and he also holds positions at the Oxford Institute for Ethics in AI and the Centre for the Future of Intelligence in Cambridge.

For instance, John points out that some of the demands we make of AI with respect to fairness are simply impossible to fulfill — not due to some technological or moral failing on the part of AI, but that our demands are in mathematical conflict. No procedure, whether executed by a human or a machine, can consistently meet these requirements. We have AI research to thank for illuminating this.

In contrast, concerns over a 'responsibility gap' in AI seem to overlook the legal and social progress made over the last centuries, which has, for example, allowed us to detach culpability from individuals and assign it to corporations instead.

John also notes that some of the dangers of AI may be more commonplace than we imagine — such as the use of deep fakes to supercharge hacking, or our psychological tendency to become complacent with processes that mostly work, leading us to an unwarranted reliance on AI.

Notes:

(00:00) Intro

(3:25) Discussion starts: risk

(12:36) Robots are scary, embedded AI is anodyne

(15:00) But robots failing is cute

(16:50) Should we build errors into AI? — catch trials

(28:04) Responsibility

(29:11) There is no responsibility gap

(42:40) Should we move faster to introduce self-driving cars?

(45:22) Fairness

(1:05:00) AI as a cognitive prosthetic

(1:18:14) Will we lose ourselves among all our cognitive prosthetics?




Transcript

AI Discourse: Optimism vs. Doomsday

00:00:00
Speaker
current discourse around AI seems to oscillate between boundless techno-optimism on the one hand and forecasts of our impending doom on the other. And I must confess that, even in my own opinions, I find myself wavering sometimes between these two extremes. And I think one of the reasons for this is it's so hard to figure out
00:00:19
Speaker
the sort of societal and technological changes that AI is going to introduce. Even just looking at the current state of LLMs like Claude 2 and GPT-4, we're still figuring out all the capabilities that they possess and how we're going to implement those, how we're going to use those in our businesses and our daily lives. And they're just the thin end of the wedge: those models are going to become more capable, at least that's what we expect.

Guest Introduction: John Zerilli

00:00:50
Speaker
My guest this week is John Zerilli. He's a philosopher and assistant professor at the University of Edinburgh, where his field is AI, data and the rule of law. And he also holds positions at Oxford's Institute for Ethics in AI, and at the Centre for the Future of Intelligence at the University of Cambridge. So he's really at the centre of where cognitive science, artificial intelligence and the law all meet.

AI Ethics: Fairness and Responsibility

00:01:17
Speaker
And in our discussion, we worry about some of the risks from AI, and they may not be the sort of sexy headline risks that grab most of the attention. But we also talk about some of the ways in which AI is not really something entirely new, but it's throwing into focus problems that we already have, or perhaps it's taking them to a new scale. For instance, in terms of fairness, John argues really compellingly that
00:01:46
Speaker
We always have had issues with fairness. AI has actually enabled us to formalize those and understand those better. And with responsibility, where we think that it may be hard to apportion responsibility in a world of AI, John points out that we've developed legal structures over the last centuries that have confronted exactly these sorts of challenges.
00:02:10
Speaker
Well, if we think about the way in which AI may extend our phenotype, if you like, and grant us new powers, while at the same time causing us to lose certain skills that we currently possess, well, those sorts of things have happened in the past with the invention of writing, for instance.
00:02:27
Speaker
Well, before we crack on with the episode, I want to give a hearty recommendation for John's book, A Citizen's Guide to Artificial Intelligence, published by MIT Press back in 2021. Two years may seem like a long time ago, given all the things that have happened with AI since then. However, I think all the points that he makes are still relevant and they're very well made.
00:02:51
Speaker
I should add that he has many co-authors in this book, but there is an interesting story to the way that the authorship of the text works, which we will discuss in the episode. Without further ado, I'm James Robinson. I'm one of the founders of OpenSignal, and I'm not a robot, although there's an argument that I am partly a robot. This is Multiverses.
00:03:25
Speaker
John Zerilli, thanks for joining me. Thank you for having me.

AI's Potential and Risks in Society

00:03:30
Speaker
So I wanted to start by talking about what we should worry about most. So there are small opportunities with AI. Well, actually significant ones in terms of improving, for example, medical diagnoses and replacing a lot of really boring work. And associated with those are some threats as well, some risks.
00:03:55
Speaker
disruption to the labor markets and so forth. And there's also huge opportunities, the opportunity of perhaps solving the climate crisis, of coordinating societies much, much better, avoiding wars and so forth, but possibly associated with the technology there is a much, much greater set of risks as well, potentially existential risk. Where do you, as someone who thinks about the ethics of AI,
00:04:25
Speaker
What do you devote more time to? What do you think is worth giving time to here? I think the problem that AI poses isn't a single problem. I think there are all sorts of problems that it presents, some of which are more remote, some of which are more immediate, and the ones which are popularly increasingly described as the
00:04:54
Speaker
ones to panic about, the more remote ones, in a sense find a counterpart in what are popularly thought of as the more immediate ones. So people talk about existential risk and they tend to mean something
00:05:11
Speaker
cataclysmic, catastrophic of a kind where once we cross a threshold there's no turning back, where the enslavement of the species is considered a real possibility.
00:05:27
Speaker
In terms of immediate threats, which as I say are generally not thought of in the same sense of panic and fear, you do have existential risks already.

Immediate AI Threats

00:05:41
Speaker
So for instance, if a particular bio facility which presented a biohazard
00:05:53
Speaker
had some sort of major cybersecurity breach, which meant that the bio hazards could escape. Well, there's an existential risk right there. And that doesn't seem to require any more gains in technology, compute speed. That's just a matter of bad faith actors getting their hands on the dial, so to speak. So there's a problem with what you would call
00:06:22
Speaker
AI in the immediate term, before we even start talking about super intelligence and artificial general intelligence, which poses an existential risk. So I would worry about cyber security risks. I would also worry about the stuff that's not existential, but that poses dangers to those that are affected by the technology. So racialized minorities, sexual minorities,
00:06:52
Speaker
women, gender imbalances in the workforce, these are things which AI does have an impact on now. And it's all very well to say they're not a matter of species life and death, but they could very well be a matter of individual life or death. So there's this broad array of problems and whether or not they pose an existential risk
00:07:18
Speaker
doesn't seem to map on to how sophisticated the technology is. So you can have unsophisticated technology that also poses an existential risk. I mean, the computer system that is responsible for, let's just say if it happens, hopefully it won't, that's responsible for detonating a nuclear power plant right now in the Ukraine could be as devastating and possibly more devastating than Chernobyl. I mean, that's an existential risk
00:07:48
Speaker
And you might be able to describe the technology behind it as AI, even though it's not very sophisticated when put against future iterations of deep learning. And then you have more sophisticated technology that could lead to an artificial general intelligence, but it may pose zero existential risk whatsoever. So this is a long way of saying that
00:08:12
Speaker
There are two measurements. One is about technological sophistication, and that's one scale, and then another about risks and how many people stand to be in danger. And those two measurement scales are orthogonal to one another. That's my understanding.

AI in Critical Systems: Transparency and Risks

00:08:34
Speaker
If you think about those two dimensions, it's interesting to note that
00:08:41
Speaker
the sort of Zaporizhzhia example, as you say, we may not think of the control systems there as AI, and most likely that's not the best classification of them, but it is certainly the case that we might imagine that AI will be used more and more in the control systems of nuclear power stations, bio-facilities, as you mentioned as well. So in some ways,
00:09:11
Speaker
this is an extrapolation or an intensification, perhaps, of a problem that we already have: how do we incorporate software into the security of these kinds of hardware systems? And we might say, okay, well, the AI is clearly going to do a better job than what was previously done. But I wonder, again coming back to those two dimensions,
00:09:41
Speaker
Is there kind of a region in the middle of this where the AI technology is somewhat better in some ways than the technology that we have?
00:09:51
Speaker
But worse in other ways, maybe it's less transparent or interpretable. And we get to a kind of local minimum where bad things happen, where we trust the AI too much. We overestimate its capabilities, which wouldn't happen with, for want of a better word, more old-fashioned software systems.
00:10:20
Speaker
is that a kind of, otherwise I'm not sure I see the concern because maybe we're just making things better by introducing more sophisticated controls.
00:10:31
Speaker
So I think what you're alluding to, and correct me if I'm wrong, is something to do with the extent of our reliance on a technology becoming greater as it's able to do more things, perhaps. And interestingly, a question that's often put to those on the more panic-stricken side of the AI ethics versus AI safety debate is, but can't we just turn it off? Can't we just pull the plug?
00:10:59
Speaker
And Geoffrey Hinton gave a very short and, in its own way, interesting answer. Recently, when he was asked this question, he said, no, we won't be able to just turn it off or pull the plug, because by the time that we'll want to do that, we'll be too dependent on that system. We would have gotten to a point where we simply can't turn it off, because if we turn it off, a whole bunch of other things will turn off as well. It'll be so integrated.
00:11:27
Speaker
And it will have infiltrated our systems to the point where it just wouldn't be feasible to turn it off. So if the question is, are we worried that the technology will
00:11:44
Speaker
Actually, I'm not quite sure what the question was. Can you repeat? Yeah, I think you've rephrased it very nicely that perhaps we overestimate. Perhaps we'll end up overestimating some of the capabilities of AI. And as you say, that might lead to complacency. So we relax the oversight, I suppose, is one way of thinking about it, that we have of it. Or we entrust it with too much capability.
00:12:15
Speaker
OK, we've covered some of the downsides here, but when it surpasses human capabilities, then that complacency issue goes away, right? Would we then be justified in ceding more control to these systems? So in a domain-specific sense, AI already has
00:12:42
Speaker
If you just take the ability to process factors, the input space of a deep learning network would vastly exceed the inputs or the factors that we in our own minds can hold when making a decision. Does that mean that we are inclined to give
00:13:03
Speaker
or to cede authority to it. It does mean that we are inclined to do that. There is research showing that as a system gets more technologically sophisticated that we tend to defer to it. It's a very well-known thread of literature there.
00:13:19
Speaker
If the degree by which it exceeds human intelligence happens on a large enough scale and across multiple dimensions of intelligence, so not just the ability to hold multiple factors at any one time, but also speed and also pattern recognition abilities and all sorts of things along multiple dimensions, will we then
00:13:45
Speaker
be more prone to defer to it or would we be less likely to defer to it. Here I think the answer depends on the extent to which we see it as being a competitor. So there's research from the same broad field that found out that when humans
00:14:07
Speaker
work with a sophisticated technology, they tend to defer to it. From that same body of literature you have other findings which are quite interesting. They show that when a robot
00:14:20
Speaker
becomes very sophisticated. Humans don't like it. They feel threatened. But if it's something more like embedded in the systems that you're using, something like a virtual agent like Siri, or just embedded AI in the sense of, you know, a complicated spreadsheet, software program, when they make, sorry, when they become sophisticated, we tend to defer to them, we like them. So there's something about
00:14:48
Speaker
whether humans feel threatened or not. And when these embedded AI and virtual AI systems make errors or process things in ways that we don't expect, we generally trust them less.
00:15:04
Speaker
When they do well, when they do what they're supposed to do by our lights, then we defer to them. When robots do well, we distrust them. When robots make errors, we like it, by contrast. We get a warm and fuzzy feeling. Yeah, it's cute. So the answer to the question then will be,
00:15:25
Speaker
will depend on the mode of presentation of these systems: whether we come to think of them more as HAL from 2001: A Space Odyssey, or whether we think of them as just sort of embedded in the systems around us, if they're invisible. What's interesting is that these are not problems that are inherent to AI as such; it's something to do with our
00:15:47
Speaker
psychological response to AI. And so the complacency issue to recap is something like as AI gets better or systems get better, we tend to rely on them more and more, even if we're not
00:16:04
Speaker
necessarily justified in doing so. We just have a habit of saying, oh, you know, it knows what it's doing, I'll let it do it. But interestingly, when something reaches a very high level of expertise, if it's a robot, we feel somewhat threatened, we don't like it. And maybe we're somewhat distrustful. But then if it does make an error, it's like, okay, well, no, it is just a stupid old robot again, that's fine.
00:16:30
Speaker
But the packaging seems to be the difference. If it's got an embodied packaging, like a robot, then it's almost like we view it as either a conspecific or as a predator. Whereas when it lacks a body, then it just seems like a table, a chair, a window, just something that's part of the environment, but not an agent in the environment.
00:16:52
Speaker
I mean, what

Designing AI for Human Oversight

00:16:53
Speaker
does this say about how we should design AI in a safe way? I mean, one thing I remember reading in your book, maybe it was a throwaway line: maybe we should build errors into AI so that we kind of avoid that complacency problem. If there was some kind of spaced repetition of deliberate errors, it would kind of remind one, oh, I can't just blindly trust this. And that's a very counterintuitive thing for an engineer to do.
00:17:21
Speaker
Is there like, you know, I'm not sure how serious that suggestion was. It's taken seriously. I mean, they call them catch trials and they don't always, it doesn't always go by that name, but it is taken seriously to simulate errors to keep the operator on their feet. I believe it happens
00:17:41
Speaker
A lot in the training of pilots as well that you sort of want them to be in situations that are not just smooth sailing autopilot situations but more out of the blue situations where they have to
00:17:59
Speaker
put their thinking cap on and work through the problem. I'm told, I don't know if this is true, but I remember being on a bumpy landing once and the chap sitting next to me said, oh, that would be the one time out of ten when the pilot's forced to land, you know, without autopilot, to keep their eye in. Right. Right. Yeah. And on the other point, you know, on the packaging of AI, it seems
00:18:28
Speaker
safer if we were to package more of it up into robots. So maybe we should make AI look scary, right? Is that a good idea? Actually, that's not hard. I haven't thought about it, but that's not a half bad idea. Making it look at least like something that needs monitoring rather than something that we can just allow to blend into the background. Yeah, I think that's right. I mean, I think instead of having
00:18:56
Speaker
If ChatGPT was sort of a robot sitting at a desk typing out queries, I would do a bit of a double take. Whereas the, I don't know, it feels somewhat sanitized. Interesting thing about this, sorry to cut you off, the other interesting thing: you mentioned ChatGPT, and for all that it is quite astonishing,
00:19:20
Speaker
I mean I don't myself use it very much because I don't trust its answers. The tone of its standard modus operandi is quite stereotyped. So I'll type a question in and I'll get it delivered back with the same sort of
00:19:44
Speaker
official, um, I'm not quite sure what the term is, but it's like an official newspeak. It's pretty bland. It feels like a kind of boring high school essay where someone's been told, oh, you have to present both points of view. Yeah, that's right, and that has been programmed in, in some ways. Yeah. But you want it to say something interesting and, you know,
00:20:08
Speaker
kind of out there. Yeah, but which it simply doesn't; it doesn't opinionate. And anything that requires integrating information from beyond 2021, or, you know, from after the last time it was fed new data, it is unreliable. Any question you give it will generate an answer that's unreliable, because it's just not up to date with what's been happening. So, I mean, even if you can overcome that problem,
00:20:36
Speaker
There is still the fact that the way it interacts with you is in this very stereotyped and, like you said, boring fashion, which constantly reminds me that there's nothing behind it. The lights are off. There's no one there. Which is fine, if it were reliable. But it's not reliable. The information that I get from it is
00:20:59
Speaker
It's just not reliable. I couldn't use it. If you give it information and you want it to put it in another format, or if you've got references that you want to reconfigure into the Chicago Manual of Style format, fine. But if you're actually wanting reliable information about something, it's...
00:21:21
Speaker
I haven't thought about it, but it's probably like consulting, as we did in the old days, an encyclopedia in print form. You would get reliable information from it, but it was always only accurate to a point because it might have been published 10 years before it was put on the library shelf.
00:21:46
Speaker
But the rate of change now is so great that one year having elapsed is almost like 10 years back when I was at school. Yeah, I do find it extremely useful for certain things, and maybe I'll discuss those in a bit, but I do wonder if this kind of lack of reliability is a good thing in terms of it being introduced to us
00:22:11
Speaker
in a somewhat half-baked form. Because if it did have much more recent training data, and if it was, let's say, you know, 95% reliable, perhaps we would again fall into this trap of complacency, and everyone would have just adopted it for so many things, and in those five percent of cases where it's failing we would
00:22:30
Speaker
have problems. So there's been a lot of criticism of OpenAI for releasing this generally, but is there a case that maybe they've done us all a service by educating us, by showing us something in a form where we start to appreciate its problems and limitations?
00:22:52
Speaker
Two

Is AI's Unreliability Beneficial?

00:22:53
Speaker
things there. So the answer to the question I would think is no, they haven't done us a favour. And there's two parts to it. Firstly, the first part is I'm not sure how widespread my own response to it is. I tend to think people
00:23:14
Speaker
are still overestimating its abilities. So it might be fallible, but not fallible enough to put people on notice that it's not completely reliable. I'm not sure. I haven't seen anything to suggest that people think it is unreliable in the way that I do. The second part is, if it were actually achieving something like 95% accuracy, 95% reliability,
00:23:44
Speaker
I wouldn't have a problem with that. And if we became complacent in circumstances where it's achieving 95% reliability, I don't think that's a problem. I've made the point somewhat obliquely in the book that you referred to, but
00:24:00
Speaker
in more academic work, I've made the point that complacency isn't really an issue if the thing that we're complacent about operates so much better than a human anyway. Because then there's no concern. If we were constantly looking out for it to make a mistake, but it just happens to be so much better than what
00:24:23
Speaker
or the most proficient human expert would achieve, then that seems to be wasted energy. We could actually put our energy towards other things rather than worrying about it making a mistake. So that's the second reason why I don't think they've done us any favors by giving us this thing that has these shortcomings and limitations. I think they would do us a favor if they made it.
00:24:52
Speaker
as reliable as or much more reliable than a human, in which case the complacency issue for me would fall away. Yeah, I was probably overestimating my own reliability when I said 95%. But yeah, I think maybe that's a benchmark which, if it hit, as you say,
00:25:10
Speaker
we'd all be golden. And it's good to hear, I think, that we do forget sometimes the benefits here. And talking about holding AI to double standards in many cases where we demand more of it, for example, in terms of transparency and in explaining its own reasoning, than we demand of ourselves.
00:25:36
Speaker
But I've sidetracked myself there, because I wanted to talk a little bit, I must sort of defend a little bit,
00:25:42
Speaker
OpenAI and some of the other models there, because I'm using them a lot, but probably for very different purposes than yourself. So maybe for purposes for which they are more reliable. Yeah, so the code generation, for example, they're great on that. It's incredible. Yeah. And one thing I've been using it for recently, which might be something we can refer back to, is connecting structured data, so a database where we have
00:26:12
Speaker
Well, I won't talk about it directly, I'll use a different example, to sort of protect some possibly sensitive information. But let's imagine that you're a supermarket. I worked in a supermarket's demand forecasting division, and we would always get questions. So take Tesco: the CEO of Tesco would say, oh, in my local store in Hertfordshire, we were out of bananas on this day. Like, what the hell is going on, guys? And then, you know,
00:26:41
Speaker
scores of twenty-something maths and physics graduates would type away, trying to figure out what had gone wrong, and it was a big enterprise. So what I've been doing now is just connecting similar structured databases to LLMs, and you can say, why were we out of bananas in this store on this date? And it will go in, and it will create a SQL query, so it will
00:27:10
Speaker
write some code, translate that into code, and it will bring back a result. And it will also show you the query that it ran and so forth. And it is really interesting because it often works incredibly well. And it will summarize the results for you as well. It will say, oh, because we underestimated the weather. It was a really hot week that week, and the model that orders fresh food.
00:27:40
Speaker
We had the wrong predictions for heat there and people were, you know, having wine picnics and stuff, reaching for the gelatos rather than bananas. So, I mean, it's been like a revelation to me how well things like that could, in the flashy parlance of startups, democratize access to data.
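For readers curious what this pattern looks like in practice, here is a minimal sketch of the kind of setup James describes: an LLM translating a natural-language question into SQL over a known schema, running it, and summarising the rows. The schema, the prompts and the call_llm helper are hypothetical placeholders for illustration, not any particular vendor's API.

```python
import sqlite3

# Hypothetical schema for the supermarket example discussed above.
SCHEMA = """stock_outs(store_id TEXT, product TEXT, date TEXT,
            forecast_units INTEGER, actual_demand INTEGER)"""

def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM you use; wiring this up is left to the reader."""
    raise NotImplementedError

def ask_database(question: str, conn: sqlite3.Connection) -> str:
    # 1. Ask the model to translate the question into SQL over the known schema.
    sql = call_llm(f"Schema:\n{SCHEMA}\nWrite one SQLite query that answers: {question}")
    # 2. Show the generated query (so a human can inspect it when it goes off the wall)
    #    and run it against the database.
    print("Generated SQL:", sql)
    rows = conn.execute(sql).fetchall()
    # 3. Ask the model to summarise the result in plain language.
    return call_llm(f"Question: {question}\nRows: {rows}\nSummarise the answer briefly.")
```

The useful design choice, as the conversation notes, is that the generated query is surfaced alongside the summary, so the occasional hallucinated query can at least be checked.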
00:28:04
Speaker
But it's also been eye-opening how sometimes it just goes wrong, and the query just does something completely off the wall. There's many questions we could ask about this. One thing I've been thinking about is responsibility in this, like what does happen when the query goes wrong. So a little bit of a segue here, but maybe, yeah. Do you think that these sort of models could, you know,
00:28:33
Speaker
undermine the apportionment of responsibility? Let's say the CEO goes out and says, okay, well, we've got to fire the guy who does the demand forecasting or something. And it was on the basis of an LLM hallucinating, essentially, and getting the query wrong.
00:28:56
Speaker
Should it be me, who set up the interface, who gets fired? Or? Yeah, so on these questions, I'm a pretty traditional person, pretty traditional guy on these questions. And I guess this comes down to my first career having been in law. So I've looked at the current system of apportionment, the regime for attributing responsibility to someone in a chain.
00:29:26
Speaker
And the principles that have been developed over a couple of hundred years still seem to work here. I've not seen anything to suggest that those systems or those principles wouldn't work. So the first thing to say is that we have an adversarial system in this country and in the common law world. So if someone's going to be blamed, it always comes down to some victim
00:29:54
Speaker
somewhere making a claim.
00:29:57
Speaker
and then wanting to sue someone. So they will pick whoever is in their sights. That person then always has the option of cross-claiming against someone else who they think is more implicated in the culpability than themselves. And then of course, you have a question there whether, as between those two, the defendant or the cross-defendant,
00:30:27
Speaker
is either more responsible than the other or is responsible to the exclusion of the other. But all of that gets sorted out on established principles, where you basically, you ask yourself, say if it's a claim in negligence, you ask yourself, who had the duty of care in this circumstance? What was it that they could reasonably foresee would go wrong?
00:30:56
Speaker
and what should they have done to prevent the damage that was reasonably foreseeable? And using established principles like that, and then tests for causation, and then tests for what they call remoteness of damage, you can pretty much
00:31:12
Speaker
work out a reasonable settlement. And it just happens, naturally, in the course of litigation or the process that precedes litigation that anticipates litigation. So maybe we don't end up fighting this out in court. But because we know how this will shape up, the parties adopt a particular stance where they know that if they go to court, this is probably what will happen. Therefore, perhaps we should
00:31:34
Speaker
push harder, or perhaps we should relent a little bit and settle for less money. It all just comes out in the wash, and I haven't seen anything that suggests that that system, that those principles, won't operate in this area. What it would take to get an AI to be one of the defendants, so to speak, really just depends on
00:32:04
Speaker
Another question, which is what does it take to be a responsible agent? What does it take to have moral agency? And here again, I mean, there's disagreement, but it's not large disagreement. You've got two types of agents, roughly speaking. You have cognitive agents, and this basically includes the whole
00:32:32
Speaker
whole of the animal world. So anything that acts as an agent, even a bacterium, could be classified as an agent. And then you've got moral agents. So what does moral agency bring to cognitive agency? Well, what does cognitive agency consist in? Cognitive agency consists in something like having a feed-forward mechanism, at a very basic level, where you take in inputs, and then these inputs are worked on, and they're fed up a hierarchy, and then come out as an output at the other end.
00:33:01
Speaker
So that's part of what it takes to have cognitive agency. Another tweak to that in evolution was when evolution developed recurrence where you take the output at any particular level in that hierarchy and feed it back into that level. And that enables a system to kind of generate its own kind of memory.
00:33:23
Speaker
to retain in memory, so to speak, or in mind, long-distance relationships. This helps tremendously in navigation. So bees, for example, the neural systems of bees, who are very good at navigation, have recurrence. Another step would be to have multiple networks, multiple recurrent networks, so then you get parallelism. And you've got that in mammals, fish, reptiles.
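As a very rough illustration of the distinction being drawn here, and nothing more than that, the sketch below contrasts a single feed-forward pass with a recurrent one, where feeding the previous state back in gives the system a crude kind of memory of the input sequence. The weights and sizes are arbitrary; this is not meant as a model of any animal's nervous system.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))    # input-to-hidden weights
W_rec = rng.normal(size=(4, 4))   # hidden-to-hidden (recurrent) weights

def feed_forward(x):
    # One pass from input to output: nothing about earlier inputs survives.
    return np.tanh(W_in @ x)

def recurrent(inputs):
    # The previous state is fed back in at each step, so the final state
    # carries information about the whole sequence: a simple kind of memory.
    h = np.zeros(4)
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

sequence = [rng.normal(size=3) for _ in range(5)]
print(feed_forward(sequence[-1]))  # depends only on the last input
print(recurrent(sequence))         # depends on the whole history
```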
00:33:53
Speaker
So that all speaks to cognitive agency. What do you need above that to get a system to be a moral agent? Well, most people, most philosophers who think about this, okay, they disagree about certain details, but they all agree more or less that it requires the ability to not just sort of have
00:34:16
Speaker
reasons for acting, but to be able to revise those reasons, to change one's goals. As one philosopher, Christine Korsgaard, puts it, it's not just the ability to have reasons, but to see what reasons you have. So a lion might
00:34:36
Speaker
move a certain direction for the reason that there's prey over there on the savannah and they're going to chase the prey. But they can't see that reason. They can't sort of step out of that process and examine that as a reason, whereas humans can.
00:34:51
Speaker
And moral responsibility seems to be connected with this ability to reflect, revise, and act upon reasons, but also change the reasons we act upon. So that answers the question about when an AI can be a defendant, and potentially one of the people who will be attributed responsibility in any scenario where something goes wrong.
00:35:15
Speaker
So as I see it, to sum up, there's no responsibility gap as far as I see it. The tried and tested principles over several centuries developed in the courts will work just fine until AI develops this extra capacity. And I haven't even mentioned the role of sentience in that. So some philosophers would say even having the ability to revise one's goals isn't enough. You kind of need sentience because without sentience,
00:35:44
Speaker
You can't punish the thing. You can't blame the thing. I mean, if it's just dead matter, it's like slapping a brick. You know, what's the brick going to do? So I mentioned that, but that's what it will take.
00:36:00
Speaker
to get a system being held responsible. I think, I mean, there's certainly grounds for optimism here, certainly on the first point: as you say, the structures seem to be in place, and they're not trivial structures either. You mentioned they've evolved over the past couple of hundred years, and you reference Carl Mitcham in your book. I should mention the name of your book, which I
00:36:26
Speaker
will do at the beginning of the podcast, but to remind people, it's A Citizen's Guide to Artificial Intelligence. And as an aside, it's a very interesting way that you wrote it, because you had lots of people write different chapters, but then you sort of provided the style and rewrote things, almost like an AI. Well, I wrote about half the book, and the rest of it I farmed off to people who I thought had more expertise, and then they gave me notes and then I
00:36:55
Speaker
sort of reconstituted them so that the whole thing would read in one voice. Yeah, it's a good exercise in collective intelligence and coherence. But yeah, the reference to Carl Mitcham was just that, as technology has evolved since the Industrial Revolution, we've evolved that understanding, or those mechanisms, for assigning responsibility. And there's
00:37:20
Speaker
very simple examples of that, in terms of the invention of corporations. And actually, I think the corporation, as an aside, is an interesting example, where just the creation of that legal framework was itself a technological innovation, in the sense that it separated the liability of individuals from their companies, and it probably led to a lot more risky endeavors.
00:37:51
Speaker
So far, it's been a good thing. Without that, we wouldn't have the railroads and things like that. On the second point as to...
00:38:03
Speaker
sort of agency and responsibility. I think, yeah, I agree with everything you said. I think there is maybe something that just snuck in there, which is that it's very hard, perhaps, to determine when those reflective capabilities are present in something or not. I mean, famously, skeptics have for a few centuries
00:38:27
Speaker
questioned whether it's present in other humans, right? We see all the outward signs of it, but seeing the inward reflection is just not...
00:38:41
Speaker
Well, with humans it's not possible. Interestingly, there might be a way that we can kind of actually measure it in these AIs better than we could in humans. I mean, we know whether there are loops, or whether the neural networks just do a single kind of pass, as it were, through the circuit. They don't cogitate, I suppose one might say.
00:39:08
Speaker
Nonetheless, I think there are questions about how we know that some system has attained a certain level of cogitation, of cognitive sophistication. Absolutely, there will be questions. And I think it's quite an emergent proxy as well. I would say that there are these kind of feedback
00:39:29
Speaker
there are self-reflective behaviours or capabilities in lots of things. As you mentioned, you know, bacteria have some form of agency. I agree with that, I think. And I think it parallels the questions around consciousness; I don't think we can quite separate them out. You mentioned that some people would say, well, sentience
00:39:53
Speaker
can be separated out from this. I think certainly I wouldn't be able to cleanly divide between consciousness at least and the ability to self-reflect, but I don't think that self-reflection is an on-off switch either. I don't think there's a single point in human evolution where I was like, what's going on? We're conscious now, right? I really like
00:40:18
Speaker
Douglas Hofstadter's kind of thinking on this, and it's no accident that his book, Gödel, Escher, Bach, is, you know, a thousand pages long, full of lots of different examples. I haven't kept up with this part of the consciousness debate, but
00:40:36
Speaker
there are people who take sides on the question of whether consciousness is something that fades in or whether it comes on in an instant. And the people on the side that think it's more like an on-off switch would say: what can it possibly mean to be sort of half aware? How can you sort of half have
00:41:01
Speaker
something it's like to be? Yeah. I mean, you sort of have a touchy-feely sense. Like the minute you can feel at all, that's it, you're in it. It's like you can't be half pregnant. Yeah. You're either pregnant or you're not. I haven't invested much time in it at all, but I've heard people argue quite passionately for both sides of that. Yeah. Yeah. I think I need to explore the other side. That's always a good approach. So we've talked a bit about responsibility and it's
00:41:32
Speaker
I would say, again, we see this pattern of AI being, in some ways, not a new thing.

Legal Principles Addressing AI

00:41:38
Speaker
It's maybe possibly going to stretch the capabilities that we already have in place, or the systems that we already have in place. But yeah, at least in that case, I don't think it's going to break them. Just to give you one little thing about that: at the end of the 19th century, the question arose, what if my sheep
00:42:01
Speaker
walk across the boundary between our two properties and start eating your grass. And the court has just said it's the responsibility of the person that owns the sheep. Even though the sheep have their own minds, it's just the landowner who owns the sheep that's gonna be blamed for it. So, I mean, even if you get AI being extremely sophisticated, short of actually being a capable defendant in its own right,
00:42:32
Speaker
And one response is just to go back to that principle and just say, well, you made it. So if it does, if it has a mind of its own, that's neither here nor there. I think, you know, it's important that we don't...
00:42:47
Speaker
It does seem that there are some hang-ups about responsibility that are maybe slowing down progress in some places. The thing that's coming to my mind right now is self-driving cars. We have about 1.3 million road deaths per year now. Your book said 1.2, but it's probably gone up since then, because I looked this morning.
00:43:05
Speaker
So 1.3 million road deaths per year; it's the eighth largest cause of death, and I'd be interested to know if anyone's looked at quality-adjusted life years lost, it's probably even higher there. It's a terrible thing, and we may not be
00:43:25
Speaker
at the level of self-driving capabilities that we need to be yet to solve that fully. But we could possibly take a pretty big chunk out of those numbers even where we are now. With the technology. Yeah. But what I want to say is that many people are concerned that if we were to jump to having complete deployment of self-driving technologies and then
00:43:50
Speaker
we would have issues with responsibility. But in your book, again, you seem to have a pretty simple solution for that, right? And if I get you right, you think that just having no-fault compensation schemes would work. So you just need insurance, right? So that if an accident is caused,
00:44:12
Speaker
Like however it happens, like you don't blame the driver, you don't take the AI to court, it's just forget about all that, just compensate, right? I mean, that seems to me like a good way of making progress. Yeah, it's a system they have in New Zealand for personal injury. Yeah, and it works pretty well. I mean, I lived in New Zealand for two years and
00:44:35
Speaker
I actually made a claim on the scheme once for an injury that I had and it seemed to work pretty fine. I didn't have to blame anyone. I'm not well versed in the accident compensation scheme in New Zealand, but I'm sure there are people that would be able to say there's room for improvement. But it seems like
00:44:57
Speaker
a sane, rational way forward. Yeah. And even if it's not perfect, if what we're gaining from it is, let's say, an extra million or a reduction of a million lives lost a year. So, you know, that's a million extra years of human life per year. I mean, that seems like a good trade-off. Yeah, so

Fairness Challenges in AI

00:45:22
Speaker
I wanted to
00:45:25
Speaker
I want to talk about fairness now, which again, I think is something where it's not so much that AI is changing the game entirely, but it's perhaps throwing a light on issues that we already had, maybe possibly exacerbating, maybe not. You have a really nice sort of breakdown of the different dimensions of fairness. So perhaps you could take us through those.
00:45:51
Speaker
So one of the things that I like about working in AI and AI ethics is that it tends to throw new light on old problems. So 20 years ago, if you had asked me what it means to be fair, we could have had a conversation about how fairness might mean a lot of different things depending on who you are.
00:46:15
Speaker
I think maybe a better answer even back then would have been fairness in general means something like equal treatment. So that if, for example, someone gets a parking ticket
00:46:33
Speaker
and the car in front of it, who is also illegally parked, doesn't get a parking ticket, that's unfair. So it's, fairness has something to do with that. It's different from justice. Justice is about, in general, everyone getting their due.
00:46:49
Speaker
And that's a notion that goes back to Thomas Aquinas, everyone getting what they are owed, where fairness is more about equality in that system of giving people what they are owed. What strikes me here, actually, already with this very intuitive definition, is that there is a tension, in that you want to treat people equally
00:47:11
Speaker
given the same set of, you know, background. Yes. Given that they both parked illegally. Yes. But not given that, you know, one is black and one is white, right? That may not be something that you want to include in that kind of background. Right. Like, what are the... Yeah, it's the equal treatment of equals and the unequal treatment of unequals. And
00:47:40
Speaker
unfairness would be the equal treatment of unequals and the unequal treatment of equals. So if you've got two people that are, for all relevant purposes, the same, and yet one gets the parking ticket and the other one doesn't, that seems to be unfair. Yeah, so that's how I would have had that conversation 20 years ago, right? AI doesn't change that so much, but it
00:48:08
Speaker
puts a slightly different spin on it. So 20 years ago, maybe people would have, in a class, in a seminar, would have come up with different definitions of fairness. They might have all agreed with the general definition about equal treatment, just as you did. But then, just as you did, they might have gotten a little bit more in the weeds and sort of tried to mark out what it means in specific cases.
00:48:35
Speaker
So with AI, that has actually happened. You've got these attempts to operationalize fairness for different AI systems. And so one way of operationalizing fairness would be, for example, to ensure that whatever the demographic that the algorithm is applied to, whatever demographic it's deployed on,
00:48:59
Speaker
whether that's the African-American community or the Hispanic community, the algorithm should work so that the same rates of false positives and false negatives, roughly speaking, are present across these demographics. That would be fair by one operationalization of fairness. Another way of operationalizing it would be to say that the same
00:49:31
Speaker
The same, let's say, score that the algorithm gives you should, it should mean the same thing regardless of the demographic that it's applied to. So 7 out of 10 should basically pick up roughly the same proportion of people, whatever the demographic, so that it holds the same way. If 7 out of 10
00:49:55
Speaker
means that you get way more people being picked out for getting a loan, for example, than in another community, another demographic, where the members of that other demographic also get a 7 out of 10, then that makes that algorithm unfair. So that's another way of operationalizing. There's a whole bundle of them, right? And it happens that
00:50:22
Speaker
using pretty simple algebra, you can show that so long as these different demographics have different base rates... so, for example, we're talking about loans, okay, so let's talk about that.
00:50:38
Speaker
So long as the Hispanic community overall has a different rate of defaulting on its loans when compared with the African-American community, then you are not going to be able to get an algorithm that satisfies all of the different operationalizations of fairness. It's going to fall foul of most of them.
00:51:03
Speaker
even as it satisfies one. And if you want to prioritize the other one, then you're going to have to sacrifice the original definition. So the light that that shows, the light that that throws on age old questions of fairness is we kind of already knew that different people would have different intuitions about fairness. What we didn't know is that they're actually irreconcilable.
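To put some toy numbers on the 'simple algebra' mentioned here, the sketch below is purely illustrative, with invented figures: once the base rates differ, holding the calibration-style quantities fixed forces the error rates apart, so not every operationalization of fairness can be satisfied at once.

```python
# Toy illustration (invented numbers) of the incompatibility described above.
# From the confusion-matrix definitions one can derive the identity
#   FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)
# where p is the base rate, PPV the positive predictive value, and FNR the
# false-negative rate. Fix PPV and FNR across groups with different base
# rates, and their false-positive rates cannot come out equal.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

base_rates = {"group A": 0.30, "group B": 0.10}  # e.g. different rates of defaulting on a loan
ppv, fnr = 0.7, 0.2                              # held equal across both groups

for group, p in base_rates.items():
    print(group, "false positive rate:", round(implied_fpr(p, ppv, fnr), 3))
# -> roughly 0.147 vs 0.038: error-rate parity fails even though the score
#    'means the same thing' in both groups. Equalising the error rates
#    instead would push the predictive values apart.
```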
00:51:31
Speaker
We didn't know that. There was an old debate, that goes back to the fox and the hedgehog, about whether values are fundamentally coherent, whether they can cohere in an overall system, or whether some values just conflict with others and are irreconcilable. There was that debate, but this has made it
00:51:55
Speaker
clear that, to a certain extent, some values are irreconcilable. Yeah. Yeah. I think it's worth just looping back over this. You said it very clearly, but it's such a striking and important result, which is: it's nothing to do with AI. AI has not introduced the problems here. It's just fundamental difficulties with fairness. And we have this choice between
00:52:17
Speaker
you know, what's called classification error parity, if you want to give it a fancy name. So, for example, in the loans example, you'd want to say, okay, that the number of people
00:52:34
Speaker
not granted loans, who should actually have been granted loans... actually, it might be easier to talk in terms of another example from the book, the recidivism one, because it's kind of hard to say that someone wouldn't have defaulted on a loan when they weren't given it, right? So the recidivism one is about calculating the likelihood that, say, a prison inmate commits
00:53:02
Speaker
another offense if they're released on parole. And a false positive would be something like: it says that they will commit another offense, and it so happens that they are released and they don't. And a false negative would be: it says that they're not going to commit another offense, they're released, and they do. And you want the rates of those errors to be the same for different groups of people. You know, you say, okay, well,
00:53:27
Speaker
I don't want to tread over-cautiously for the African American community, so that I have very few false positives and false negatives, and on the other hand do something different for Hispanics. That's fair, right? That seems fair. Good, so we want that. We also want to say, oh, but if I say that there's a 0.7,
00:53:52
Speaker
or a 70% chance, of someone going on to commit an offense again, that means the same for everyone. And that, again, to give it its fancy name, is the calibration requirement. Again, that seems fair. That seems more than fair. That seems completely rational, right? If it doesn't mean that, then what does your number even mean?
00:54:14
Speaker
But you can't have your cake and eat it, as it were, right? That's just provable. And so, yeah, in some ways, AI is doing us a service here, in that we're formalizing the difficulties that we had in our kind of folk notions of fairness. Yeah. I'll say there's something really interesting about that. So you just made the point that that second measure of fairness,
00:54:41
Speaker
calibration, seems, you said, more than fair. It just seems rational or something like that, right? Well, here's another really interesting result. A philosopher at the Australian National University, Brian Hedden, has a fantastic paper where he shows that actually, if you take all of these different measures of fairness, and you test them all
00:55:05
Speaker
against a procedure which everyone intuitively would agree is fair, then only one of them yields the result that the procedure is fair. Let me explain what I mean. So let's just say there are 10 different measures of fairness.
00:55:34
Speaker
Then you've got this separate little thought experiment.
00:55:40
Speaker
where you, I don't know, you, I can't quite recall the details of it. It's been some time since I read it. But there's like a way of allocating, say some particular resource where everyone would agree that that is fair. It's purely random. It's almost like throwing dice. And you just, no one's gonna dispute that it was fair. If the dice come up one way, then that's the result. If the dice come up another way, that's the result.
00:56:22
Speaker
It was published in a journal called Philosophy and Public Affairs. So anyway, everyone, you kind of have to agree from the get-go that this dice-throwing procedure is fair. And what Brian does is he says, OK, now that we all agree that this dice
00:56:34
Speaker
I'm abstracting away a lot of the detail from the paper; people should go and read the paper by Brian Hedden.
00:56:42
Speaker
thing here is fair, which of all these definitions of fairness would be met? So he applies the first measure of fairness, say it's parity. Well, it turns out that, applied to that dice-throwing experiment, it would say that that's unfair.
00:57:05
Speaker
And then you just work through them. And all of them say that the dice throwing experiment is unfair by the light of those particular measures. Except when you get to calibration, when you get to calibration, the one that you said.
00:57:21
Speaker
seems more than fair. That yields the result that the dice throwing game is fair. And so Brian concludes on that basis that the only measure of fairness we really should care about is calibration, because that's the only one which yields the result that this dice thought experiment comes out as fair, which we all antecedently agreed was fair. There's been some interesting commentary on
00:57:50
Speaker
on that paper, but I was extremely impressed by the elegance of the setup. It's a very smart way of thinking about things. It's interesting though, because I also feel like maybe it's starting with the assumption that our intuitions are correct, but sometimes we realise that the manifest image and the scientific image are in opposition. And so maybe there is an argument that actually
00:58:19
Speaker
Like, our intuitions are wrong, but we just need to reflect on that pretty strongly. I want to mention another notion of fairness, which I don't know if it's somehow encompassed in one of the others, but I think you call it anti-classification, which is just not using, for example, ethnicity in your models, or even proxies for it, as far as one can avoid that.
00:58:47
Speaker
And that and the other kinds of requirements can all be in conflict.
00:58:58
Speaker
Yeah, that can clearly conflict with predictive accuracy as well. How do you view, by the way, is calibration and predictive accuracy, are they kind of separate dimensions or are they almost like the same thing? I guess they are separate, right? Because one can have a well calibrated model, but because we leave out many things from the model, it's less predictive. Yeah, that's my understanding. Or less accurate. So not only do we have this conflict between the dimensions of fairness, we also have a conflict between fairness and
00:59:29
Speaker
the quality of the predictions because, you know, there are cases where ethnicity, maybe not so much itself, but as a proxy for many other things, for example poverty and so forth, can be very predictive of certain things.
00:59:50
Speaker
But nonetheless, I think that's probably more understood, the conflict between predictive accuracy and fairness.
01:00:03
Speaker
So there's conflicts between different measures of fairness. Yeah, and conflicts between any one of those measures of fairness and accuracy. Yes. But again, this is not... don't blame the AI, right? Just, I don't know, just blame the mathematics of life. The mathematics of life, and whatever it is in our society that means that you have different base rates for these different
01:00:32
Speaker
phenomena, whether it's recidivism risk or whether it's defaulting on a loan. Why is it that different demographics are like that? And you can probably trace a lot of it, maybe not all of it, but a lot of it, down to injustice that has been filtered through intergenerationally, from colonialism, original dispossession, you know, acts of violence against native populations. And then that sort of filters through to the opportunities the next generation have.
01:01:02
Speaker
And then the new generation grows up sort of inheriting the emotional trauma of the older generation. I mean, these are very complex questions. They are. Yeah, we don't want to go down that too much. You know, is there an opportunity that AI can help us address some of these things? One of the questions at the back of my mind often when I think about this is,
01:01:32
Speaker
you know, broadly speaking, do we try to do something like the French do, where, you know, ethnicity is factored out of
01:01:40
Speaker
They try to factor ethnicity out of everything. A simple example: if you're filling in a library application here, you have a box; you can say prefer not to say, but you can tick your ethnicity if you like. That doesn't happen in France, right? You're not allowed to collect statistics on people's ethnicities.
01:02:04
Speaker
The French would say, well, you know, this is what it means to treat everyone equally and to be fair. But of course, you then can't track things like, you know, our public services reaching the people that they need to reach in disadvantaged communities. Yeah, and so forth. Yeah. So again, I think, you know, AI sort of supercharges these issues, because
01:02:33
Speaker
we can either choose to put this possibly sensitive data into our models and maybe use those to correct for some of these things. Or we can say, no, we just don't want any place for these in the society that we build going forward. But then we might make it harder to correct for the errors of the past. So I just laid down a lot on the plate there.
01:03:00
Speaker
It reminds me that even with the criminal recidivism example, if you did take away factors like gender,
01:03:11
Speaker
you really are going to degrade the accuracy of the algorithm, just because men are much more likely to re-offend than women. So if we take gender out completely, you can see that that would be tremendously unfair on women, who get basically treated the same way as men. As they are for insurance purposes here. Yes, yes, yes.
01:03:34
Speaker
So that's a case of the equal treatment of unequals, which gives you an unjust result, an unfair result. Yeah. Yeah. So yeah, where do we go from here? We have, we've seen, let's put it like this. So what are the opportunities then for AI to
01:04:03
Speaker
to do better than humans at some of these things. Maybe with recidivism or something like that: previously, we've always had to take these decisions. We've always had to estimate the likelihood of people defaulting on loans or of recommitting offenses.
01:04:23
Speaker
We're now very worried, or many people are very worried, because we want to bring algorithms into this. But isn't there just a simple response, to say, well, algorithms can do this better? We can formalize our requirements, or our preferences, and they can potentially explain themselves better as well.
01:04:54
Speaker
So I would say we should think of AI in the same way that the telescope was thought of in the 17th century.

AI in Drug Discovery

01:05:06
Speaker
It's naturally augmenting abilities that we already have. So with my naked eye, I can't make out the surface of the moon, but with the device, with these two lenses set a certain distance apart,
01:05:19
Speaker
you can suddenly see the surface of the moon in astonishing detail. So Daniel Dennett talks about cognitive prosthetics, things that you can use to enhance your ability, your reach, your purchase on the world. And, I mean, people didn't use telescopes to hit one another over the head.
01:05:48
Speaker
With AI, it seems we can do the stuff that gives us greater purchase on the world, but we can also hit one another over the head. We can also do harm. So sometimes in my more cynical or bleaker moments, I'll be apt to think the internet was a mistake. And then somebody replied to me recently and said, so was fire, so was the wheel.
01:06:17
Speaker
which just goes to show that with every great Promethean discovery come both great opportunities and devastating challenges. And I think it was, was it Oppenheimer? It might have been Oppenheimer, in fact, who quoted the Bhagavad Gita and said, I am become death, the destroyer of worlds, when he saw the
01:06:47
Speaker
power that had been unleashed with the atomic bomb. So with AI, I see the greatest potential in drug discovery. Because, you know, all life as we know it is based on DNA base pairs, and because these nucleotides, four little things, combine and recombine endlessly,
01:07:13
Speaker
this is discrete mathematics on steroids, and there is the opportunity to discover proteins and create therapeutic interventions that the human mind, unassisted, would just never think to come up with. Drug discovery, for me, is the huge boon, the huge potential of AI. And obviously also
01:07:42
Speaker
accelerating discoveries that will help us wean ourselves off fossil fuels.
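A rough back-of-the-envelope calculation shows why the combinatorics here is so daunting, and why learned models are used to guide the search rather than enumerating it. The sequence lengths below are just illustrative choices, not figures from the conversation.

```python
# Rough combinatorics behind the "discrete mathematics on steroids" remark:
# sequence spaces grow exponentially, so exhaustive search is hopeless.
from math import log10

dna_length = 150       # illustrative stretch of DNA, 4 possible bases per position
protein_length = 100   # illustrative small protein, 20 standard amino acids per residue

dna_space = 4 ** dna_length
protein_space = 20 ** protein_length

print(f"DNA sequences of length {dna_length}: about 10^{log10(dna_space):.0f}")
print(f"Protein sequences of length {protein_length}: about 10^{log10(protein_space):.0f}")
```

Even these modest lengths give on the order of 10^90 and 10^130 possibilities, which is the scale of search space that AI-assisted drug discovery is up against.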

AI: A Dual-Edged Tool

01:07:50
Speaker
As for the dangers, well, I mentioned one before, which is cybersecurity threats, which are connected with AI. But there's also just the fact that we are now such connected
01:08:12
Speaker
beings. Thinking of all the passwords that you have to keep track of in order to shield yourself from vulnerability, sometimes it's overwhelming, just the extent to which we are dependent on the technology. If something were to happen to those systems, if the internet really ever did crash, I think it would just be
01:08:42
Speaker
horrific. So, I mean, that's a danger. It's not often called an AI danger, and it's not an AI existential risk, but it's at least as much something to worry about as any of the fantasy scenarios that have been sketched for us. So it's fire and the wheel again. We're back.
01:09:10
Speaker
It's another Promethean moment. Yeah, it's very hard, I think, to weigh up the risks versus the benefits.
01:09:21
Speaker
And that's possibly the case with many inventions, nuclear energy as well. We know there is some small possibility that it destroys everything, but it's also one of the possible answers to the climate crisis. And it seems like we're kind of taking a bet, flipping a coin. Every time it lands the right side up, everything gets better. But at some point, maybe
01:09:51
Speaker
we end up with it going the wrong way.
01:09:55
Speaker
And it's hard to calculate your expected returns in that scenario. You might say, okay, well, take the probability that it goes wrong and assign that outcome zero utility. But maybe it's infinitely negative, right? If we destroy the whole world, we get into mathematical problems with infinities. But, I mean, coming back to the more
01:10:24
Speaker
immediate problems, I guess. Yeah, certainly drug discovery, that's a big benefit. It was interesting to see recently, though, that people ran that backwards: oh, just design for me the worst viruses that you could. So a lot hinges, again, on these cognitive prosthetics which magnify the capabilities of humankind. And if one is an optimist, or an
01:10:54
Speaker
Aristotelian, and just believes that everyone's going to be good, then this is probably going to end well. On the other hand, the pessimist might say, well, actually, it only takes a few folks to go the other way, and this could end very badly. So yeah, I struggle with the
01:11:17
Speaker
mathematics here of how we balance these. I mean, if you think about it, in a country like the US, everyone can have a gun in their back pocket, in theory. And yet that society has obviously had to confront
01:11:43
Speaker
its gun laws in very tragic ways multiple times in the past. But it's not like everyone has a biohazard in their back pocket. So the worry with AI is whether
01:12:02
Speaker
everyone will have a biohazard in their back pocket, because you might have enough garage hobbyists who can fine-tune an LLM to the point where it can do real damage to systems, to hospitals. It can breach, say, security in court data systems. It can
01:12:29
Speaker
erase records about who's committed crimes, I mean, all sorts of things, which is like everyone, or enough people, having a biohazard in their back pocket. That's the worry. But other countries outside of the US regulate guns so that not everyone can have one. The question is whether we can do something similar for biohazards, or for AI. Can we stop
01:12:58
Speaker
the people in the garage, the bad actors in the garage, using their fine-tuned LLMs to wreak havoc? Even if it's things like, on a mass scale, ringing people up with a deep-faked voice, pretending to be a loved one. It doesn't have to work every time; it can do damage even if it works one per cent of the time. So that is the question: whether there's an equivalent of
01:13:25
Speaker
a law against guns, or a law that regulates guns, in this area. It's taking shape. We don't know if it is a gun yet. We don't know what the shape of it is; it's
01:13:46
Speaker
as many different things, as many specialised systems. It's got to be more nuanced than that. But certainly the deepfake example is one where many people have been calling for regulation. Well, there are very few legitimate use cases for deepfakes. I actually would quite like a personal deepfake so I could sometimes sit in meetings and look thoughtful and make good comments. But it might do a better job than me. Also dubbing in films, when the lips are all out of sync with the dubbing. Yeah.
01:14:16
Speaker
But there are a lot of cases that we don't think are legitimate here. Let's presume that laws are made that prevent the misuse of this. Do we think that they could be effective, or is this just the sort of technology which is very, very hard, once it's out of the box, to put back in?
01:14:44
Speaker
My sense is that it's too early to say. I think it's just too early to say. You can write a law that, like the EU's law, will purport to cover a certain range of activities and regulate them;
01:15:02
Speaker
whether it works or not, we have to wait and see. I mean, that is the closest thing to a randomized controlled trial we're going to have: how effective are the EU laws going to be over the next, say, five years? Yeah, I think that'll be interesting. One of the grounds for hopefulness here, presuming that one does want the laws to work, and maybe not everyone does, is that at the moment there are
01:15:29
Speaker
a few companies which have built very, very powerful models, and those are the ones attracting all the attention. It's not like I can have a huge LLM on my machine at the moment, but there is a parallel.
01:15:45
Speaker
I've heard the worry that if we do regulate this, it may have the kind of paradoxical result where people get much better at being computationally efficient. The regulation would stop you going to Amazon and buying a million hours of compute time without some scrutiny over what you're doing with it. The regulation would stop you accessing OpenAI's
01:16:10
Speaker
API for deepfakes unless you've got some kind of approval for that. But the regulation would be ineffective against you saying, I'm just going to build this myself, I'm going to use a small server farm. Like, necessity is the mother of invention. Exactly. So the regulation changes the necessity. Yeah.
01:16:32
Speaker
Again, we're probably not qualified to answer this, but we're certainly qualified to speculate. Yeah, I don't know. Is that something that we should worry about? If any of the Silicon Valley people make this case for why we shouldn't have regulation, that would be very interesting, because then that's basically saying the regulation will
01:17:04
Speaker
That's associated with the techno-libertarian, utopian, Silicon Valley type. Look, I think it is a possibility that some regulations will, as I say, create new incentive structures and necessities, around which we then devise ingenious means to circumvent them, which may end up defeating the purpose of the regulation, yes.
01:17:31
Speaker
I mean, again, how far that will happen, we need a randomized controlled trial. I don't know how true that will be. We can apply the same reasoning to guns and say, yeah, but when people aren't allowed to get guns readily and en masse, won't they just devise ingenious little slingshot things to put in their back pockets that
01:17:55
Speaker
don't look like guns, and then they'll use those, and then that will defeat the purpose of the regulation? Well, that hasn't happened. Yes, it can happen, but it just hasn't, and we don't know whether it'll happen here. Let's get even more speculative. I want to go down this route of cognitive prosthetics.
01:18:24
Speaker
Is there a kind of very different sort of danger here, where we just lose our sense of what it is to be, to have agency, to think, as we outsource things? We've already outsourced

AI and Human Cognition

01:18:43
Speaker
so much of our memory. I always like to think that this started not just before books, but with language itself as a way of
01:18:54
Speaker
you know, putting labels on things. I think Krishnamurti said that the moment you tell a child the name of a bird, it no longer sees the bird. And then, of course, there are memory techniques where we can memorise large sequences of text. But somehow in that kind of formalisation of
01:19:18
Speaker
structure, we're sort of suppressing the kind of haphazardness, and then perhaps even the creativity, that we associate with our normal memories; and then books take this a step further. But now we're in a position where not only is memory potentially being outsourced, but so much of our reasoning will be outsourced too, as we've talked about.
01:19:40
Speaker
And that's not just with AI. I mean, if we look at actuarial tables and things like that, there are clearly benefits to making insurance decisions on lots of data. And we can't really say that the decision is being taken by a human anymore; they're kind of orchestrating lots of ledgers and the movement of ink across paper and so forth.
01:20:03
Speaker
But now with AI, we come to a point where, again, this trend can be continued to another level, where it may possibly put us in not just a quantitatively different state, but a qualitatively different one. This is probably the apex of our speculations.
01:20:29
Speaker
Is that something that justifies worrying about a future in which humanity continues, with no existential threats, but is completely enfeebled? I'm not sure if you know that Socrates had this worry about writing. Oh, right. Yeah. So Socrates was worried that if we started writing,
01:20:56
Speaker
it would corrupt our memories, because we'd then be pouring our minds onto the paper, so to speak, and then our minds would be empty and we wouldn't be good at memorising stories and epics and all sorts of things.
01:21:15
Speaker
And maybe he was right. Maybe he was right in a sense: we're not memorizing stories. Maybe he's right that we don't know what we've lost, because we're not able to do those things. Yes, that's true. But I would reckon that
01:21:32
Speaker
we have, to use Richard Dawkins' expression, an extended phenotype. It's like the spider's web, right? Our extended phenotype extends to libraries and archives, and now digital databases and calculators and all sorts of instruments. That's all part of our extended phenotype.
01:22:03
Speaker
I don't have a problem, myself, thinking of the human as being continuous with their technology.
01:22:14
Speaker
So I don't think there is a worry here, because whatever we don't have to spend our time doing anymore will presumably be things that we don't really want to do. We won't stop doing the things we want to do; it'll only be the things we don't want to do. So that'll free us up to do other things. And I don't think anyone would say, no, we really should do things that we don't want to do.
01:22:43
Speaker
And up to a point, we kind of have to, just to be responsible beings in the world. But I don't think anyone will say that, given the opportunity to spend more time doing what we really want to do and what we really find meaning in, we shouldn't take that opportunity. So I worry less about that. There is a concern, though, a real concern, about
01:23:03
Speaker
whether our skills will degrade as we no longer do mental arithmetic, for example. That horse has well and truly bolted. But are we going to lose the skill to interact as human beings? Is our pro-sociality
01:23:24
Speaker
at risk as well? Are we going to be alone in our rooms, dealing with our screens, talking to people virtually? I mean, these are worthwhile questions, but the pandemic does give us a sense that most people were not content with living on screens, and that personal contact had priority over the virtual.
01:23:52
Speaker
In other domains, maybe we'll lose our skills, our manual control skills. Yeah, but I reckon people will still want to learn to play the piano and the violin, and they will still want to act together and join choirs and join sports teams. And the skills that we do lose, because we don't have to exercise them anymore,
01:24:20
Speaker
Is it really that important to us to have those skills? Is it important to know how to operate a forklift? I mean, now it might be, if that's your line of work, but if you could do something else, and if that job gets automated, provided you can find something else easily enough, would you lament the loss of being able to use that? I mean, do the people that used to produce horseshoes, farriers,
01:24:51
Speaker
You still have farriers, because horses still need shoes, but we don't use horses for getting from A to B anymore. So is that a skill that we've lost? Yes, we have; there's a lot less of that skill now. Is that a problem? It's not for me, but then I don't come from an equestrian family. Maybe equestrian families would lament that more.
01:25:17
Speaker
So it's not a question that you can really answer in a straightforward way. Yeah. But it's one of those questions which does lead you to think about many different things. Certainly, as our phenotype has extended, as it were, we've both lost and gained things. We've lost this ability to memorize large portions of text and so forth. Some people still do that.
01:25:47
Speaker
I'd like to get Ed Cooke on my podcast; he's a memory champion. And yeah, he spends his time, or some of his time, memorizing great pieces of text. But by and large we've lost those skills, though with the benefit of being able to share huge bodies of text through writing. And I can access Socrates' ideas now, through Plato's writing, even though he's no longer with us. I mean, what a marvelous thing.
01:26:16
Speaker
Yeah, and we wouldn't have had him around to tell us to worry about writing, or, in this day and age,
01:26:25
Speaker
to worry about all the things that computers will take from us. Yeah. I think there is the potential for a huge flourishing of creativity. I had a conversation with Christian Bök, in one of the early episodes of this podcast, and that was sort of the point we ended up on. He's a poet, and in writing poetry,
01:26:53
Speaker
there is so much creativity that can be unlocked through LLMs. They can teach you about metre, they can teach you about rhyme, or they can teach you what bad poetry is, because they tend to write very direct things. It challenges you to be more creative. And if you want to learn guitar, or learn how to speak a new language, I think now is
01:27:18
Speaker
a wonderful moment: people can have a kind of personalized tutor for these things. I mean, just brilliant. There's still, at the back of my mind, this worry that the kind of value in some of those tasks might be undermined if
01:27:36
Speaker
we get to an AGI world where everything is just done better. I've got two weird examples here. One was just on my way here: I was using Google Maps, and I used that time, instead of thinking about how to get here, to reflect on the fact that I was using Google Maps, and that I could use my time to think about other things.
01:27:59
Speaker
But I was motivated to do that because I have this podcast, and I think, oh, I can probably do this a bit better than a machine.
01:28:06
Speaker
The other example that comes to mind is, in American teen movies, one of the tropes is that some new kid arrives at the school and, say, the local jock gets taken down a peg and becomes a nerd instead, right? His identity, his sense of purpose, is kind of moved from one sphere to another. But what if that
01:28:33
Speaker
new kid was also a great nerd as well, and was just better at everything? There is that worry. I mean, I think this is one to put in the very speculative box. The scenario that you've sketched assumes that we derive value from things that we can be better at than other people. And maybe that is... I don't want to deny that, because I think we do
01:29:03
Speaker
sort of compare ourselves against one another. What strikes me is that Lee Sedol has given up playing Go, right? Yeah. But maybe he's a very competitive person. No doubt he is. So, as you say, that may not hold true more generally. And I can speak for myself: I used to learn piano in a sort of competitive way,
01:29:29
Speaker
to win competitions and to be better. But I have no concern with that anymore. So now if I play, I play purely for pleasure. And I just think there will be other things that the kid in the school could do that he'll get a kick out of without worrying if he's better at it than someone else.
01:29:57
Speaker
That's what I think about. I

Balancing AI's Opportunities and Risks

01:30:03
Speaker
asked a group of angel investors and entrepreneurs on WhatsApp, what should I ask an AI ethicist? And the first question that came back was,
01:30:19
Speaker
why are you guys always so down about the risks of AI and not talking about the benefits? But I think it's been really interesting, as I feel like you have a lot of balance in your thinking, and we've highlighted a lot of places where AI can liberate us and free up time. And part of that balance, I would hope, is also what I suggested at the beginning, which is that
01:30:48
Speaker
the fancy-schmancy AGI to come may not pose an existential risk, or at least not the one that's being thought of, whereas the simple, stupid AI might. That's another rebalancing I kind of hope to have conveyed. Yeah, and I think the other theme, to draw out some of the threads, is that these are,
01:31:16
Speaker
in some sense, just more of the same. It's an acceleration or augmentation of issues that we already have. And that's not too terrible, in that in many cases we already have some of the tools we need to make progress. So, yeah, I should say that we're in the David Hume Tower. Well, it's not called the David Hume Tower anymore; it's just 40 George Square.
01:31:45
Speaker
But I'm going to leave this room feeling slightly more optimistic than I came in, I think. Good. The thing that I don't quite understand, and I'm not sure how many people do, the thing that I wonder about with AI, actually does relate to general intelligence.
01:32:14
Speaker
And my question is whether it's not just the fact that we take in
01:32:27
Speaker
nutrients as a fuel, which we then convert to energy. It's that we do that in a certain way, a particular way: it's called metabolism. We take in nutrients which serve both to generate fuel, as a source of energy for ourselves, but also as a kind of lubricating oil. Vitamins and minerals are also important so that the joints work properly.
01:32:57
Speaker
Now, is metabolism, or is life, crucial
01:33:03
Speaker
to our intelligence in a way that's deeper than is generally appreciated. In other words, without something like the metabolism that we've got, will it ever be possible to get a general intelligence? That's what I wonder about. You can plug a system in or give it some means of generating energy and maybe even make it
01:33:29
Speaker
so that, like a thermostat, it kind of knows when its fuel is running out, so it can go and plug itself back into the socket and get another shot of electrons to energize itself. But that isn't a metabolism, because metabolism, at least in the way we have it as animals,
01:33:50
Speaker
is more than just getting energy. It's getting energy, but it's also getting a system of maintenance. And it's something about the way we assimilate nutrients: the thing that we take in, whether it's vegetable-based, mineral-based, or flesh-based, then becomes part of ourselves. The nutrients, the material, actually become part of our living tissue in a way that enables growth.
01:34:21
Speaker
Is that necessary for intelligence so that what we really should be looking at if we want artificial general intelligence is something like artificial life? That's another way of saying it. Can you really have AGI without artificial life?
01:34:40
Speaker
Do we need to recreate the functional dynamics of a corporeal system of the kind we've got, where we take in energy in this very particular way, where we extract energy, but also nutrients, in a way that makes them part of our living substance? Is that crucial to intelligence? That's what I don't know. I'm curious. I know that it's there with us, but I can't see why it would be necessary.
01:35:10
Speaker
Yeah. So can you have artificial intelligence without artificial life? I certainly buy that there's an embodiment that AI might need, to be able to explore, wander around, interact with things. But I don't know;
01:35:37
Speaker
there is this fragility to life, and also, you point to the way that it bodily, physically incorporates its surroundings: the surrounding fruit or something, I eat it, I literally incorporate it. I don't know.
01:35:58
Speaker
Yeah, I need to think about this more, like why that might be necessary to intelligence, or... Well, this is the thing. I don't know if it is either. This is what I wonder. So I don't know, but maybe, more than you, I entertain the possibility that it somehow could be. Yeah. But I'm not sure. Yeah. That's probably the point. This is a good point to end on, as
01:36:28
Speaker
it opens up a lot to think about. So, yeah. Well, John, really, thanks so much for inviting me here. And whatever this tower is called, we have a wonderful view over Edinburgh and the surrounding fields, and it's hard not to feel that,
01:36:49
Speaker
hopefully, we can make this all work. Yeah. Actually, the view that you can see on your right would have looked very similar to what David Hume would have seen, except that, instead of being surrounded by the New Town and all these other buildings, it was just basically farmland. But if you see pictures of Edinburgh around the time David Hume was around, it was not all that different from an aerial perspective.
01:37:18
Speaker
Well, let's hope that in another 300 years or so, people can still enjoy the same view.
01:37:37
Speaker
Thank you so much for listening. Multiverses is taking a short break of a couple of weeks, mainly because I will be on holiday. So the next episode should be in about three weeks' time, unless I pull something out of the hat. We do have some wonderful episodes in the diary to record. Highlights include Simon Critchley, a philosopher who's written books on suicide, on Bowie, on football, everything under the sun. Also Patricio Ferrari, a translator of Fernando Pessoa,
01:38:06
Speaker
and himself a poet and an incredible polyglot as well. And we also have Peter Schwartz, author of The Art of the Long View and one of the directors of the Long Now Foundation, so he thinks about long-termism. So if you have any questions for these folks, or just comments on the podcast, people you'd like to see on it, or ways that you'd like to see it improve or change, feel free to email me at james at multiverses.xyz
01:38:35
Speaker
and don't forget to subscribe and tell your friends and all that jazz. And with that, cheers, you'll find me on the beach.