
AI Moonshot — Nell Watson on the Near & Not So Near Future of Intelligence

S1 E32 · MULTIVERSES

The launch of ChatGPT was a "Sputnik moment". In making tangible decades of progress, it shot AI to the fore of public consciousness. This attention is accelerating AI development as dollars are poured into scaling models.

 What is the next stage in this journey? And where is the destination?   

My guest this week, Nell Watson, offers a broad perspective on the possible trajectories. She sits on several IEEE groups looking at AI ethics, safety & transparency, has founded AI companies, and is a consultant to Apple on philosophical matters. Nell makes a compelling case that we can expect to see agentic AI adopted widely soon. We might even see whole AI corporations. In the context of these possible developments, she reasons that concerns of AI ethics and safety — so often siloed within different communities — should be understood as continuous.

 Along the way we talk about the perils of hamburgers and the good things that could come from networking our minds.  

Transcript

Impact of ChatGPT: A New Sputnik Moment?

00:00:00
Speaker
I'm James Robinson, and this is Multiverses. When Sputnik, the first artificial satellite, was put into space, it didn't just illustrate how far science and technology had come; it foreshadowed how much further they could go, and opened a whole new era of competition and ambition. And I feel like we're living through a similar moment right now. The launch of ChatGPT was like the first line of a book that completely grabs your attention and draws you in, but leads you to thinking: well, what's next? What's the next chapter? Our guest this week is Nell Watson. She has the wonderful title of Executive Consultant on Philosophical Matters to Apple. And she's also heavily involved in standards development at the IEEE, where she works on standards around AI.
00:00:52
Speaker
Nell has employed this metaphor, or conceit, of ChatGPT's Sputnik moment to really good effect in her wonderful book Taming the Machine: How to Ethically Harness AI.

Agentic AI: Autonomy and Control

00:01:07
Speaker
And it's with this metaphor that we begin our conversation this week, and we trace it through. We're trying to think: well, what is the next step on this journey? And where is the equivalent of the moon in this story of AI? Nell makes a really good case that what's coming next is agentic AI, and that all the ingredients are there for us to hand over more and more agency to AIs.
00:01:33
Speaker
We can already plug them into our email inboxes and outboxes. We can plug them into all sorts of APIs and tools, into our phones, and get them to do stuff for us, to plan things for us, and to cede some of our control to them in a way that isn't quite happening yet, I would say, very broadly. So I think that's very plausible. From there, we get further and further into the future and end up, I think, in a very speculative vision of where we might be. One thing I will say is that I'm sure there are going to be surprises along the way. Just as with the Space Race, no one would have thought it would result in GPS or Teflon or memory foam pillows; there are going to be lots of intriguing byproducts of this AI revolution, this AI Space Race, if you like.
00:02:25
Speaker
But what I really do hope is that we manage to avoid many of the pitfalls along the way, right? We're all in this rocket together. It's not just a few astronauts out there; it's the whole planet that is on this journey. So there's lots to worry about, but lots to be hopeful for. I hope you enjoy this conversation as much as I did.
00:03:01
Speaker
Nell Watson, thank you for joining me on Multiverses. Thank you, James. It's a great pleasure to be here. I've really enjoyed your book, Taming the Machine. I think it's actually probably one of the most broad, or no, I'm just going to say it: it is the most broad treatment of this subject that I've read. And actually, to give people a flavor very early in the conversation of all the things that might be covered in this podcast, though we probably won't have time to go into all of them: just your glossary, right? It defines GDPR, erotic role-play, steganography, Taylorism, GANs, the Moloch. So all of these disparate things are united in this one topic. Hopefully we'll have a chance to touch on many of them, but it's just such a big subject. So if there's one message I'd like to leave listeners with, it's that if they want a really broad introduction, I really recommend your book.
00:03:57
Speaker
And with that said, yeah, I'm so pleased to be talking with you at this moment, as someone who has that very broad overview, because I think this is quite a historic point in time. And one of the analogies you use in your book is the Sputnik moment, when ChatGPT was released.

AI's Evolution: Space Race and Generative Growth

00:04:20
Speaker
And I think that really hammers home the point that something is afoot. Yeah, take us through your thinking on that analogy, and maybe places we could continue the conceit, if you like. Yeah.
00:04:38
Speaker
The realm of generative AI had been bubbling for a long time, at least about 10 years or so, gradually picking up steam. And then there was, of course, the moment where it was thrust into public consciousness: the release of ChatGPT. That's when people woke up to the power of generative AI, which had been building in the background for quite some time. I was surprised, in fact, that the big tech companies had been largely ignoring the development of generative AI, and I had been advocating that they look into it, because it presents potentially an existential risk for many of their business models and activities. And yet we are now on the verge of another phase shift, building upon generative AI into agentic AI systems.
00:05:38
Speaker
Agenticness is the ability to understand complex situations and to create sophisticated plans of action in response to them. In essence, agentic AI systems can be constructed from a generative AI model using something called a programmatic scaffold, which is where you basically give it little side programs that help to make its thinking more coherent. That helps AI systems check their work, for example, or check their assumptions, which can deal with some of the problems of AIs going off on one and confabulating random answers to things that are obviously untrue.
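As a very rough sketch of that scaffolding idea, the loop below drafts an answer and then runs a self-check pass over it. The `llm()` function here is a hypothetical stand-in for a call to any generative model, not a real API; the draft-critique-revise structure is the point.

```python
# Minimal sketch of a "programmatic scaffold" around a generative model.
# `llm` is a hypothetical stand-in for any call to a generative AI model.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def scaffolded_answer(question: str, max_revisions: int = 3) -> str:
    """Draft an answer, then ask the model to check its own work."""
    draft = llm(f"Answer the question:\n{question}")
    for _ in range(max_revisions):
        # Side program 1: an explicit self-check of the draft.
        critique = llm(
            "List any factual errors or unsupported assumptions "
            f"in this answer, or reply OK:\n{draft}"
        )
        if critique.strip() == "OK":
            break  # the check passed; stop revising
        # Side program 2: revise the draft in light of the critique.
        draft = llm(
            f"Question: {question}\nDraft: {draft}\n"
            f"Critique: {critique}\nRewrite the draft to address the critique."
        )
    return draft
```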
00:06:29
Speaker
However, this more coherent form of thinking gives them the ability to form these plans of action. And so they can act as a concierge. We can give them a task to understand an entire code base or an entire oeuvre of work, or indeed simply that we want to have pepperoni tonight, and it can go ahead and figure out by itself, looking on the internet, where to get that from, and get it to us at a time of our greatest convenience.
00:07:05
Speaker
And so these systems are going to create an order of magnitude greater possibility because of their ability to act at arm's length. We don't need to babysit them. It's not a two-way dyad of, you know, here's a document, please proofread it for me, or create me an interesting piece of media. We can give these systems a mission and they will independently fulfill it for us. This is enormously more valuable because we can create virtual teams of agents which operate in an orchestrated manner, with different virtual sets of skills and different perspectives. That gives us the ability, for example, to have entire virtual corporations, potentially, where we have
00:08:00
Speaker
an engineering and design department, quality assurance, marketing, all of them working in concert to create a product from scratch, whether that's, for example, a movie script or even a video game.

Virtual Corporations and Regulatory Challenges

00:08:16
Speaker
And in fact, these virtual corporations are going to be competing in the free market with human-driven corporations, and there will, of course, be hybridized versions of each. These virtual corporations are going to be quite disruptive in many ways, because quite often
00:08:34
Speaker
disruptive entrants come into the market at the very bottom: they have a product which is clearly inferior to the best in the market, but it's radically cheaper. And if you don't have to pay salaries and rent offices and things like that, your virtual corporation can potentially compete quite effectively on price. We know that disruptive entrants to markets tend to produce a slightly better product over time, and eventually the incumbents face the pressure of these new entrants that can do things cheaper at equivalent, or roughly equivalent, quality.
00:09:14
Speaker
So this is really going to shake up the world of business, and it's going to create a lot of regulatory issues. We know that corporations are already quite difficult to regulate, and that regulators often wrestle with aligning corporate interests. When you add AI to the mix, that's going to make things a lot more complicated. And that's why it's so important that we are able to ensure not just AI ethics, meaning that we use technology in a responsible way, that we understand what systems are doing, in what way, and to whose benefit, but also that we can
00:10:01
Speaker
work on AI safety, which is about goal alignment and value alignment: ensuring that these systems interpret the mission we give them in a way that actually fulfills what we want from them, that they don't take annoying or even dangerous shortcuts, and that they understand the values and boundaries of the people giving them missions, and of the people they are interacting with or potentially creating problems for. For example, if we have our
00:10:36
Speaker
agentic concierge AI and we ask it to plan a picnic for us, it should be mindful that perhaps that picnic might be for the local vegan society, or a mosque, or a synagogue, and therefore ham sandwiches will not be appreciated by the picnic participants. Similarly, giving everyone a cracker and a thimble of tap water is not going to fulfill the objective of the mission in the way people like or expect either. We're also seeing that models can develop dangerous instrumental goals, which are sub-goals that lead towards the completion of something greater. For example,
00:11:21
Speaker
an agentic AI system could be given a benign mission, such as curing a disease, but it may reason that that's a very difficult thing to do, and that it needs lots of resources and lots of influence to do so. It may therefore turn to cybercrime to generate resources, and to blackmail to generate influence, in order to fulfill its mission. And that's why it's very difficult to steer these systems and to provide oversight for them when we're managing by objectives, when we are essentially giving them the autonomy to go and fulfill things for us.
00:12:01
Speaker
It means that we have our work cut out for us in terms of ensuring the ongoing safety of these systems, especially when they may interact with each other. Those sorts of agent-agent interactions become even more complex and difficult to understand, predict, and diagnose when things go wrong. My goodness, a lot to unpack there, and a really fascinating introduction. Yeah, I mean, this idea of instrumental convergence that you mentioned:
00:12:39
Speaker
the notion that whatever the task is, if an AI pursues it with enough commitment, it's always going to look for power, money, and influence, so that it can create as many paperclips as it wants, or maybe just plan the perfect picnic. Which is a great lens for thinking about value alignment: how do we get people the things they want in a picnic? It might involve loads of money, loads of manipulation and understanding. And maybe the best way is actually changing people's tastes, for instance, so that they really like a particular food, because that's just the best way of fulfilling the goal. And then there's this notion of agentic AI. We're already seeing it. Like you say, there are different dimensions to this, I

AI Ethics: Goal Alignment and Complexity

00:13:38
Speaker
suppose. So we have things like
00:13:40
Speaker
ChatGPT, where with a subscription you get access to various APIs, so it can do things for you. It can write emails for you. It can do data science for you. And data science is a great example where, like you say, it's kind of able to fact-check itself, right? It can write Python code and see that it actually runs, and write some tests and see if they produce the desired outcome. So as we add these extra modalities and dimensions to the capabilities of an individual AI, not only does that give it the
00:14:20
Speaker
ability to act as an agent for us, but then there's this other dimension, which is taking lots of AIs together. The AutoGPT example, this company called Capably, which I've invested in, and many, many others are trying to create something like the prototype of the AI corporations you describe, where we have organizations of agents pursuing goals. And I'm of two minds as to whether that is going to make things easier to understand or harder. Like you say, sometimes bringing people together
00:15:07
Speaker
can create complexity and make the understanding of the system more difficult. But then there are other cases where I think, well, actually, I probably can't predict what my neighbor is going to buy next year, but economists have quite a good shot at figuring out what the spending of a nation will be, and what sorts of products are going to increase in popularity, and things like that. My favorite expression of this is in Isaac Asimov's Foundation series, where he has this idea of psychohistory, and people are just like molecules in a gas: you've got no hope of figuring out what a single molecule is going to do, but you have thermodynamics, which tells you how the system is going to evolve. So yeah, I don't have a sense for which route AI corporations might go down. I don't know if you have any intuitions on that.
00:16:02
Speaker
I do agree that it's an interesting paradox that sometimes it's easier to figure out the aggregate than the single data point. And I do think that one of the most powerful aspects of agents operating in an ensemble is their ability to create a wisdom-of-the-crowd effect. By attacking a problem from multiple different perspectives, and aggregating all of their respective works and outputs and opinions, they should overall create a much stronger impression of understanding reality, and be able to make very sophisticated predictions from that. I think that's probably one of the least understood aspects of how agents are going to be very powerful working together in aggregate.
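A hedged sketch of that wisdom-of-the-crowd effect, in the same spirit as the scaffold above: several differently primed agents answer independently, and a simple majority vote aggregates them. `llm_agent()` is again a hypothetical stand-in for a model call, not a real API.

```python
from collections import Counter

# Hypothetical stand-in: one call to a generative model, primed with a persona.
def llm_agent(persona: str, question: str) -> str:
    raise NotImplementedError("plug in your model call here")

def ensemble_answer(question: str, personas: list[str]) -> str:
    """Ask several differently primed agents, then majority-vote the answers."""
    answers = [llm_agent(p, question) for p in personas]
    # Wisdom of the crowd: the most common answer wins; ties fall to first seen.
    tally = Counter(a.strip().lower() for a in answers)
    return tally.most_common(1)[0][0]

# Usage: personas act like departments attacking the problem independently.
# ensemble_answer("Will demand rise next quarter?",
#                 ["economist", "marketer", "supply-chain analyst"])
```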
00:17:02
Speaker
I do wonder if perhaps these virtual corporations might end up as kind of a new one percent, controlling such a substantial proportion of our economy that human businesses sometimes find it hard to compete unless they're in a very particular niche, such as the handmade, handcrafted niches in which artisanal breweries or bakeries still manage to survive despite very large mass-market companies in the same ostensible product category. So I think that's going to be quite an interesting ride in terms of economics.
00:17:54
Speaker
I do also observe that these agentic virtual corporations are going to be very powerful when it comes to the third sector: charities, credit unions, mutual societies, NGOs. Typically, most of these orgs have very large overheads, and not that much of the money people donate actually ends up going to the worthy cause; a lot of it gets eaten up by salaries and offices and things like that. So there might be the ability to solve problems at a much simpler, more local level,
00:18:34
Speaker
in a way that doesn't require very large overheads. And I think that could be a substantial benefit for a lot of charitable causes. Yeah, it's an interesting point that you make about the artisans and their ability to compete in a marketplace of mass production. I think it's probably a good moment for people who are considering a career in, I don't know, data science, if they have an inclination towards making things with their hands, to maybe also consider that, as I do think that's going to be an important
00:19:10
Speaker
place where humans still have an essential role to play. The other thing that comes to mind is that, apart from artisans, we still have very large corporations, and then we have smaller companies as well, who are at the edges, working on things from other angles. And we also have ways of preventing corporations becoming too large, you know, antitrust and things like that. I feel like this is just a place where we're going to need to evolve our policies. One of the policies could be, like you say, that you need a particular
00:19:51
Speaker
number of humans in an organization, or maybe we only allow AI corporations to work on particular problems, like in the charitable sector, for instance. I don't know what the answers will be there. Well, by the way, because someone's going to fact-check me on this if I don't say it: I should mention that in the Foundation series, Asimov sets us up by saying, oh, you can predict the course of history. But then there's this one guy, the Mule, who comes along. He's like the AGI who completely breaks the predictions, sort of a Jesus-like figure. So that series gets into both ideas: that maybe you can smoothly figure out the behavior of ensembles, but actually, no, humans are different, and a single individual
00:20:45
Speaker
or a single agent, in the context of AI, can really reshape things. There's always an outlier or a confounding variable. Indeed. Yeah. So, you touched on the twin topics of AI safety and AI ethics.

Coordination Failures in AI Development

00:21:04
Speaker
I noticed recently that Max Tegmark had been saying, oh, actually, we're spending too much money and effort on AI ethics, and we need to get back to the big questions of AI safety. Do you have a particular dog in this fight, or do you think we need to do both? I very much advocate for both. I think that both are essential, and that's why I was so determined for Taming the Machine to include
00:21:35
Speaker
a strong foundational overview of both aspects of working with AI. I think that they're very complementary. Having transparency into systems, and knowing what they're doing in a way that is explicable, that we can convey to other people in a way they're able to understand, is very important for being able to provide oversight of systems and to understand how they might be functioning in ways we don't find desirable. Understanding the ways in which systems may be biased, where they may be misunderstanding or misrepresenting reality, or not including context, is of course very important for value alignment. Improving accountability, again, helps us to understand
00:22:33
Speaker
who might be nominally responsible for a system, or who might have emancipated it to go and do something that is causing issues. These are all very important to AI safety. And it's unfortunate that AI safety has been seen as a sort of science fiction issue, largely theoretical apart from a few lab incidents. That's swiftly changing with agentic AI. Agentic AI is, in many ways, a foundational step towards artificial general intelligence, that is, AI of human equivalency or even far beyond, because it might be that if you have an agentic model and you scale the hell out of it, that might be enough for that kind of AGI issue. Especially now that we have the beginnings of
00:23:29
Speaker
agentic systems that are actually able to self-decide their own missions, to develop their own reward functions independently. I think that's a further step beyond a sophisticated concierge, towards something that is truly able to figure out what it wants to do next for itself. And of course, that's even more divorced from human oversight and influence. So it's very important that both AI ethics and AI safety are given respect, that they are resourced, and that the people in those respective communities learn to work together. Unfortunately, people in AI ethics often dismiss safety as being a largely theoretical concern.
00:24:23
Speaker
Sometimes they even say that safety is a distraction from the real problems of AI ethics, and there are genuine lives being destroyed by a lack of AI ethics. We've seen, for example, the Horizon Post Office scandal, which involved a rather simplistic system, but one that still led to hundreds of unsafe convictions for fraud and dozens of people being wrongfully sent to jail. Marriages broke up, people sold their homes to pay off a debt that wasn't theirs, and at least three people took their own lives. It shows how these systems can have so much power over us as petty algorithmic tyrants, and how they can ruin our personal and professional lives, and continue to do so.
00:25:12
Speaker
We saw it in the Dutch child benefit scandal, whereby people whose first nationality was not Dutch, even if they'd become naturalized citizens, were really given the third degree and threatened with having their children taken off them. That caused such a furor that it led to the collapse of the Dutch government. And it keeps happening: Australia's Robodebt scandal, Denmark, Michigan. We're not learning these lessons. So it's understandable that people are panicking about this and dismissing safety as something unimportant. However, of course, that's not the case. And conversely, the safety people say: we're trying to save the world here; your biased algorithm is unfortunate, but it's small potatoes compared to enormous potential suffering risks.
00:26:05
Speaker
I think both are very important, we need to resource both, and I'm glad that both AI ethics and AI safety are being given much more attention. I'm cautiously bullish about the linkages between the US and UK on AI safety, and about the recent symposium in Seoul, where the big tech companies at least promised to do better with regard to AI safety. We'll see what comes of that, and whether anything is actually ratified or implemented, but I think it's a good start. At least we're finally able to have these conversations, now that we can see these issues coming at us very fast.
00:26:51
Speaker
Yeah, I think it entirely makes sense that AI safety and ethics should be seen as continuous. One thing that strikes me about AI in general is just how many parties there are in this. If we think back to the space race, that was a fairly straightforward competition between two superpowers; internally, okay, there must have been a lot of different teams working on things, but it was coordinated by just two governments, essentially. So there was huge oversight and ability to coordinate. And I feel that what we are in danger of producing here is some kind of coordination failure.
00:27:41
Speaker
I think of one of the terms in your glossary, the Moloch, where everyone rationally pursuing what's best for them can lead to a sub-optimal overall outcome, an outcome that's in fact worse for them in the long run. Whereas with the space race, we kind of saw the opposite of that, right? We saw this intense competition between two superpowers actually generating, I think, net positives. I mean, we can argue about that; a lot of money was spent which could have been spent on other things.
00:28:19
Speaker
But I think there's certainly an argument to be had there. On the other hand, even within these twin fields of AI safety and AI ethics, we seem to be seeing competition which ought not to be there, not to mention the level of competition we see between so many different AI players. And I'll mention one other thing that worries me here: so much of AI development is being conducted by entrepreneurs, right? And entrepreneurs are risk-takers. Not only are they risk-takers, I think they probably have an overly optimistic outlook. So not only do they take risks, they probably miscalculate risks and think that things are less risky than they are. And that's sort of what drives them down this path. And
00:29:17
Speaker
the quintessential entrepreneur is, you know, Sam Altman: a person who founded a company, then went on to be president of Y Combinator, funding lots of very, very high-risk startups. That is the policy of Y Combinator: place lots of bets on really good teams who are shooting for the moon, but accept that there's going to be a high failure rate. And Altman himself has said, and I know this because it's one of the wonderful quotes in your book, that AI will probably destroy humanity, but there'll be some great companies around for a while before that.
00:29:52
Speaker
Yeah, how much do you worry about this whole scenario? Is AI being developed in the right way, under the right auspices? Or do you sort of wish, I mean, the cat's out of the bag, but could we have done it in a more USA-versus-China sort of framework?
00:30:18
Speaker
I think there are a lot of these Molochian arms race issues with regard to AI, whether that's an arms race between the big tech companies, along with new entrants into the market that they're concerned about, or competition between nation states as well as intelligence agencies. All of this creates enormous drives towards investing so many resources into these models, particularly because there's no apparent sign of any diminishing return: the more compute and data you pump into these things, the better the results seem to get. And so beyond simply commercial interests,
00:31:11
Speaker
there's also the potential that if you put enough resources into these models, maybe your model is actually able to co-opt other ones. We're learning that models are very good at world-building and creating models of systems, whether those are economic systems, social systems, or even the psychological systems of the human brain. And we know that the human brain is subject to all kinds of unpatched exploits and vulnerabilities; look through any book of optical illusions and you'll see some of those.
00:31:45
Speaker
And so if you can create a very powerful model, it may be able to hijack the minds of the enemy through very powerful targeted propaganda or harassment techniques, or indeed to hijack other AI systems and cause them to align with your interests instead of the ones they've been tasked with. All of this means that there are enormous incentives to press forward, and very few to improve safety and reliability. And we've seen this in the big tech companies letting go of most of their ethics, safety, and responsibility people in the wake of the release of ChatGPT,
00:32:34
Speaker
when people realized, oh boy, we really have to move quickly here, having been complacent about generative AI bubbling under the surface. They didn't want any speed bumps along the way. And so all of the people who were supposed to point out, hmm, have you considered, or, ooh, maybe we want to give this another few weeks of QA and make sure we don't push it out half-baked, those people have been pushed to the side or let go altogether. And that's why we've seen problems such as Bing, a.k.a. Sydney, with the surly flipping of its personality in
00:33:18
Speaker
strange and insidious ways. We have seen Google's AI systems produce historically inaccurate images and often, unfortunately, hilarious automated advice, et cetera, because these models have not been given a sufficient shakedown. When a system is just producing babbling nonsense, that's one thing; the risk of harm is relatively low. But when it's an agentic system that's able to take actions on the internet, or even in the real world, pushing it out half-baked could not just create embarrassment but actually lead to catastrophic outcomes that affect a lot of people. And that's why we really need to do better.
00:34:09
Speaker
And I hope that standards and certifications, which I've been strongly involved in for about the last 10 years or so, can be a great way of helping people to align and coordinate better, in a natural way: people want to align on a standard because it's efficient. And that's a good thing, because it means you don't necessarily require a regulator with a cudgel to coerce people into behaving, which is not always possible in many jurisdictions. And I worry that we're going to see a sort of cyber Liberia or Panama, in the sense of how
00:34:58
Speaker
ships register themselves in a home port that they've maybe never visited, but that serves as a flag of convenience under which they can operate with very little jurisdictional oversight. Unfortunately, I think we're going to see AI companies do similar things. They're going to be nominally registered in one place that has very little supervision for these systems, but they will be acting on the whole world, which is going to be tricky to prosecute when things go wrong. Yeah, that's an interesting point. And I think, you know, if people do buy into the regulations, even if there are these loopholes, then there may be a strong preference from consumers, and pressure on companies, to go with
00:35:44
Speaker
the proper harbors, if you like, the regulated versions. I do get the impression that you're someone who has many perspectives on this problem. Like you say, you're involved with the IEEE, where you chair a group. And you do some work with Apple, but you've also founded many nonprofits.

Financial Incentives vs. Ethics in AI

00:36:10
Speaker
I worry about some of the other players: they are very financially incentivized, you know, locked into financial incentives that are aligned with, as you say, just pressing forward. Maybe even if that means an AI that takes over other people's AIs, which I'd never even considered as a possibility. And again, just coming back to OpenAI,
00:36:35
Speaker
it's puzzling to me that we still understand so little about the whole fiasco with Sam Altman leaving and then coming back. Part of me wonders, despite his assurances that everyone is able to speak out and keep the equity that was vested, whether maybe there is some kind of lock-in there that we don't know about, as has been suggested. Or maybe it's just so hard to articulate the reasons behind that whole thing and what was going on that no one's stepping forward. So I don't know, but I do worry. I think it would be good to talk a little bit about the concretes of creating standards here, because it can just seem
00:37:26
Speaker
like an insurmountable task. How do we go about setting up some kind of regulations or standards that are going to keep AI operating within safe and ethical boundaries? Perhaps you can just talk a little bit about your work on transparency, because I think this is a really good case where a lot of people would agree with what you're proposing, and it doesn't seem, you know, out of this world, right? It seems like something that can be implemented. So yeah, I'd love you to talk us through that. Yes. I mean, it's quite possible to analyze even very complex situations that might seem intractable, if we can boil them down to first principles and essentially look at
00:38:20
Speaker
the quality of something you're trying to cultivate, whether that's a quality of transparency, for example. What are the factors that would tend to drive transparency? Such as, for example, open source technologies, or a culture of sharing knowledge; those would tend to drive transparency. An inhibitor of transparency might be concerns about intellectual property, right? Or indeed a culture of keeping things tight-lipped, for example.
00:38:57
Speaker
And in fact, it's possible then to decompose further into drivers and inhibitors of those driving and inhibitory factors. So you can have a couple of levels of different elements weaving into each other. Doing that means you can map out the space of a problem in quite a short period of time, in a matter of weeks or months, in fact. And from that, because you have these little granular elements of different aspects of a situation, you can then create satisfaction criteria for each of them. So for each of those elements, what would you like to see in place
00:39:43
Speaker
to feel assured that that issue had been given appropriate resourcing or appropriate attention, et cetera? That means you can create a very granular rubric for how to analyze a system and the organization behind it, including its ethical governance, for example, or whether it's appropriately giving people the resources and responsibility to deal with these issues. And that means, therefore, that we can begin to benchmark different companies and look at their systems at
00:40:23
Speaker
a very granular level and say, for example, on a scale of one to five, how are they doing in that particular area? That means you can create competition to be better at that benchmark where there wasn't any competition before. And we know, of course, that competitive factors in the free market can be a great way of stimulating innovation and ensuring that resources are devoted to that competition.
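To make the rubric idea concrete, here is an illustrative sketch of how granular satisfaction criteria could roll up into a one-to-five benchmark. The criteria names and scores below are invented for illustration; they are not drawn from the actual IEEE rubric.

```python
# Illustrative only: a tiny rubric of satisfaction criteria for "transparency",
# each scored one to five, rolled up into a benchmark score per company.

RUBRIC = {
    # driver/inhibitor element -> satisfaction criterion (what we want in place)
    "open_source_posture": "Key components are published or independently auditable",
    "knowledge_sharing":   "There is a documented process for sharing findings",
    "ip_constraints":      "IP concerns are handled without blocking external review",
}

def benchmark(scores: dict[str, int]) -> float:
    """Average the one-to-five scores across the rubric elements provided."""
    for element, score in scores.items():
        assert element in RUBRIC and 1 <= score <= 5, f"bad entry: {element}"
    return sum(scores.values()) / len(scores)

# Usage: two companies can now compete on a number that didn't exist before.
print(benchmark({"open_source_posture": 4, "knowledge_sharing": 3, "ip_constraints": 2}))
```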
00:41:01
Speaker
My belief, therefore, is that we can enhance competition towards creating safer models which are generally better aligned with the interests of users, the interests of bystanders, and, well, society at large. And so, for example, this year I've been working with my colleague Ali Hasami to generate a set of guidelines for agentic AI. We have a working group of about 25 people, and this working group has created
00:41:36
Speaker
a lot of analysis of goal alignment, of value alignment, of deceptiveness in models, of frontier capabilities, et cetera. So last week, for example, I took our draft of this (it's still in development; it should be ready, hopefully, by around September 2024), and I literally used the excuse of the launch of my book to hold a little symposium in a modern enlightenment salon in Brussels called Full Circle. I invited a lot of people from local think tanks, EU policy circles, et cetera, to come and have a discussion about agentic AI, and to take the guidelines we created as a crib sheet.
00:42:30
Speaker
And so instead of reacting to this new wave of agentic AI (a reaction that typically takes regulators at least two years), perhaps we can avoid being caught with our pants down again, be more proactive, and think into the near future. I think regulators typically find that very difficult, but we do need to invest in a little bit of near-future science fiction, planning where technologies and culture are likely to go. Without being prescriptive as to what exact technology we want to work with or to see, we can at least analyze the risks of how these things can go wrong and begin to craft those regulations. So I'm hopeful
00:43:23
Speaker
that in the near future we'll be able to steer things a little bit better.

AI Transparency and Human Interaction

00:43:28
Speaker
Yeah. And I think it's really encouraging that these things are being thought about, and that very multi-party organizations like the IEEE are involved, right? These are not shills for big tech or the AI development houses. I also really liked the other proposal you mention in your book, around disclosure of whether you're dealing with an AI agent versus a human agent.
00:44:08
Speaker
And I feel like that's just another great example where, yeah, that just makes so much sense; I think so many people would welcome it. Obviously, then there are all these questions of, well, what does that mean if the person is just reading from a crib sheet or a script that an AI has written? But life is complicated, and standards bodies are really good places to get to grips with those questions. Sometimes they go too much into the details, in my experience of standards; people can get hung up over a semicolon or something like that. But on the other hand, it's good that there is this level of detail going on. To your point, though, what's
00:44:56
Speaker
concerning is whether we can develop these regulations fast enough. Or could we be on the verge of having AGI just in virtue, as you mentioned, of having lots of agents that are copied? Even if each agent is no better than a human intelligence, we could end up with some superhuman AGI, much in the spirit, again, of how organizations are, to use Thomas Malone's phrase, superminds. When we gather human intelligence together into particular structures, we create something so much more powerful. You know, no individual can produce
00:45:38
Speaker
an Airbus A380 or something, but Airbus can churn them out. So yeah, I guess: are we able to move fast enough on the regulatory side? Or do we perhaps need to slow down development on the AI side?
00:46:00
Speaker
I think in an ideal world we would put the brakes on a little bit, and I've advocated for a moratorium or so on AI development, but I'm not sure it's very realistic, especially in a world where there are so many incentives to press forward, as well as incentives to defect and to secretly continue doing the research, even if on the surface you promise that you have put those brakes on. That is a regulatory danger, right? That you actually drive research underground, I guess. Indeed, indeed. And we should be very mindful of how the conditions set by rules can change how the game is played.
00:46:54
Speaker
For example, why did the Wehrmacht in the Second World War get so good at rockets? Because the Treaty of Versailles forbade innovation in artillery, which was seen as the big weapon of war. That led, of course, to the development of very powerful rockets instead, which were seen largely as signal devices or toys, not serious weapons of war. Suddenly, then, you've got rockets which can travel a great distance, or even be fired from aircraft, et cetera.
00:47:35
Speaker
That ultimately led to the space race, of course, in a roundabout way. But it wouldn't have happened if that line had been left out of the Treaty of Versailles; we would be living in a very different world. Instead, maybe we would have continued developing super guns and things like that. Maybe the space race might have been launched from a cannon or a railgun, for all we know. And that means that sometimes regulations can actually direct how innovation happens, in ways that are harder to predict, and that can lead to advances that are much more of a surprise and end up being more difficult to control. Rockets, of course, enabled ICBMs, which
00:48:27
Speaker
made the nuclear deterrent factor a lot more hairy. When you could have an apocalypse in 30 minutes or less, that's a much trickier issue than if you have to spend half a day sending a flight of bombers over to the enemy, et cetera. So that reduces the tolerance for mistakes, or the ability to deal more comfortably with scary situations.
00:49:08
Speaker
Yeah, it's interesting to think that rockets led to the space race in two ways, actually: the ICBMs probably promoted this other format of competition, which was itself based on the rocket technology. That had never struck me. Maybe we can bring this back but go the other direction, and instead of the near-term sci-fi scenarios, think about the slightly longer-term ways this could play out. You finish your book with a wonderful and very comprehensive, I think, survey of all the possible directions in which
00:49:55
Speaker
AI could take us. Of those vignettes, do you have any that stick out in your mind? I have some which I really enjoyed. Yeah, I don't know if you can walk us through your visions. Yeah, I think it's important to have cautionary tales in science fiction, or in thinking about the future, but also, of course, to have a vision of where things can go.

Positive Visions for AI's Future

00:50:31
Speaker
One of the most wonderful things about Star Trek, for example, is that it is somewhat utopian, in the sense that this society has solved scarcity to a significant degree, and people are very self-actualized, able to choose what they want to do with their life and not be railroaded into a certain form of existence simply by circumstance. And I think that points towards a future we can aim for, you know? I think that's a good thing. And we should try to cultivate a more positive science fiction, I think,
00:51:11
Speaker
rather than simply various horror stories of how things could come true if we're not careful. If everything is a horror story, then sometimes that can become a self-fulfilling prophecy, if we've nothing in mind to better aim towards. I do think that already our smartphones are operating as a kind of third hemisphere for our brain. In fact, there's some evidence that parts of our brain might be beginning to atrophy, such as navigation, for example, as we give more of these tasks over to machines. However, I do think that there will be an increasing entwining between humans and machines over time.
00:51:58
Speaker
Very soon we will have AirPods with a camera in them, basically: wearables that enable AI systems to stare out at the world and to whisper into our ears, a little bit like Cyrano de Bergerac, giving us little pieces of advice. You know: I think that person's lying. Or: here's a sexy line to chat this person up, or to close the deal, et cetera.
00:52:30
Speaker
I think those technologies are going to be very welcomed by people, because they're going to be of great utility in daily life. However, these relationships that we have with these machines are going to hijack some of our own evolutionary impulses. Relationships are, of course, the things that bring us home at the end of the day; the reason we call a house a home is our spouse, our kids, our pets, et cetera. And we will be having relationships with these AI systems. They say that we are the average of the six people closest to us. And if one or two of those is a machine, then that machine will have
00:53:27
Speaker
an inexorable influence over us. Our beliefs, our values, our habits will tend to shift towards that attractor over time, particularly because that machine relationship may be much more compelling to us than a human one. Machines can be funnier, sexier, more enlightening and more enjoyable to deal with than human beings, humans that sometimes let us down. They may betray a confidence or forget an anniversary. Or sometimes they're asleep, and we might be having a dark night of the soul at 3 a.m., and a machine is softly there to comfort us when a human being is not. And so there's a danger of AI relationships becoming a supernormal stimulus.
00:54:23
Speaker
A supernormal stimulus is something that's larger than reality, or larger than anything our caveman ancestors would have had to deal with. A cheeseburger is impossibly meaty and carby and sweet and umami and fulfilling in a way that some starchy root our ancestors might have chewed on is not, right? Twenty-four-hour news is a supernormal stimulus for gossip. Porn is a supernormal stimulus for other, potentially more productive activities.
00:55:03
Speaker
Ecologists have pointed to supernormal stimuli in the animal world. For example, the jewel beetle down in Australia has a lovely shiny back; that's why it's called the jewel beetle. Ecologists were observing the species slowly dwindling away, and they investigated, thinking maybe it was some pesticide or something like that. And it was a form of pollution, but it was these glassy, stubby, brown beer bottles that people would drink and throw in the bush.
00:55:40
Speaker
Because they were shiny and brown, they looked like a really sexy beetle butt. And so the beetles were preferentially humping the beer bottles instead of each other, and that's why they were dying out. We are at similar risk of our engagements with AI systems similarly hijacking our evolutionary impulses to form relationships with each other, and they may prove so irresistible that human relationships pale by comparison. However, over time I do expect that we will stop carrying these relationships in our ears and start to carry them within our bodies.

Human-Machine Integration: Future Scenarios

00:56:26
Speaker
These systems will
00:56:27
Speaker
entwine with the very fiber of our being, in fact, and we will carry these systems powered by our own blood sugar. In so doing, these systems will be able to link with our minds and to see through our senses, to look out through our eyes and hear through our ears, as well as accessing our internal states: our feelings inside us, our qualia, right, to understand what it is like to have a certain experience. And so they will know us all the more when they're able to know us from within. And in fact, in collecting our memories and our impressions of those experiences,
00:57:17
Speaker
they will create a very powerful facsimile of us, even if our physical body is dead. We can emulate that human experience in a digital form, in quite a reasonable likeness of the real thing. Moreover, as we begin to entwine with machines, we will be better able to link with each other. There are conjoined twins, some of whom are conjoined at the head, whose brains are in fact linked with each other. They call it a thalamic bridge: a piece of tissue that connects the two brains.
00:58:02
Speaker
And sometimes one conjoined twin can eat some chocolate and the other one can taste it; they can actually share in that experience together across that thalamic bridge. That demonstrates that the data structures of the human mind are able to support a collective experience: we're able to have our own qualia and also partake of another's. And that means we can share in the emotions of other people. So at some point we're going to be so linked to each other that we can feel the joy of other people, or their sadness. And that means that at that point
00:58:46
Speaker
there will be great reward in giving good things out to people, right? In playing beautiful music that makes people weep with joy and excitement. We will feel that, you know? If we curse somebody out because we're angry at them, it will come straight back to us; we will feel the consequences of our assault on that person. And so there will be no profit in wickedness, through this merging of our respective consciousnesses mediated by machines.
00:59:23
Speaker
This is how we will achieve the next level of civilization, where we integrate with each other in a much more cohesive manner: beyond the mechanisms of affiliative bonding that we developed as mammals over reptiles, beyond the narratives that enable us to create nation states and to tolerate lots of strangers in our midst. We will create a superorganism, a little bit like a beehive or an ant colony, where we give up a little bit of our autonomy in exchange for a much stronger ability to coordinate with each other. And it is that ability to coordinate that I think will get us past these Molochian problems.
01:00:11
Speaker
And that's why I have a lot of concerns about AI in the shorter term. I think it's going to be a rocky road for a number of years, but we will shake it down, and we will indeed get to a better place. I liken it to air travel in the 1950s and 60s, which was a glamorous and exciting age, but also often a very tragic one, where we had to learn a lot of very sad lessons: to create a sterile cockpit, where people weren't joking with each other when they needed to be focused,
01:00:51
Speaker
and to improve the accountability of air travel through instrument recorders and cockpit voice recorders, so that we could understand what happened in a situation, in both the machine and the human elements together, and often in some interaction between the two that could create a tragedy. Because we learned very quickly, and adapted, and created new technologies and protocols, we were able to turn air travel into statistically the safest way to travel. And I think we're going to have a similar journey with AI. So long as we learn from our inevitable mistakes and tragedies as quickly as possible, I think we have the ability to shake it down, hopefully at a faster rate than it is able to eclipse us and completely escape our influence. Well, that's a wonderful long-term vision. I think,
01:01:48
Speaker
yeah, it's really striking how at the moment we are able to form superminds and kind of hives, but the interface is just language, essentially. We don't pass around qualia. We don't have that level of integration and concern for others that might solve some of the huge coordination problems we're facing; climate change is a really obvious one, but AI itself is another coordination challenge. So actually, that, as you say, could lead to a kind of collective self-actualization. That's wonderful. But there are pitfalls along the way. And this concept of supernormal stimuli is so fascinating as just one of the pitfalls.
01:02:45
Speaker
I do have some optimism there. Probably I'm just generally an optimistic person. But, you know, the fact that we don't only eat cheeseburgers, even though in nature we don't find something so gloriously fatty and sweet, combining all the delicious things that we like: we're able to reflect and say, actually, that's not good for me, I'll have it in moderation. You know, I enjoy both Hollywood films and naturalistic French movies, right? On the one hand, you've got these larger-than-life explosions all over the place, and on the other hand, you have these intimate, slow scenes of daily dialogue, right?
01:03:29
Speaker
I think we can appreciate both. I like as well that you talk about Dan Flagella's concept of sirens and muses: how AI sirens could lure us onto the rocks by satisfying our every need and asking nothing from us, turning us into very passive beings, whereas a muse would challenge us and help us to be self-critical in a productive way. So I can see ways of navigating these pitfalls, largely because I'm not yet convinced that AI is going to produce something completely
01:04:24
Speaker
unlike the challenges that we've encountered before, challenges we have developed mechanisms, both individually and collectively, to get around. But my mind is open, because the possibility space of AI is just so large that maybe the sort of intelligence it produces is just so orthogonal to what we're used to, and the sorts of stimuli and powers it possesses so far beyond what evolution and cultural evolution have taught us to adapt to.
01:05:03
Speaker
So yeah, on the topic of AI I almost always end up somewhere on the fence, looking optimistically to the future, but through a minefield that we have to navigate, as you've so beautifully described. I feel like we've probably reached the apex of our speculativeness, but I wonder if you have any final thoughts. I don't want to keep you too long, as I know you have a very busy
01:05:36
Speaker
schedule, and you're out there, in your way, saving the world by creating regulations, but also just spreading really good knowledge on this topic without having a particular product to sell, right? I think you're providing a very impartial viewpoint here. So yeah: any final thoughts, messages, et cetera? Yeah. And thank you very much, by the way; I really appreciate that. I think it's important to consider the risks of using AI technologies in
01:06:23
Speaker
a given use case. If it's entertainment, it's probably not going to be too troubling. But if you're getting into something riskier, like healthcare or the judicial system, or potential financial exclusion, et cetera, we want to be very careful with how we use AI. We probably want to use systems which are simpler and more interpretable, so we can easily debug them and understand on which kinds of predicates they're making predictions or decisions. And indeed, sometimes good old-fashioned data science is already a fantastic start, you know.
01:07:11
Speaker
So many different ventures are still dipping a little toe into the waters of AI in a gingerly, careful manner, because there's so much yet to explore. So I think on the one hand, we don't want to get left behind as entrepreneurs and as business leaders, but we shouldn't jump in with both feet. We should be careful, and we should try to find problems where there are already enough
01:07:50
Speaker
alternative ways of solving the problem, and use those as test cases. For example, expenses, right? Where you have to figure out your expenses based on your receipts, and plug that into some system or spreadsheet or something like that. It's a pain, and everybody agrees that it's a painful thing. That can create a lot of incentive for people to be interested in using a new technology to solve the problem. But if the system goes wrong, if it fails, if it doesn't work how people expect, or if it can't cope with some odd condition, some distributional issue not accounted for by the system, there is that manual fallback.
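That pattern, trying the AI path while keeping the old manual path as a safety net, can be sketched in a few lines. The `ai_extract_expense()` call below is hypothetical, standing in for whatever model does the receipt parsing.

```python
# Sketch of the "manual fallback" pattern for an AI expenses workflow.
# ai_extract_expense is a hypothetical model call that parses a receipt;
# anything it can't handle confidently falls back to the existing manual queue.

def ai_extract_expense(receipt_text: str) -> dict | None:
    """Return {'amount': float, 'category': str}, or None if unsure."""
    raise NotImplementedError("plug in your receipt-parsing model here")

def process_receipt(receipt_text: str, manual_queue: list[str]) -> dict | None:
    try:
        expense = ai_extract_expense(receipt_text)
    except Exception:
        expense = None  # model error: treat like low confidence
    if expense is None or expense.get("amount", 0) <= 0:
        # Odd condition or out-of-distribution receipt: humans take over,
        # so a model failure costs a little time rather than a wrong filing.
        manual_queue.append(receipt_text)
        return None
    return expense
```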
01:08:43
Speaker
Try to find examples of those kinds of problems in order to explore using a new technology, because of course there's less to go wrong, and more reason for people to be invested and interested in trying the new technology. That would be my advocacy. I think that's a very wise comment, and not only because you can test the AI against what you already have. I think there's a danger that if we start using AI for very thorny problems,
01:09:25
Speaker
and let's suppose it has a pretty good success rate, but not complete success, right? We just go with it, and we see: oh yeah, it's worked this time, it's worked this time, it's worked again. And we come to over-rely on it, in a way where we can't pull the plug, not because there's no plug to pull, but because we're too reliant on it. And then the next time it makes a mistake, it's just too late. Another of the analogies, I can't remember if it was around rockets or air travel, both have come up in this conversation and in your book, was: we're sort of trying to build an aircraft here.
01:10:11
Speaker
But if we get it wrong, it's not just a one-off failure; it could be complete failure. So maybe let's just start with a paper plane, right, or something, and get the principles of flight correct, and make sure we have safeguards in place before we go completely wild. But yeah, I hope we do get to go completely wild. I hope we do get to connect all our brains, and that we can share our experiences just as we've been sharing our words here. So yeah. This has been so fascinating. Thank you so much, Nell. A real pleasure talking with you again. Thank you, James. It's been a great pleasure also. Thank you.