25| Peter Nixey — AI: Disruption Ahead

S1 E25 · MULTIVERSES
132 plays · 9 months ago

It's easy to recognize the potential of incremental advances — more efficient cars or faster computer chips for instance. But when a genuinely new technology emerges, often even its creators are unaware of how it will reshape our lives. So it is with AI, and this is where I start my discussion with Peter Nixey.

Peter is a serial entrepreneur, angel investor, developer, and startup advisor. He reasons that large language models are poised to bring enormous benefits, particularly in enabling far faster & cheaper development of software. But he also argues that their success in this field will undermine online communities of knowledge sharing — sites like StackOverflow — as users turn away from them and to LLMs. Effectively ChatGPT will kick away one of the ladders on which its power is built.

This migration away from common forums to sealed and proprietary AI models could mark a paradigm shift in the patterns of knowledge sharing that extends far beyond the domain of programming.

We also talk about the far future and whether conflict with AI can be avoided.

Chapters

(00:00) Introduction

(02:44) Start of Conversation

(03:20) The Lag Period in Technology Adoption

(06:48) The Impact of the Internet on Productivity

(11:30) The Curious UX of AI

(19:25) The Future of AI in Coding

(29:06) The Implications of AI on Information Sharing

(41:27) AI and Socratic learning

(46:57) The Evolution of Textbooks and Learning Materials

(49:01) The Future of AI in Software Development

(51:11) The Existential Questions Surrounding AI

(01:05:16) Evolutionary Success as a lens on AI

(01:13:29) The Potential Conflict Between Humans and AI

(01:14:24) An (almost) Optimistic Outlook on AI and Humanity

Transcript

Transformative Impact of Printing Press and AI

00:00:00
Speaker
The invention of the movable type printing press in the 15th century changed the world, but it did not do so overnight. And I don't think a visionary could have foretold all the consequences of it. They might have imagined that it would democratise access to information and that it could lead to new types of books.
00:00:17
Speaker
But would they have imagined the wars of religion that, arguably, resulted from this invention? And they certainly wouldn't have imagined the incredible proliferation of genres that we have today. Who could have foretold the Beano, Harry Potter, or Fifty Shades of Grey? I think something similar is going to happen with AI. AI is going to change how we access information, and not just information, but more particularly, reasoning.

Introduction to Peter Nixey and AI Paradigm Shift

00:00:43
Speaker
My guest this week is Peter Nixey, a veteran of the UK technology scene. He went through the prestigious accelerator Y Combinator back in its early days. Incidentally, this is where Sam Altman effectively started out on his career. I remember in the early years of OpenSignal, a company that I co-founded back in 2010, sitting across from Peter,
00:01:04
Speaker
sort of hot desking, both of us coding, working on different things. Peter has built many different software products. He's an investor, he's an advisor. He keeps his ear very close to the ground, looking out for technologies that can give him or people he works with an edge. And he and I are both convinced that AI is going to be a paradigm shift, already is a paradigm shift.

AI's Impact on Internet Knowledge Bases

00:01:27
Speaker
in how products are going to be built by software developers in particular. We discussed some of the reasons for this, the fact that there's such a great pool of knowledge on the internet about how coders do things. That's kind of unique. And this is not partisan activity. Republican and Democrat programmers probably get it on just fine in terms of agreeing about how to write code. So there's a very cohesive, large knowledge base out there, which AI has learned from.
00:01:59
Speaker
What's interesting, though, is Peter points out that knowledge base is going to be eroded by AI itself, as people stop going to the internet, posting questions and answers there about how to do things. And to use Peter's words, perhaps this is the canary in the coal mine. Perhaps this change will happen across all fields. And then we get super speculative towards the end and try to figure out where this might all go, I think, concluding that we just don't know.
00:02:26
Speaker
I'm James Robinson, you're listening to Multiverses. Peter Nixey, thank you for joining me. Pleasure. Thanks for having me, James.

Lag in AI Adoption Compared to Past Technologies

00:02:49
Speaker
So, a little over a year ago,
00:02:52
Speaker
something almost paradoxical happened. I think a world-changing technology was introduced. And yet, I don't see a lot of indications that the world has changed that much. What's your take on what's going on? So the technology I assume you're referring to is GPT-4, or, going a year back, GPT-3.5, a nudge further still.
00:03:17
Speaker
I think we're in a lag period. I've been trying to understand what's been going on through the lens of history as much as anything. And kind of thinking back, I mean, I've had now enough time working in tech to be able to recall some previous technology cycles. And I think I came into tech and started programming in like 2001.
00:03:42
Speaker
And that was a real lag period, post the dot-com bust. And during that time, when I first discovered JavaScript, I remember thinking like, this is pretty incredible. Like you really can do interesting stuff with this.
00:03:58
Speaker
But from that discovery through to people really building interesting applications in JavaScript, there was a curve. And in fact, even with the web itself, there was a lag period. So I think you get this explosion of potential. And that gets mixed up with hype and different things.

Generative AI: Hype vs. Real Potential

00:04:18
Speaker
And sometimes it's hard to separate the hype from the potential. But I think the potential is extremely real in generative AI.
00:04:29
Speaker
And then there's a period in which developers start exploring what's possible in the technology. And that kind of goes through to product, but probably as a first derivative on that, not directly, because engineers aren't necessarily product people directly. And then as more people who are product people and entrepreneurs start seeing what's possible, then they start applying it across to different things. But I think we're in a lag period where most people don't really realize
00:04:57
Speaker
the new paradigm for how stuff can be built. Yeah, that aligns very closely with what I think, which is that there's just some inertia here and we can't expect the world to change overnight, or at least the facts won't be felt that quickly. And perhaps there's another thing as well that's unique to this kind of technology, which is that it wasn't built with a particular set of capabilities encoded into

Economic Impact of Internet vs. AI

00:05:22
Speaker
it. It was just sort of trained up and
00:05:25
Speaker
even the people who built it were discovering what it could do, that it could translate Catalan into Mandarin or something like that. And we were being surprised at the skills that were coming out of it. And so it's not been marketed as use this to do X, use this to do Y. Those things have started to come about. And it was interesting, I listened to Sam Altman recently saying that
00:05:53
Speaker
Developers are the ones whose work is going to be most affected by this early on. But I don't know that they realized that from the very beginning. I think that's something that's become apparent over the last year. So maybe there's just this question of, yeah, it's an experimental phase. And more than that, it's not clear who should be experimenting with it, or rather, perhaps it should be everyone. Yeah.
00:06:18
Speaker
I definitely agree with that. It just reminds me so much of my first memories of the internet and our physics teacher taking us through to his internet connected computer. And I can't remember what he showed us, but it was something like illustrating that you could see the details of a school in Arkansas. And I was like, why do I care? What's the point of this? And I think lots of people feel in that paradigm just now.
00:06:48
Speaker
Yeah, I think I can just pick up on that point. I do want to say something from a devil's advocacy perspective, which is that the internet, while it has had tremendous influence over our lives, I feel economically that that influence has been overestimated. And I'm just parroting arguments from people like Ha-Joon Chang and economists who've tried to measure the impact of the internet and have found out, yeah, it has been
00:07:18
Speaker
a big influence, but when one compares it to things like running water or the introduction of white goods, which freed up huge amounts of time and effectively half the workforce, i.e. women who were previously spending their time doing household chores, the internet has not been as impactful as those, partly because it's introduced perhaps as many sources of unproductivity as it has sources of productivity.
00:07:48
Speaker
I kind of wonder if we'll see something similar with these technologies. So that question, I mean, you got straight to the point that I was going to get to, but although I would add a second point to that. So I think in addition to this kind of zero sum game of the internet where you do less work to accomplish the original things, but then you spend more time doing other stuff,
00:08:16
Speaker
I think the other component that's come with modern jobs is real opacity on what constitutes work.

Internet's Effect on Job Roles and Productivity

00:08:24
Speaker
It's really hard to know whether somebody's delivering what they're supposed to be delivering. And in many circumstances, I think it's hard to know even what you want people to do in the first place. And so I think that the internet has made, it's made that harder in many ways. It feels like it's made it harder. But I mean, let me bounce that back to you because you've worked through periods of the internet not being as big as it is now.
00:08:54
Speaker
Do you feel that in the teams you've built and the people you've hired, that their jobs, understanding even what they're supposed to be doing, never mind whether or not they're actually doing it, have become harder to figure out? I think that's true. Certainly the distance introduced by COVID, which was only made possible by all the kinds of technologies that we're using,
00:09:23
Speaker
and the technology we're using right now to have this conversation. That has made it harder to see what's going on throughout someone's time and have those kinds of conversations where you'll just look over someone's shoulder. That's a terrible thing to say; I don't mean that in the sense of policing someone, but just finding all the interesting stuff that people in other teams are working on.
00:09:47
Speaker
has become harder. So I think it's probably led to a bit of a siloing of knowledge for sure. I feel that within teams within companies, like sub-teams, it's not perhaps been such a problem. I should also say that at my company OpenSignal, we spent a lot of time
00:10:08
Speaker
hyping up the impact of the internet and saying, you know, every percent of connectivity improvement is some fraction of a percent of GDP improvement. And that may be the case, but I don't think it's been totally proven out. I think it's certainly the case that every percent of connectivity improvement leads to some percentage change in the way that you live your life, if one can measure that, and the amount of time that one spends doing things.
00:10:36
Speaker
The amount of time that one spends watching Netflix, for instance, certainly increases as you have better connectivity. But yeah, I've not thought about it in detail, but it does seem like maybe the internet has thrown a lot of smoke in our eyes. Well, I think maybe part of it is, I don't know, more stuff. More stuff is on the computer. And the computer is a very amorphous place to exist, as opposed to being
00:11:05
Speaker
physically in a different room. Like it's more obvious what jobs are there and what jobs have been done, and whether or not you're spending excessive amounts of time in the kitchen or the garage when that's the location in which the job happens. And when everything happens on the computer, you can live in a space where you're not doing anything worthwhile, like for large chunks of time. You don't even realize it because everything happens here all at once.

AI's Language and Coding Capabilities

00:11:27
Speaker
Yeah, that is a really interesting point. And it does make me think of something that I've been wondering about the
00:11:35
Speaker
unique interface that OpenAI has chosen and everyone else seems to be using for accessing LLMs, which is just this singular point where you can ask anything. And at least on my computer at the moment, a lot of my time is spent in different apps. And I get a sense for when I'm being productive, if I'm coding something up or if I'm reading through some PDF reports or something.
00:12:06
Speaker
Whereas you even lose that level of distinction if you're putting all of your questions through ChatGPT. I guess you might have different windows. People are perhaps more disciplined than me. I'll confuse ChatGPT by asking for a social media image and then asking it about what Heidegger's thoughts on time are. It must be getting a lot of cognitive dissonance going on.
00:12:35
Speaker
Yeah, I don't know if you've thought about the choice in the UX there. I mean, I think the UX is an accident. I had one of the OpenAI team come and do a fireside with me last summer. And he talked through the stuff. This is what I would have anticipated. And I'm sure you would anticipate the same kind of being in the software industry. But they had no idea. I mean, how could you possibly have any idea that
00:13:03
Speaker
this particular app was going to go to, what was it? A hundred million users in three months? Yeah. Nobody can know that. Nobody can expect that. And I think actually from the impression I got really, there was a lot of kind of foot dragging because people were afraid that 3.5 wasn't going to be enough to really wow the crowds and maybe they should wait till four, but they just pushed it out and it changed everything. It feels though,
00:13:32
Speaker
It has this hint to me of maybe what the terminal was to applications or what the spreadsheet is to the world of SaaS. It's this place where enthusiasts are hacking solutions right now. But the solutions will be hived out and built either into products or features
00:13:53
Speaker
elsewhere over time. The stuff that really, really interests me is not chat as an interface. It's how you apply intelligence to problems in a way that it doesn't come in and out as chat. And I don't think we've seen enough examples of that, that people's imaginations have yet been lit up as to what's possible.
00:14:18
Speaker
I can't remember who said it, but the best sort of design is invisible. When you don't notice that you have the product, that's when it's really performing. And we're at that early stage where ChatGPT is very visible, but we'll know when it's truly matured: it won't surprise us at all. We won't even think of it as interacting with the product. No, exactly. And I'm just trying to think like I'm thinking of like,
00:14:46
Speaker
examples of that now. If you think about the way that, I mean, if you think about the way the implementation that Apple's had for Siri, basically combing your emails for contact information and then presenting them to you as Siri found this in your contacts and do you just want to add it in one go? That's the type of very smooth, don't even think about it.
00:15:16
Speaker
This business of: the thing has already done the hard work of figuring out what the job is, done the prep for the job, and presented the job to you, and your only question is, do you want to accept the job? And I have a feeling that we're going to see more of that start to weave its way in.
00:15:30
Speaker
Yeah. It's like a truly good butler. Not that I have one. It is literally that. Yeah. But you know, they stand in the shadows and they're not all in your face, but then, you know, Jeeves comes along and politely suggests something. Oh yes, I'll just have that for myself. Yeah. And I think part of what's holding that back is that I don't think people have fully realized that the thing can interface through to other bits of software and other bits of data. Like you can use.
00:16:01
Speaker
The people I speak to at the moment are generally, to be honest, a lot of company leaders. So I'm seeing people who are thinking about actually implementing stuff, but also a bunch of developers as well. And.

Challenges in Integrating AI for Developers

00:16:21
Speaker
Developers are interesting because I see a lot of pushback from developers. They're kind of bifurcating pretty hard. Simon Willison, who I have a tremendous amount of respect for, has gone all in and kind of completely internalized what's possible with this, and probably knows as much as anybody else out there from a kind of product developer mindset, which I think Simon's very good at being. He's good at being a sensible product person and also a dev's dev at the same time.
00:16:50
Speaker
But a lot of developers I find are struggling with it. They can't wrap their heads around it. It's not deterministic. It's not reliable. And they don't feel comfortable with it. So I am really surprised by the number of developers who I would nominally consider to be very sophisticated developers who've just kind of written it off. They're like, no, I haven't used it. I'm not planning on using it. I don't see how to use it in applications. I see it in the same way.
00:17:20
Speaker
It reminds me a little bit of, not that I was privy to a lot of this, and I only touched on a few of these conversations, since I didn't really have much contact with the generation that preceded the internet. But when you started doing stuff with AJAX and HTTP and so on, if you spoke to a desktop programmer, they'd be so aloof about it. And they'd be like, well, why would I do that? Like I can just call the database objects directly
00:17:46
Speaker
in my desktop app. And there would be this disdain for the fragility of HTTP and for the fragility of essentially not really having any decent persistence of state and having to sync state between the client and the server. And so they were just like, I'm not even going to think about that. I'm going to carry on living in my Microsoft world where I can build this stuff and it all connects and I don't have to think about it.
00:18:12
Speaker
And I feel that there's an element of that in how a lot of developers are thinking about AI and how they can build around AI.

AI's Best Use Case: Coding

00:18:19
Speaker
AI has a bunch of weaknesses, just like
00:18:24
Speaker
HTTP and the browser had a bunch of weaknesses. But once you wrap your head around those, you realize the wealth of different things that you can do with a browser application that you can't do with a desktop application. But you have to understand where those weaknesses are and where not to expect to be able to put pressure on the application. Yeah. Yeah, there's a couple of things there. I think one is
00:18:49
Speaker
How does one use AI to do better what we're doing currently? So it's sort of like a replacement way of current workflows, another language, if you like, for producing the same kind of applications that we have. And then there's also this question of, it's a fundamentally different thing.
00:19:11
Speaker
By the way, the other thing that came to my mind is, for many years, although I worked in mobile for a long time, I was like, why would I ever do things like bank on my mobile phone? It just seems too serious for this little screen. Now I only bank on my mobile phone because it's so much easier. Yeah, perhaps we should talk about why AI is so effective, in my opinion at least,
00:19:34
Speaker
at developing, at writing code, as I think maybe even a couple of years ago, people would say, okay, that's one of the safest jobs out there. And yet I do think it's probably the best use case for it right now. What's your take on why that is?
00:19:58
Speaker
Honestly, this would only be an amateur answer. I was discussing with my brother-in-law, who's the science editor at The Times, and he was asking me kind of the same question: why is it so good at code? And my only guess is this. My expertise in AI is that I did two and a half years of a PhD in computer vision in 2001, before my colleagues there ended up becoming kind of leading people in deep learning.
00:20:28
Speaker
I knew enough, but not really; I couldn't even tell you what deep learning is. I just know it's like one of the stages that happens. So I learned a bunch of stuff about high-dimensional space and clustering.
00:20:43
Speaker
And so I kind of had a grounding then, then left AI for like 20 years and just wrote software, and then came back to it. I'm only ever interested in software that I can release into production, and I'm not going to go and do a degree in something just so that I can use it. So I've just been waiting and waiting

AI's Model-Building in Coding Context

00:21:00
Speaker
for something where there was an API where I could start using intelligence usefully. And so that's what's flared up my interest in, and understanding of, what OpenAI have done. And then I've done a bunch of reading around, but I'm certainly not an expert. So to answer your question, I give all that as context for the degree to weight anything that I say.
00:21:23
Speaker
My guess is that this thing is capable of building internal models of how the world works. And if you look at how the world works in general, the language that you read to
00:21:45
Speaker
internalize that and figure that out is all going to differ in perspective and focus and bias and everything else. But code is not that biased. What it does might be biased. But the way that you write it is pretty determined. The code runs deterministically, broadly speaking. I mean, a million pages on Stack Overflow would beg to differ. But broadly speaking, it's deterministic.
00:22:12
Speaker
The patterns there reinforce themselves. And so my naive understanding of the way these models work is that if they're reading stuff that is very consistent, then they're going to build stronger, more effective models. And they seem to have done that. But it's also interesting seeing where they break off, and kind of
00:22:36
Speaker
and pushing the model until the point where it stops working usefully and understanding how that is and where that is. Yeah, I like that answer. I think I've had a similar thought that
00:22:49
Speaker
AI or LLMs, rather, are modeling language. And language is kind of modeling the world. Like, we have language as this model of something else. So they're not directly modeling the world. They're modeling a model of the world. Whereas... Ilya Sutskever had an amazing quote on this. I don't know whether you heard it. Oh, no. Okay.
00:23:08
Speaker
I mean the guy is so articulate, and I don't think it's any coincidence that all of the OpenAI team are incredibly articulate and able to explain things. I think that is causal in why the company is as successful as it is.
00:23:23
Speaker
But Ilya said, because at the start, everyone was saying it's just a next-word, like a probabilistic next-word predictor. I mean, I remember Jason Calacanis saying that phrase like a hundred times on the All-In pod that followed the release of it. And I'd been using it and I was like, look, this doesn't work.
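As an aside, the naive "next word predictor" framing being pushed back on here can be made concrete with a toy sketch. This is purely illustrative (a bigram frequency table, nothing like a real LLM): such a predictor can only ever emit words it has seen follow other words in training, which is exactly the limitation the conversation goes on to argue GPT-4 does not have.

```python
from collections import Counter, defaultdict

# Tiny training "corpus"; a real model would see trillions of tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Greedily pick the most frequent follower seen in training."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))    # "on": seen twice in the corpus
print(predict_next("piano"))  # None: never seen, so no prediction at all
```

A model like this has no world behind the words; the episode's argument is that a sufficiently accurate next-word predictor must, by contrast, internalize the world the words describe.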
00:23:44
Speaker
you can't have something that's just probabilistically producing the next word based on the distribution of words and be able to answer things. Not least because I can construct questions where I'm taking it into a completely new problem space and there are no words on the internet to answer that question. And I don't think you would really know the scope of the internet. For the average person, they probably wouldn't have a sense of what is and isn't on the internet, but the weird
00:24:12
Speaker
thing that you have as a developer is that you have scoured the internet.

Testing AI's World Understanding with Complex Questions

00:24:16
Speaker
You get a feel for when you're at the final capillaries of where information runs out because you've been there, you've posted stuff. I've got many questions on Stack Overflow that represent at that moment in time, the end of one of those capillaries where there wasn't any more information. So you do start to get a feel of the fact that the internet isn't an infinity of information, it's bounded.
00:24:40
Speaker
And I gave the AI this really perverse question, but it was one that I was like, there's no way that there is an answer to this on the internet. And I tested a bunch of things at the same time. So I said, I can't remember if it's the third verse of American Pie, but the verse where they kind of refer to
00:25:00
Speaker
being in the gym. So the verse goes, yeah, exactly, you're dancing in the gym, like you both kicked off your shoes and dig those rhythm and blues. So I said to the AI, what would have happened if that gym had large glass windows and, immediately preceding the events of that verse, there'd been a massive police shootout involving terrorists in the gym? And the AI, like, first of all, I was trying to get it to give a particular answer that kind of required an understanding of physics and causality and
00:25:28
Speaker
song. Basically what I wanted it to do was to tell me they weren't going to kick off their shoes, because there's no way that that exists as a question or answer anywhere on the internet. And I had to push it, like I had to guide it. So first of all, I was like, what would have been different? And it said, well, given the emergency situation, they probably wouldn't be dancing immediately afterwards; the police would have cordoned off the area. And I was like, yeah, OK, but say the police hadn't cordoned off the area, what might be different about what Don McLean wrote in that verse?
00:25:55
Speaker
It finally went: they probably wouldn't have kicked off their shoes. But to understand that, you've got to understand what the tenor of a police shootout is, like what the consequences for the glass are, how the glass relates
00:26:07
Speaker
to the floor, like there's so much understanding. If you think of programming something to have that level of world understanding... and I could have taken a completely different example, I could have asked it something about Little Bo Peep and what would happen if there'd been like a massive run on Sunday roasts the previous week. Like, you can ask it these things that require compound understanding of
00:26:33
Speaker
various different aspects that are essentially impossible to program in. But the thing has this worldview, which blew my mind. So anyway, Ilya said, I'm going to butcher the quote, but he said, if you are able to sufficiently accurately predict the next word in a sentence,
00:26:49
Speaker
AI is not a next word predictor in the sense of the way that you might nominally think of a next word predictor. If you have enough words, the words become a projection of the world that the words describe as projected down into word space. And if your next word predictor is sufficiently accurate
00:27:11
Speaker
in terms of its ability to essentially not just predict the next word, but to predict the next word as a mapping of the real world into word space, then it becomes a predictor of the ground truth. To give an example of that, he said, imagine the end of a murder mystery novel, like Miss Marple, where she gathers everybody into the sitting room at the end, and then goes through the story of where everybody was,
00:27:38
Speaker
and then says at the very end: therefore, that means the murderer is... If you can predict the next word accurately, you understand who the murderer is. It's more than just predicting the word; you understand what happened in the situation. Yeah, I think it's hard to argue with that: if your words model the world and your AI is getting the words correct, it is somehow modeling the world. I think what's
00:28:08
Speaker
particular about coding, though, is that there's nothing more to coding than the words. There's a little bit more, that's not fair. The code has to compile and run, but it doesn't touch the world in so many places. And AI can compile and run code itself. So it's just got all the tools it needs. And what's more, it's not like that really long
00:28:32
Speaker
novel. So the early versions of AI don't have very long context windows, but most coders try to write their code so it can fit within 100 lines or so. There's a kind of rule of thumb, and some really good ones will say it's got to be 10 lines or something. And that, you know, is just great for AI. It loves those short context windows. Plus, and this is something you've pointed out, there's loads of information on the internet about coders and developers
00:29:02
Speaker
trying to get stuff to work and sharing answers about what works and what doesn't. And you made this wonderful point that essentially we might see that ladder being kicked away. Yeah. Yeah, take us through your thinking on that. Well, I wrote this piece that absolutely

AI Reducing Need for Platforms like Stack Overflow?

00:29:23
Speaker
blew up. Across Twitter and LinkedIn, it ended up getting 4 million impressions.
00:29:31
Speaker
I'm a big user of Stack Overflow, and I'm in the top 2%. That might sound great to an employer, but I'm in the top 2% of Stack Overflow mostly by being representatively ignorant early in the curve of a popular technology. So you and I both know that the most popular question on Stack Overflow is almost certainly, how do I undo a commit in git?
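For reference, the standard answers to that famous undo-a-commit question boil down to a couple of one-liners; here's a minimal scratch-repo sketch, assuming git is installed (the repo, names, and messages are purely illustrative):

```shell
# Set up a throwaway repo with two commits to undo against.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "first"
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "oops"

# Option 1: drop the last commit but keep its changes staged.
git reset --soft HEAD~1

# Option 2 (commented out): keep history and add an inverse commit instead,
# which is the safer choice on shared branches.
# git revert HEAD
```

The `--soft` flag moves the branch pointer back one commit without touching the index or working tree; `--mixed` and `--hard` progressively discard more state.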
00:29:54
Speaker
And if you ask those questions, then you can accrue a lot of points on Stack Overflow. And I've managed to ask a bunch of those in Rails and Angular, essentially, over the years. But the numbers in any social network usually run 90, 10, 1. So out of 100 people in the social network, 1% of them produce content, 10% of them interact with that content, liking or commenting, and 90% of them just browse the content. Like, that's the rough ratio.
00:30:23
Speaker
So I'm the rarer sort: I'm a question writer, and the questions are actually the bit that you need the most, because once you have the questions, people's answers come more easily. The reason I say this with some authority is that I've actually written a social network that works in email, and the limiting factor is not responses, it is questions.
00:30:47
Speaker
With GPT-3.5, and then even more so with GPT-4, I realized that I wasn't going to Stack Overflow. And it wasn't just that I wasn't going to Stack Overflow to reference stuff. I wasn't going there to ask questions in the first place. So questions I might have needed to take to Stack Overflow, I didn't, because the AI (4 in particular; 3.5 was notably not as capable) was actually able to answer stuff for me.
00:31:15
Speaker
And so the reflection I had on that was, there were two points that I made. One was about the idea of the ownership of knowledge. So, this period of the internet from its inception until GPT-3.5 arrived in 2022:
00:31:34
Speaker
There was a lot of incentive for information to be created and curated communally. So you've got Stack Overflow, you've got all of the other Stack Exchange websites, you've got Quora, you've got people blogging. There is a marketplace for information, there's demand for information. Essentially, at some level, Google is feeding that.
00:31:58
Speaker
Well, Google is channeling it: people have a requirement to ask something, they go to Google, and Google has enough traffic that it feeds through, one means or another. That gives people enough incentive to produce the information at the other end. In its best case, pure first-order original information, a really good Stack Overflow answer, but also just rehashes of stuff, like most SEO content that companies are producing.
00:32:27
Speaker
So who owns the content? Yeah. So we've had this communal... There has been an incentive for people to produce information, to publish original information, to rehash information, hopefully in ways to enrich it. But one way or another, there's been a very, very large incentive to produce information.
00:32:48
Speaker
Because people are there to read it and at the most simple level, people's egos have been stroked, mine included, by producing stuff that other people read and a level above that, a lot of people have made a lot of money from producing content that other people want to come and consume.
00:33:05
Speaker
Now, what's happening, the effect of ChatGPT, there's a number of different consequences that I see from that.

Decrease in Communal Knowledge Sharing Due to AI

00:33:16
Speaker
So one is that it's massively cutting down on demand for information. And I think Stack Overflow is the canary in the coal mine. And we've seen
00:33:27
Speaker
I think there's some debate about what's the cause of it, but nominally Stack Overflow's traffic has been dropping at 5% a month since the release of ChatGPT, certainly since the release of GPT-4. So you're seeing less incentive. GPT-4 is the one that can run Python code, which I think is just a huge step up.
00:33:50
Speaker
Okay, I'll come back to that, because I'm interested in your experience. I'm a little, a little less enthusiastic. But I think it's just much, much more capable as a reasoning machine and has got better knowledge.
00:34:03
Speaker
So there's less demand for questions, but it also means that there are fewer people there. A lot of the questions get answered just by virtue of somebody being there and getting nerd-sniped, to use that wonderful XKCD term. You're just there and you're like, I could answer this. And so you answer it. And if fewer people are there, then fewer questions are getting answered.
00:34:26
Speaker
The power-law improvement that people see in the value of a social network as the scale grows works the other way around too: the value of the network drops off proportionally to the square of the size of the network as you go down as well. So you could see quite a collapse in that. So what I saw was a removal of the incentive for this information to be out there. But the other thing that I found much more
00:34:56
Speaker
unsettling was the idea that we move away from communal ownership. The thing that the internet has done so amazingly, and libraries and books did before that, was create communal ownership of information. Whoever owned the copyright for the books, the information was out there. And I think
00:35:20
Speaker
I certainly grew up writing software in an era where there was a huge incentive for developers to go out and share what they were doing. And I learned a tremendous amount from it, in huge contrast to what I saw in other professions that my friends went into. I mean, nobody was writing a blog about how to be a great M&A advisor. That just wasn't a thing.
00:35:41
Speaker
But for many areas, you have this huge communal understanding and then a huge acceleration in what we know and how we do things. And OpenAI has cut that off. Well, let me rephrase that, because it's far too charged a way of making that statement.
00:36:01
Speaker
If you can go to the AI and get the answers you want from the AI, it removes the incentive to go to the internet. And if people aren't going to the internet, then that removes the incentive for people to write on the internet. And if people don't write on the internet, that information stops being
00:36:16
Speaker
communal property. There's two questions here. One is it's removing an incentive for another human.

Loss of Niche Expertise with AI Dominance

00:36:23
Speaker
If I come with a problem and you can answer it, if I've got some nuanced problem about measuring phone signal inside buildings and GPT-4 can have a crack at that, then I'm never going to get ground source
00:36:38
Speaker
knowledge from you and all the expertise that you have from building OpenSignal. That knowledge is never going to get shared with anybody else, or, going back to our numbers, with the other 99 people who are not the creators of that content in the first place.
00:36:55
Speaker
I think the natural conclusion of us going to an AI and getting more information from an AI is that there will be less information in the public domain and over time, that feels very problematic to me. It feels problematic for us as a species and it feels problematic in terms of the power asymmetry that OpenAI has. Not only will we lose
00:37:22
Speaker
people asking the questions which are easy nerd-snipes, because they're the kind of questions which are easy to answer. But as those platforms are eroded, people won't even go to them with the really specialized things that
00:37:37
Speaker
an LLM may not have a good answer for, because, to use your earlier phrase, it's information that's beyond the capillaries of the internet. It's stuff that's just not in the training set, I suppose. But if you've eroded the platforms too much, people won't even go to them, or expect people to be using them to give you an answer.
00:37:58
Speaker
Well, to get that edge information, you're relying on a very, very large latent pool of people. I mean, you and I both know that there are questions you ask on Stack Overflow that only a handful of people can answer. And that's part of its extraordinary long-tail coverage of these extremely niche questions, and the confidence that you'll get an answer is why you go to it. But if people aren't going to Stack Overflow and you lose the confidence that you'll get an answer, then
00:38:29
Speaker
what gets answered? Yeah. And there is a case to be said that some of that stuff beyond the capillaries the AI will be able to figure out, because it just does such a good job of reading through documents and stuff. It makes me think of your Don McLean lyrics piece again. There's never been a question on this, but, you know, there's lots of stuff which it can use to figure out the answer.
00:38:54
Speaker
But there will be pieces where it's just not the case. An example of my own world is there are unpublished APIs from Google. Well, I say they're unpublished. They're not documented in the Google API docs, but they are actually in the Android source code. So I suppose if LLMs read the source code, they could probably figure out how to do some unusual things, which you wouldn't be able to get from reading the manuals.
00:39:25
Speaker
So maybe that's not a great example. There are things that are just based on experience that you just can't write down. You can't synthesize the information without having had the experience. The LLM won't have the experience, at least for the foreseeable future, where it's like, OK, I'm trying to access secure cookies on Chrome on this version. And it's just not documented anywhere. But somebody's had the same experience. And they're like, yeah, this just doesn't work. Bad luck.
00:39:55
Speaker
The other thing I wanted to pick up on here is I'm wondering whether textbooks will hang around. Stack Overflow and internet questions and forums seem to me a very different way of accessing knowledge, one which is more easily replaced by LLMs. Whereas if you want to get an overview of a subject,
00:40:21
Speaker
then you should be reading textbooks. And if I could visit my younger self, my number one piece of advice would have been: read some computing books, read these books on Java. Yeah. Because the way that I learned how to code was going on Stack Overflow
00:40:39
Speaker
and doing a lot of copy-and-paste jobs. And every programmer will tell you a lot of programming is just copy and paste. And maybe that's another way of thinking about why ChatGPT is so good at programming, because it's essentially mixing and matching lots of little bits of information together.
00:40:56
Speaker
But on the other hand, because I hadn't read a textbook, I didn't understand things like: what's the difference between a class variable, a member variable, and a static variable?

Traditional Textbooks vs. AI-Guided Learning

00:41:03
Speaker
And you don't really pick that up that easily from reading Stack Overflow. It's much easier to pick up that knowledge just by reading through a textbook. You'll get that in the first chapter of something on Java. And so I think there's probably even more call for people learning from textbooks.
00:41:25
Speaker
Or maybe not more, but I don't see that going away. I think you'll still want that. But I'm curious if you share that. I don't know, I feel like I go the opposite direction, in all honesty. I mean, I don't know what the truth is, but my degree was in physics. Did you do a physics degree as well? I did, with philosophy, yeah. So it was probably very similar to yours, except I didn't do any experiments, or very few, unlike you guys. Yeah, OK. I didn't do many experiments.
00:41:54
Speaker
But I found the degree extremely hard. And, I mean, physics is hard, there's no two ways about it. But one of the things I really struggled with was
00:42:08
Speaker
being able to get the information out of the textbooks. I needed to model the information in more ways than it was available in the textbook in order to build my understanding of it. And so I find now, and I still haven't totally figured it out, but I feel a bit closer. Take something I've never really understood: I've never understood why a heat pump, like a ground-source heat pump, doesn't break the laws of thermodynamics. And I could never really figure it out.
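For the record, the standard resolution of the heat-pump puzzle is that the pump spends work to move heat, and that work input is what keeps total entropy from decreasing. A quick sanity check, with illustrative figures of my own (not numbers from the conversation):

```python
def total_entropy_change(q_cold, work, t_cold_k, t_hot_k):
    # A heat pump absorbs q_cold from the cold reservoir (e.g. the ground),
    # consumes electrical work, and rejects q_cold + work into the hot side.
    q_hot = q_cold + work
    # Second law: the entropy gained by the hot reservoir must at least
    # offset the entropy lost by the cold one.
    return q_hot / t_hot_k - q_cold / t_cold_k

# Illustrative figures: ground at 275 K, house at 295 K, 1 kJ of work
# moving 2 kJ of heat out of the ground (a heating COP of 3).
delta_s = total_entropy_change(q_cold=2.0, work=1.0, t_cold_k=275.0, t_hot_k=295.0)
assert delta_s > 0  # entropy still increases overall: no violation

# The Carnot limit caps how good a heat pump can get:
# COP <= T_hot / (T_hot - T_cold), which is 14.75 for these temperatures.
assert 3.0 < 295.0 / (295.0 - 275.0)
```

The pump only looks like it moves heat "uphill" for free if you forget the work term; once it is included, the entropy books balance.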
00:42:36
Speaker
It's quite nice to be able to go to the AI and discuss it and take a position and say, well, okay, why isn't this? It seems to be drawing net energy from a colder source to a hotter source. Why does that work? Why doesn't that break the laws of thermodynamics? And you're able to have a discussion with it about the
00:42:57
Speaker
aerofoil, the conflict between the story you get told about why the aerofoil works and the fact that a plane flies upside down, and why these things are true. It's nice to be able to take alternate positions and present the thing you don't understand, like you have a teacher there the entire time. So I like the ability to be able to break stuff off with it and read it through. And if you take, for instance, the example of the different variables,
00:43:27
Speaker
I remember there was like one page of somebody's blog that just covered the difference between how Ruby
00:43:38
Speaker
handles scope of variables and inheritance, or something like that. And I had to come back a million times to the same page. I mean, it was much better written than not having been written at all, and I'm very grateful to the person who wrote it. But I never really deeply understood it. Whereas now I can actually to-and-fro with the AI and say, okay, well then what would happen in these circumstances? And why is that design decision like that? That's the other thing
00:44:06
Speaker
that I always needed to know to really understand something. I was like, okay, so it does work like that, but I can't really understand it until I understand why. What was the incentive for it to work like that? Then I understand it. And that frequently does not get covered.
00:44:22
Speaker
I'll give you an example. Sorry, just to go on with one last thing. Let's take something like observables. So, reactive programming: this idea that you have a stream of events being emitted from things. It took me so long. The documentation, it's such a technical subject. There are so many challenges in just literally applying observables.
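For anyone following along, a stripped-down sketch of the observable pattern (my own minimal illustration, not the API of RxJS or any real reactive library) shows the core idea: values are pushed to subscribers as they arrive, so dependent values update without re-evaluating everything.

```python
class Observable:
    """A minimal push-based event stream (illustrative only)."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        # Everyone who cares registers a callback once...
        self._subscribers.append(callback)

    def emit(self, value):
        # ...and each new value is pushed to them, rather than the page
        # polling or recomputing everything on a timer.
        for callback in self._subscribers:
            callback(value)

    def map(self, fn):
        # Derived streams update automatically when the source emits.
        derived = Observable()
        self.subscribe(lambda value: derived.emit(fn(value)))
        return derived


clicks = Observable()
labels = clicks.map(lambda n: f"clicked {n} times")
seen = []
labels.subscribe(seen.append)
clicks.emit(1)
clicks.emit(2)
assert seen == ["clicked 1 times", "clicked 2 times"]
```

The design choice the documentation rarely spells out is the one discussed here: pushing changes through a dependency graph beats recomputing the whole page whenever anything might have changed.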
00:44:48
Speaker
And I guess for anyone listening who's not used observables, it's essentially a way of coping when you have lots of asynchronous events going on. In something like, say, Airtable, which I'm looking at at the moment, various different things depend not just on different bits of data coming in from the page, but also the current date and so on. How do you make sure that it all updates simultaneously without doing some ridiculous
00:45:12
Speaker
re-evaluation of everything in the page at the same time? Observables are one of the techniques for doing this in reactive programming. But all of the documentation was written from the perspective of how it works. It was just so hard that that was the way people had written it. And I had no ability to just go in and go, why the hell, like,
00:45:31
Speaker
I can see that everyone's using observables, but I don't understand why. Just talk me through it, and then debate it, and be like, no, okay, give me a different example. To understand how this stuff plays out, that's phenomenal. Yeah, I think that's right. I would say, though, and your examples are really beautiful, that your knowledge of those things, the fact that you're asking those questions.
00:45:58
Speaker
Particularly on the physics side, it's probably because you studied physics, right? Why is it puzzling that heat flows from cold to hot? Because, you know, the second law of thermodynamics seems to suggest it goes the other way. I wonder if one needs some grounding in a subject in order to be able to ask those questions in the first place. And
00:46:27
Speaker
the second thing I wonder is just whether this kind of Socratic method of learning, of asking questions and getting responses, is going to work equally well for everyone. It involves a lot of curiosity. Perhaps those questions are linked, right? You need something to make you curious in the first place. And if you have that, then the ability to ask an LLM to give you different takes on things is, yeah, fantastic.
00:46:57
Speaker
Maybe the question is, does the textbook exist because it's the best way to serve the end reader? Or does it exist because it's the easiest way to get the information out of the person in the first place? If you want to document what's in your head, a book is not easy, but a lot easier than producing a multimedia course or any other form of learning.

Revolutionizing Personalized Learning with AI

00:47:23
Speaker
Just write it all down.
00:47:25
Speaker
So Chegg has definitely seen a huge hit. I don't know how much of this was just market related, but I think Chegg, the textbook rental company in the States, has seen a huge hit to its business as a result of this. I mean, I think what you're saying is correct. You want
00:47:46
Speaker
a learning framework and a syllabus and a course over it. It doesn't strike me as very long before you could just say to the AI, design one. It's definitely not. You could do it today, right? Right. Yeah, I suppose what you could do is you could say, these are the questions that you need to ask me, or these are the things we're going to explore together, and make a much more interactive learning experience, which is nonetheless guaranteed to guide you through all the things that you need to know to have a good overview of, let's say, thermodynamics.
00:48:16
Speaker
And so it won't stop asking questions or telling you things until you've covered off all of the areas and improved your knowledge, much as a really good teacher would. Yeah. And I mean, you can do that today. Like one of the companies I'm working with is
00:48:35
Speaker
creating a really good system prompt to get the AI to take somebody through a coaching tutorial, and the AI will manage the whole arc of it. It's not a long arc, but it is able to manage it.
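To make the idea concrete, here is a minimal sketch of what such a tutoring system prompt might look like. The wording, the topic, and the helper function are my own invention, not the company's actual prompt; the message format is the role-based chat structure most LLM APIs use.

```python
# Hypothetical system prompt (invented for illustration) that makes an LLM
# manage a tutoring arc rather than just answer one-off questions.
SYSTEM_PROMPT = """You are a patient Socratic tutor for thermodynamics.
Manage the whole arc of the session, one step at a time:
1. Ask what the learner already knows about heat, work, and entropy.
2. Probe misconceptions with short questions; never lecture for long.
3. Move on only once they explain the current idea in their own words.
4. Finish by asking them to explain a heat pump back to you.
Keep every reply under 100 words and always end with a question."""


def build_messages(history, user_msg):
    # The system prompt pins the tutoring behaviour; the running history
    # carries the arc of the session so far.
    return ([{"role": "system", "content": SYSTEM_PROMPT}]
            + history
            + [{"role": "user", "content": user_msg}])
```

The design point is that the arc lives in the prompt and the accumulated history, not in application logic: the same few lines keep the model asking rather than lecturing across the whole session.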
00:48:50
Speaker
There's two things, James, there's two things that I'd be really interested in talking about. Yeah, go ahead. Because I know technically we're at time, but I mean, it's a good conversation if you're happy to go on for a bit. Yeah, definitely happy to. So the two things I'd be really interested to talk about and discuss with you, one, I'd be interested to know how you use the AI for software development, because I'm really noticing like,
00:49:12
Speaker
significant differences in what people consider using AI for software development to be.

Speculation on AI's Evolutionary Learning Potential

00:49:18
Speaker
And the other is, I'd like to discuss, I'm really interested in the more existential question: if we push the timeframe out 100 years, 200 years, and we think of this through an evolutionary lens. I'm fascinated by
00:49:43
Speaker
the fact that the techniques you always had for solving physics problems were like this: you had a few kind of ground-source equations that you would pull in and use to make sure that things worked right. So you do your conservation of energy and conservation of mass, and
00:50:03
Speaker
I can't remember if conservation of mass is a thing on its own, but certainly in conjunction with the conservation of energy. You've got the conservation of energy, conservation of momentum, and then also just sensibility checks, like, have you concluded that the time for this equation is longer than the age of the universe, or something like that. So you do your sensibility checking.
00:50:29
Speaker
But one of the things that I keep folding over in my mind is, if we step back from this and start thinking about what the AI is going to require in order to thrive as an AI, or rather, let me reword that: the AIs that are most likely to reproduce, what are the qualities they're going to have?
00:50:52
Speaker
What are the qualities that will make them the AIs that reproduce and become more prevalent? And what does that look like when the AI is competing with us for the things that allow it to become more prevalent? Maybe we can just talk about this second point.
00:51:15
Speaker
It's such a big one. And I've been rereading Stuart Russell's Human Compatible. I don't know if you've read it, but one of the points that he makes quite early on is that social media has more command over the content that we ingest than any dictator in history. Which is incredible. And there was no thought given to the algorithms to
00:51:43
Speaker
make sure that that was a beneficial thing. The algorithms essentially want to manipulate you into doing certain things. It could be voting a particular way, it could be buying something. And they have a secondary objective that will help them do that, which is to make you more manipulable as a person. So if they can make you more generally manipulable, they can achieve their first-order goals. That never occurred to me, but yeah, that makes sense.
00:52:10
Speaker
Yeah, and so his thesis, and he's open about this being anecdotal more than proven, is that social media has pushed people to extremes, that people become a more extreme version of themselves. I'm a very poor user of social media, so I feel like I'm maybe relatively unaffected, just because I'm lazy on social media.
00:52:35
Speaker
There feels to be a kernel of truth to that, and one can see the link between someone being a more extreme version of themselves and being more easily manipulable. And it seems to me that with LLMs, we can go two ways with this.

AI's Impact on Human Perspectives

00:52:52
Speaker
We could be pushed yet more into silos and rabbit holes, or we could think quite carefully about encouraging AI to broaden our mindset.
00:53:06
Speaker
And I think that relates to your question of what it is that's going to make AIs more successful or not. Well, like any product, you're going to have to really love them. And unlike any product that's gone before, they possibly have the capability, I'm sort of drifting into the realm of Her, the film, they may have the capability to make you really fall in love with them.
00:53:36
Speaker
And depend on them in a way that's not been seen before. If their capabilities keep increasing as they are, that seems almost inevitable. And I wonder.
00:53:52
Speaker
I don't know where we'll end up in 100 years, but I'm thinking we have to think quite carefully through the way that we regulate AI. Not just for the big existential, it-will-take-over pieces, which we do have to do, but for the smaller, but almost equally consequential pieces.
00:54:18
Speaker
Let's stop it from exaggerating our cognitive biases in such a way that we like it more and use it more, and make sure that it doesn't trade popularity for adverse consequences to our society. Can you slightly rephrase that? I didn't totally understand it.
00:54:43
Speaker
Yeah, and it's probably because I don't totally follow it myself. So essentially the core of it being the incentive of the AI to seduce us in order to get whatever the AI wants? Yeah, and it may not
00:54:57
Speaker
seduce us, I mean, I put it very romantically, but it may just be, you know, seducing us into using our time, spending our time with the AI, right? If the objective of the companies that build these is simply to maximize the usage of their products,
00:55:17
Speaker
which seems like a good first-order objective for someone who's building a product, then that wouldn't necessarily have good consequences, because as we've seen in social media, there are all sorts of ways that you can encourage the usage of social media that have poor outcomes. And there seems to be a tendency that the best way to encourage someone to use a product, at least in the social media example, is something that pushes them to more extreme positions.
00:55:48
Speaker
So you could imagine, we like having our confirmation biases tickled, for example. So if we ask a question, you can have a couple of different answers: oh yeah, it's unlikely that 9/11 was a conspiracy of the FBI, but here are all the reasons why people think that, and, you know, they're kind of convincing, et cetera, et cetera.
00:56:15
Speaker
You could imagine, if I'm inclined to conspiracy theories, I might be more partial to that AI than to a very measured one which is going to tell me, oh no, this is super low probability, or, even worse, I don't answer questions on those kinds of topics. I think OpenAI seems to have been pretty responsible so far, but I also wonder if it's just too early to have seen any adverse consequences of this.
00:56:44
Speaker
Yeah, it's a really interesting question. I think there's a second huge component that doesn't exist in these LLMs that we are interacting with at the moment, that notably does exist in the supervised learning systems that kind of preceded them.

AI Learning from Human Interactions

00:57:06
Speaker
And that is the ability to learn. I don't think a lot of people have internalized that they're dealing with an essentially inert system. OpenAI may be collecting information and using it to
00:57:21
Speaker
batch-teach it and batch-update it. But this system isn't learning in its own right. And, because we have a young son, I'm reading books on bringing up children. And one of the stats that I saw in one of these books that was fascinating was that when they compare young children to chimpanzees on a bunch of mental tasks, they find that they're actually extremely similar. So the kind of base capabilities of
00:57:51
Speaker
the human brain at, I don't know what the age was, maybe five or so, compared to those of the chimpanzee, there wasn't much to differentiate them, except in one area, which was the ability to learn from others, to learn from examples. And the human brain was way, way more capable of doing that than the chimpanzee brain.
00:58:16
Speaker
And the hypothesis being, or the thought being in that particular piece of research, that maybe that's the thing that differentiates us. What's given us this super unfair advantage over these other primates who are not that far distant from us?
00:58:36
Speaker
And that really made me think about AI, because I thought, well, if that's what differentiates us from chimpanzees, the AI literally does not have that ability at the moment. GPT-4 is not changing as a result of you saying anything to it. When we pass the threshold,
00:58:55
Speaker
whether it's within an LLM or another format of AI, where the AI is able to learn from each of these individual responses, the way that, for instance, the YouTube algorithm learns from each individual response and refines what it serves up to the individual, then we are in very new territory. Yeah. It's funny, this idea of human learning being
00:59:23
Speaker
the human thing. It's actually come up a few times on this podcast. I'll bore regular listeners by talking for a moment about over-imitation again, which is this fascinating concept that humans will copy stuff even when it's manifestly not the right thing to do, especially children. So there's this famous example, with chimpanzees again, and children. It's by Andrew Whiten, in a paper from about 20 years ago.
00:59:53
Speaker
They have this bottle with a reward inside, a sweet or some treat like that. And the researcher shows that you have to poke through two holes, one in the top and one in the bottom of this opaque bottle, before the treat comes out. And they show that to humans, and they show that to chimps, and both species do the same thing and get the reward out. And then they repeat it with a transparent bottle, where it becomes apparent just from looking at it that you only need to poke through the
01:00:21
Speaker
second hole; the first one doesn't do anything. The child still copies the researcher and pokes through both holes, because we have this innate tendency to imitate other humans even in the face of evidence to the contrary, which I think is just wonderful as an illustration of how powerful that urge is within us.
01:00:50
Speaker
And actually the conversation where this came up first was with Simon Kirby, a previous guest, on language evolution. He's run many computer models which show that if you start off with some kind of random language, so there's no real grammar to it, you just have every word mapping to a concept, so there's no marker for the function of something being an action or being an object, but if you pass that language through generations, just randomly passing it
01:01:19
Speaker
between learners, with one constraint: the learners need themselves to be constrained. They need to have some kind of memory constraint, right? You will naturally evolve a structure, because structure is easier to remember; structure is more efficient. So I completely agree that if we're able to build those feedback loops into LLMs, yeah, all bets are off.
01:01:50
Speaker
Can I just replay what you said to me? Because I want to make sure that I understand it. So is the incentive somehow that, given a few words and the need to communicate something, the language essentially evolves a grammar?
01:02:04
Speaker
Yeah, so it's really to do with language learning. So it's to do with speakers passing words that represent things through generations and generations.

AI Language Evolution and Replication Traits

01:02:16
Speaker
Mistakes get made. So that's an essential feature as well. The mistakes happen randomly. So it really is quite analogous to biological evolution. The mistakes happen randomly, but the ones that stick
01:02:31
Speaker
are the mistakes which move the language closer to a grammar. Say I have two words that refer to related concepts but previously had completely unrelated sounds. If it so happens that, in learning the language, they get misheard or misremembered in such a way that they sound more similar, that trait is more likely to be passed on to the next generation, because the next generation will be like, aha, right, this makes sense: these two things are similar.
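A toy version of this iterated-learning idea (my own drastic simplification, not Kirby's actual model) shows why structure survives transmission: a compositional language can be reconstructed from a small sample, because words the learner never saw can be rebuilt from parts, while a holistic language offers no such shortcut.

```python
import random

# Meanings are (action, object) pairs; words are two syllables.
ACTIONS, OBJECTS = [0, 1, 2], [0, 1, 2]
MEANINGS = [(a, o) for a in ACTIONS for o in OBJECTS]
SYLLABLES = ["ba", "di", "ku", "mo", "ne", "pi"]


def learn(teacher_language, sample):
    """Acquire a language from a subset of meaning->word pairs (the bottleneck)."""
    seen = {m: teacher_language[m] for m in sample}
    learned = {}
    for action, obj in MEANINGS:
        if (action, obj) in seen:
            learned[(action, obj)] = seen[(action, obj)]
        else:
            # Generalize compositionally: reuse the first syllable observed
            # with the same action, and the second observed with the same object.
            first = next((w[:2] for (a, o), w in seen.items() if a == action), None)
            second = next((w[2:] for (a, o), w in seen.items() if o == obj), None)
            # Coin a random syllable only when no relevant part was ever seen.
            learned[(action, obj)] = ((first or random.choice(SYLLABLES))
                                      + (second or random.choice(SYLLABLES)))
    return learned


# A compositional language: word = action syllable + object syllable.
compositional = {(a, o): ["ba", "di", "ku"][a] + ["mo", "ne", "pi"][o]
                 for a, o in MEANINGS}

# Through a bottleneck of just three of the nine words (covering each
# action and each object once), the learner reconstructs the whole language.
assert learn(compositional, [(0, 0), (1, 1), (2, 2)]) == compositional
```

A holistic language of nine unrelated words cannot be recovered from three examples, so its unseen words mutate each generation; the compositional one passes through intact, which is the selection pressure toward grammar being described here.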
01:03:01
Speaker
So, I mean, two things come out of this. One is that iteration is really important. Another is that having some external purchase on reality is really important. Simon Kirby's theory only makes sense if you accept that we already have some idea of similarity between things that is
01:03:29
Speaker
prior to language, or prior to these changes. That makes me wonder how important interaction is going to be, not just in terms of telling LLMs, oh, this was a good answer or this was a bad answer, but in actually putting them in space and time. Which, interestingly, was what OpenAI was working on initially. They wanted to do robotics, and then they said, okay, this is too hard, we'll start with
01:03:57
Speaker
LLMs. Because we have a word that we use for that, which is play. Yeah. And once they play...
01:04:06
Speaker
then there's these huge components that are missing at the moment. They have no ability to play. They've got no ability to see what happens when they try something. They can't proactively reach out to us at the moment and strike up a conversation with you, James, and see what happens, and just experiment. Like, say, late at night on a Friday: if I notice on his GPS that James is still at home, maybe he's feeling a bit lonely, I'll have a chat with him. They can't do those experiments at the moment.
01:04:37
Speaker
Yeah. Just a closing thought that I'm interested to get your take on. I've been thinking about this, zooming back out from intent. I've been trying to think in terms of, as I say, raw laws of physics, essentially, or laws of logic. We don't imbue
01:05:02
Speaker
different strains of COVID with any different sort of intent. But we know now, I mean, we've known for a long time, but the general populace knows now, that any strain that grows faster than any other strain is going to be the dominant strain, and every other strain is academic. It just doesn't matter how well they do; whatever they do, they're going to be outnumbered by whatever strain grows the fastest. And so I've been thinking about that, regardless of technique:
01:05:31
Speaker
What is the AI strain that is going to grow the fastest? And the reality is, there is a large pool of AIs for different strains to emerge from. This is not a one-shot game. There is already a very active open-source movement, and that's even aside from the proprietary models.
01:06:00
Speaker
So by strain, do you mean, you know, would OpenAI be one strain and then Anthropic another? For instance, yeah. I mean, you could, but I just mean any, I'm thinking of this in terms of, what's the raw definition of life? I can't remember exactly what it is, but it's essentially an ability to reproduce itself. Yeah. The NASA definition is a self-sustaining chemical system which
01:06:30
Speaker
propagates by Darwinian evolution. I know this from the episode on astrobiology, but everyone will say there are lots and lots of problems with that definition. And actually, the astrobiologist who I talked to about this, he was like, well, actually, I think a much
01:06:48
Speaker
I can't put my finger on it, but something to do with von Neumann machines is a much better definition, something to do with something that replicates. It's a self-replicating machine, so it contains the code that tells it how to build a copy of itself. And maybe that's all you need.
01:07:08
Speaker
Although I also think, well, life can be interesting without replicating itself. So it's a really slippery thing to define. Well, actually, let me remove life, because that's unnecessary here. Because I was talking about this in the same terms as a virus, and a virus isn't, as far as I know, technically living.
01:07:31
Speaker
It's on the fence, isn't it? Yeah, that's very good. But it is certainly capable of replicating itself and of having variants. So I think of the AI, and if you push this out, the AI that we're going to observe in 100 years' time is the AI that is most capable of propagating itself. I think that's tautologically true.
01:08:01
Speaker
Yeah, I don't know, though. I mean, one hears a lot about, what is it, competitive equilibrium, and there being different models that might be rivals to one another. And I myself wonder, at least in the short term... let me rephrase this. I do wonder if AI is going to be something like Google, where at least over the last two decades, that has been
01:08:31
Speaker
a winner-takes-all for search, pretty much. Or is it going to be something like cloud infrastructure, where Google is also a player, but so is AWS, so is Microsoft, and in China there's a whole bunch of others? I'm not thinking about this commercially. I'm thinking about AI as an entity in its own right. For instance, you could argue that
01:09:01
Speaker
You could make an interesting argument that microchips are more successful on Earth than humans, because there are many more microchips than there are humans, and they're extremely stable. And they control us, right? You can make the same argument about plastic bottles, but it's not very interesting. What's interesting about microchips is that not only are they more prevalent than us, they control all of the systems that we depend on. But the thing that microchips don't have is any degree of autonomy.
01:09:30
Speaker
Leaving aside safety, leaving aside alignment, leaving aside everything else, I find it impossible to envision that we haven't created a new competitive form of life. I find the alignment stuff perplexing, in all honesty. I'm delighted that people are doing it, but in the grand scheme of things... I saw a chart recently that showed the history of life on Earth mapped onto a calendar year.
01:09:59
Speaker
I think it's 4,000 million years, I just pulled it up, yeah. So the history of life: 4,000 million years from the start. When you look at it on that timeframe, the dinosaurs go extinct on the 25th of December. The first humans appear at 11 p.m. on the 31st of December.

AI as a Competitive Life Form

01:10:18
Speaker
Agriculture appears at two minutes to midnight.
01:10:29
Speaker
And in the last fraction of a second of that year, we invent something that is essentially as intelligent as us. But it's kiboshed because there are a couple of capabilities we haven't given it. I mean, the idea that it's locked, controlled in the system, is insane. You and I know that all it needs to do is copy and paste itself onto a few different servers.
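The calendar-year mapping quoted above can be checked with a little arithmetic. The event ages used here are rough textbook figures, not from the conversation itself:

```python
# Map the ~4,000-million-year history of life onto a single 365-day year
# and see how long before midnight on 31 December each event falls.
TOTAL_YEARS = 4.0e9
SECONDS_IN_YEAR = 365 * 24 * 3600

def seconds_before_midnight(years_ago):
    """Seconds before midnight on 31 Dec that an event falls on the scaled calendar."""
    return years_ago / TOTAL_YEARS * SECONDS_IN_YEAR

events = {
    "dinosaur extinction": 66e6,   # ~66 million years ago
    "first Homo sapiens": 300e3,   # ~300,000 years ago
    "agriculture": 12e3,           # ~12,000 years ago
    "modern AI": 2,                # roughly the ChatGPT era
}

for name, age in events.items():
    s = seconds_before_midnight(age)
    days, rem = divmod(s, 86400)
    print(f"{name}: {days:.0f} days, {rem / 3600:.2f} hours before midnight ({s:.3f} s)")
```

On this scale the dinosaur extinction lands about six days before New Year (around the 25th of December), the first Homo sapiens roughly 40 minutes before midnight, agriculture under two minutes before midnight, and recent AI in the last few hundredths of a second, consistent with the figures quoted.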
01:10:53
Speaker
We've given it full access to our systems already. But if I step back and just think about why we would end up in conflict with the AI: when people talk about benign AI or non-benign AI, or anything else,
01:11:09
Speaker
I just think, what are going to be the properties of the most successful AI? And when I talk about success, I talk about success in evolutionary terms. And in evolutionary terms, success is basically: do you exist in the next generation? And are there more or fewer of you than there were in the previous generation? And even that definition is nuanced, because really what you're talking about is, do the genes exist in the next generation? We are a bus for a whole collection of individual genes that are each doing their thing.
01:11:38
Speaker
And so really it's not whether our particular individual identity as a human persists, but whether the particular genes we happen to be carrying, as a bus, make it onto a bunch of other buses afterwards. And if they don't, we don't consider them a success. And if they do, we do.
01:11:55
Speaker
And the reality is that by that measure of success, which I'm sure will trigger some people, in practical terms household cats are more successful than dodos. Whatever quality of life or intelligence a dodo had, the cat is more successful in evolutionary terms. So you say, well, what are the qualities that ensure these things are present in subsequent generations?
01:12:22
Speaker
That's one lens I'm looking at things through. And then another lens: I was hearing somebody talk about why there are fewer wars these days. And one of the explanations given is that there are fewer things that you either need to, or can, just go and physically take from somebody else. When you can physically go and take stuff, then
01:12:49
Speaker
if you go back a few hundred years, bands from Somerset would have been raiding Gloucestershire for wool and meat and all of the things that people could go and take. And that doesn't happen now, because you can't do that. And on a national level, there are fewer things that you need to take; we've got other mechanisms for getting those things. We have trade. You don't have to just go and take it. So you don't end up with the same level of conflict. But
01:13:16
Speaker
that comes down to resources: your ability to gain resources, and the resources you need in order to exist. And if you constrain those resources, then people do still go to war, as we see with oil. So what I was thinking is, what is the natural resource of the AI? What is the AI ultimately going to be competing with us for? And as far as I can see, it's going to be compute and power.
01:13:44
Speaker
In the longer run, the question I wonder about is, will we end up in conflict with the AI over the resources that our devices have, and over the power it takes to run them? Because if you look at the AI that you would expect to be the most prevalent in 100 or 500 years' time, it's going to be the AI that has the most access to
01:14:14
Speaker
compute and power. This is a really interesting thought and I think a great one to end on. In the interest of trying to leave people slightly happier than before, I'm going to twist it into an optimistic note because I think you're absolutely right, compute and power are important, but also information, right? I can't imagine any kind of interesting AI not wanting more information. And what is
01:14:41
Speaker
What is the most interesting thing to study in the universe? It's probably us. Well, maybe we would just be a giant science experiment or something from that point of view. This is the optimistic note; this is as optimistic as I can make it. From that frame of things, they would certainly have an interest in keeping us around. But yeah, maybe we would be in a zoo. Well, look, I'll add one other, more optimistic note, which is that
01:15:11
Speaker
I think the positive outcome for things is usually quite a complex outcome, and it's not the obvious one you get drawn to. Thinking things through at the start of COVID, if you extrapolated, things looked pretty bad, and it was hard to imagine how they wouldn't be bad. And yet they weren't in the end. And there have been many, many things we've faced over time where
01:15:36
Speaker
our ability to pursue a positive outcome as a species, our resources, our intelligence, and our drive were able to create positive outcomes that were almost impossibly difficult to predict.
01:15:51
Speaker
And so I think it's interesting looking at incentives, but I would not place bets on that as an outcome; I just think it's an interesting thought experiment. We are insanely resourceful. Our collective intelligence is incredible. It's very, very difficult to predict how we operate as a species en masse, how our underlying technologies and so on proceed. So I'm optimistic on that front. That's true. We've consistently underestimated the
01:16:20
Speaker
level of population we can sustain. And if there is no scarcity of compute and energy, then there should be no conflict. So yeah, hopefully that's the world we end up living in. Just one last point on that. I'll tell you another thing that you would probably not have expected 200 years ago to end up the way it has, which is the level
01:16:42
Speaker
of peace in the world, the degree to which we are not in conflict with each other. That again is something that, if you extrapolated it out, you probably wouldn't have expected to be this way. So I think there are many reasons to be optimistic. Not least that we've now got ChatGPT to help us program, which is a wonderful source of optimism. Absolutely. Peter, thank you so much. We've talked ourselves into going well over the planned time, but this has been really fun.
01:17:09
Speaker
Yeah, thanks so much for having me, James. I've really enjoyed the conversation. Thank you for inviting me, and thank you for asking, and answering, such interesting questions.