Become a Creator today!Start creating today - Share your story with the world!
Start for free
00:00:00
00:00:01
Nathan Labenz on the State of AI and Progress since GPT-4 image

Nathan Labenz on the State of AI and Progress since GPT-4

Future of Life Institute Podcast
Avatar
3.2k Plays4 days ago

Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4. 

You can find Nathan's podcast here: https://www.cognitiverevolution.ai   

Timestamps: 

00:00 AI progress since GPT-4  

10:50 Multimodality  

19:06 Low-cost models  

27:58 Coding versus medicine/law  

36:09 AI agents  

45:29 How much are people using AI?  

53:39 Open source  

01:15:22 AI industry analysis  

01:29:27 Are some AI models kept internal?  

01:41:00 Money is not the limiting factor in AI  

01:59:43 AI and biology  

02:08:42 Robotics and self-driving  

02:24:14 Inference-time compute  

02:31:56 AI governance  

02:36:29 Big-picture overview of AI progress and safety

Recommended
Transcript

Introduction to the Podcast

00:00:00
Speaker
Welcome to the Future of Life Institute podcast. My name is Gus Dogger and I'm here with Nathan Lebens, who is the host of the Cognitive Revolution podcast. Nathan, welcome to the podcast. Thank you. It's great to be back.
00:00:13
Speaker
Fantastic. All right, you have been following AI closely, and that's an understatement. So what we want to do here is is jump into to everything you've learned, starting with how is AI used today?

Advancements in AI Context Windows

00:00:24
Speaker
what What have we gained from having longer context windows from having multi and modality? Because we've had those two things for a while now. um ah How has that and of improved usability?
00:00:36
Speaker
you know This analysis is a little bit of a reaction to a common meme that nothing has happened since GPT-4. I think depending on kind of where you hang out online and how much you personally use AI in your daily life, you could be excused for thinking that yeah we've kind of hit a plateau at the GPT-4 level and you know maybe this means like the whole thing is stalling out and you know the whole thing is going to be you know ultimately some sort of bubble, nothing burger, and you know we'll all look back on this and and laugh one day. you know Probably not surprisingly, I don't think that's the case. And I do think there is like a lot when you dig into the details and certainly when you just do a lot of hands-on day-to-day use, I think that becomes like pretty obvious. I think it it is becoming increasingly clear
00:01:20
Speaker
that the original GPT-4 really represented a huge effort to essentially do, if not the max, like pretty close to the max scale that was possible at that time, just given where we were with chips and given where we were with resources.

GPT-4 Development and Launch Timeline

00:01:37
Speaker
and you know even Obviously, OpenAI even in 2022 could command like pretty serious resources, but you know probably not as much as they can command today. So I think they really went for it with that and put a point on the board that was like well ahead of everything else. ah Obviously, it's taken a while for other things to catch up, but that doesn't mean that like nothing has has happened in the meantime. so Just to remind us, when did TPT-4 finish training?
00:02:01
Speaker
finished training in late August 2022. I think I'm pretty sure I got that initial test access within no more than a couple of weeks from when it kind of came off of the you know the smoking hot GPUs. And then it took another six months before they launched it publicly in March of 2023.
00:02:23
Speaker
So we're a little bit more than a year and a half now from launch and a little bit more than two years from finishing training. And yeah, the original one, you know, if you it's it's worth thinking back to kind of where things were at that time.

AI Models' Context Window Expansion

00:02:38
Speaker
Eight thousand tokens was the limit that you could put into the original GPT for. And that was enough. You know, you could if you figure 500 words a page, that's like 16 or so pages of text, you know, that if you figure a couple hundred words a minute for a conversation. You could have you know a half hour long conversation with it, but obviously that is pretty limited. And that has you know scaled up dramatically, right? Where the smallest context window on the market today is actually open AI is at 128,000 tokens.
00:03:11
Speaker
which is you know more than an order of magnitude more. And then you've got Claude, which has 200,000 tokens, public facing, 500,000 for their enterprise customers. OpenAI may have more available to select customers as well. they do They have said that they do have even longer ones coming. um And now Gemini has taken the lead from Google with up to 2 million tokens of context. And again, if you figure 500 words a page,
00:03:37
Speaker
Yeah, what's the use for for two two million tokens of context? Who could possibly use that? Is kind of a challenge to you know to really make good use of that. And to some extent, there is...
00:03:48
Speaker
the question of like real context versus actually useful context. I think in early versions of these long context models, people did report, sure, you can stuff a lot of stuff in it. It doesn't error, but does it actually have command of that full context? So that's been its own kind of you know improvement process over the last year and a half. I've done interesting things, though, where, for example, you know I'll take 20 different papers. This was an experiment I ran with Gemini once. And you couldn't do it on any of the other models because it was too many tokens for any of the others.

AI's Role in Summarizing Research Papers

00:04:17
Speaker
I took like 20 different papers on mixture of experts models, which you know is a ah trend in architecture, probably not worth getting into in too much detail. But it was something I wanted to understand better. There's a pretty extensive literature out there about it. And so I just took 20 papers, dumped them all into Gemini, and just started asking but questions and having it summarize across this entire literature what I needed to know you know to to better round out my worldview on the subject.
00:04:45
Speaker
So that kind of thing is pretty interesting. Obviously, people are you know interested in big corporate knowledge bases, which are still well in excess of millions of tokens. But if you're doing some sort of like Q and&A type function, and one of the most common applications of generative AI over the last year has been this chatbot with knowledge base. right like So many companies are like, man, we have people coming to us with policy questions, questions about their health insurance, questions about whatever. Can we have an AI answer that? That would be nice. Okay, sure. But we can't put all, you know every bit of information we have won't fit even into the current Gemini. So we have to do some sort of
00:05:23
Speaker
retrieval system rag in the parlance of the field, retrieval augmented generation.

Retrieval Augmented Generation in AI

00:05:28
Speaker
Basically, you go you know take the user's query, do a search against the knowledge base, pull in the relevant information, and then try to have the AI answer in a grounded way. Doing that with the original 8,000 tokens was really hard because you had to get the search really dialed in to be able to answer effectively With longer context, you can turn up the parameters on how many documents are we going to actually feed into the AI to facilitate the answer. And that makes just a lot of things a lot easier. I'm also really interested in just trying to like get robust assistance in my own life. You know, I went and just exported my last five years worth of email. That turns out to be millions of tokens. So to have like
00:06:13
Speaker
a reasonably holistic sense of who a person is, what they're doing, you know who they communicate with. That is a lot.

AI's Application in Software Development

00:06:20
Speaker
um And code bases are another big thing that people are, of course, are using AI to to generate new code all the time. That has kind of emerged as one of the the early killer use cases. But again, if you have only 8,000 tokens to work with, then the AI is not going to have any sort of broad sense of the project that you're working on. Whereas with millions of tokens, it can handle you know pretty significant projects.
00:06:44
Speaker
Does it actually mean that, yeah does this actually mean that say say you put an entire code base into a model with a huge context window, can it actually think if I change something in this part of the code that will break something in this other part of the code? Is it is it that good working from from kind of pure context?
00:07:02
Speaker
Yeah, it's a very nuanced answer there. We're starting to have, but we don't still have great evaluation, you know standardized test suites for these kinds of capabilities. One of the early tests for the long context and and the ability of the models to use the long context effectively was called Needle in a Haystack.
00:07:23
Speaker
where basically you would take some long text and you would insert something very strange and you know clearly out of place. And then the idea was like, can the model identify that thing that is that is clearly out of place? So Claude, for example, we just became famous because you know you insert the great Gatsby or whatever it as the full context and you put some random sentence about pizza in there. I believe that was the the actual test that generated this result. And then Claude said,
00:07:52
Speaker
famously, there's a sentence about pizza, you know, so it, you know, it passes the needle in a stack test. But then it went on to say that I think that, you know, this is very strange, like that wouldn't normally be there. It seems perhaps you're testing me or something like this. And so in that you see both a you know, savviness to the context, but also kind of a certain amount of situational awareness to like what is going on. It is hard to really pin down just how good the the frontier is moving all the time. So you're you're on a treadmill of just trying to keep up with the the latest model and, you know, constantly trying to recalibrate your assessments to have an accurate picture.
00:08:29
Speaker
The best kind of the new standard benchmark for coding in particular is SWE bench, software engineering benchmark, which is an open AI has released a verified version of it. So what they originally did was just took a bunch of open source projects and issues that people had logged in those open source projects on GitHub, this isn't working, or we want to add this capability or what have you. And then also connected to that, the actual code changes that were committed that resolved that issue. And then the question is like, can the AI, you know, generate

AI's Multimodal Capabilities

00:09:03
Speaker
it? And what's tricky is it doesn't have to be exactly the same, right? Because you could
00:09:06
Speaker
Solve the same problem in multiple different ways so it doesn't need to be token for token what the humans did but it needs to be a functional equivalent. And anyway with the verified set now the best kind of frontier capability is like fifty percent so fifty percent of the time it can take a code base and a issue and figure out how to.
00:09:26
Speaker
do the thing in such a way that like all the tests pass and it you know is is deemed to be correct. That's up from like single digits at the beginning of 2024. There was a moment when a product called Devon got 12% and it was like, oh my God, you know this is that was like six months ago. so but and and What caused this? Is this longer context or or what what resulted in this improved performance?
00:09:51
Speaker
I think it's probably a bit of everything. The best score right now goes to Claude 3.5 San at the latest release. and you know They've clearly been spinning the RL AIF centrifuge nonstop at Anthropik for the last couple of years. and so they're just It seems like we have maybe some early form of self-improvement, not necessarily in the way that it was originally envisioned where people were thinking,
00:10:18
Speaker
that the AI would look inward and re-examine its architecture and yeah improve itself that way, although that could be coming too. but Because especially as it's getting good at code, you know that that frontier maybe starts to open up. But for the moment, they're just taking one instance of the model and asking it to critique and improve what the model itself did you know at the object level and gradually kind of improving, improving, improving. Of course, they're bringing in a lot of human data as well. ah they're they're thriv you know They're throwing everything at these challenges. And basically, it is it is working. What about multimodality? So from the original TPT-4 until now, what ah what have we unlocked by getting multimodal in input, mo multimodal output?
00:11:02
Speaker
Yeah, this is one where I think you see the classic, the future is here, it's just not evenly distributed. I think that is pretty applicable to the multimodality frontier because I don't even think we've seen everything really released that exists. The first major multimodality which they debuted with the original launch of GPT-4 was that it could see. And they showed a couple of interesting demos where you know they asked it to like use a computer screen or they there's the famous example of a guy who's like attached to the back of a taxi in New York with an ironing board, ironing, and GPT-4 can tell you exactly what it's looking at and like why why is this strange? you know and it It has that whole capability. But they took like another six months to even launch the first version of that to users.
00:11:50
Speaker
so That happened and then we were kind of getting used to, okay, now we can feed images into these things. That's pretty cool. For the kind of stuff I do at Waymark, which is my startup, and we do small business video creation. Super useful because the businesses all have assets. you know They have pictures of their business, pictures of their products, whatever.
00:12:09
Speaker
and you want to create a video that and this you know this turned out to be an incredible jumping off point for me to to try to figure out everything that's going on in AI because video is sight, sound, and motion. right It's script and it's the imagery and it's you we also use the voiceover to try to create a coherent thing and they all have to work together.

AI in Video Creation

00:12:27
Speaker
so If you write a script, you want to have imagery that makes sense with that script. and you know Similarly, like depending on what the business actually looks like, what images they have, that would naturally dictate what sort of script you would write. So we used to have to do that through a whole complicated mix of models where we had dedicated image captioning and they were pretty generic and you'd get captions back like, a woman and a man are sitting at a table and be like, okay, cool. But I don't know, you know is that a doctor's office? Is it a restaurant? you know is that ah Is it a retirement planning session? like what kind of What's going on here?
00:13:00
Speaker
they didn't really have that resolution up until GPT-4 class models. And now they'll just tell you like exactly, oh, it's a nurse you know in the examination room, and the guy's on the table, and you know what color is his socks? like It's getting really good. And it's very it's very good on the vibes, too. it can tell you know One of the things we do is just simply say to the model, like which of these images would the small business be proud to put forward in advertising?
00:13:27
Speaker
So we don't, we don't have to ask now, like, what is this and try to triangulate. We can literally just say like, tell us which ones we should use. And it gives us like really quite good results. The balance relevance and also aesthetic. But yeah, that's, you know, that's really just the beginning. Now we have GPT-40.

Introduction of GPT-40

00:13:43
Speaker
This is not entirely clear. you we're ah For a lot of these things, we're kind of reading between the lines and in tweets that people have made and stuff like that. But it seems that GPT-40 is a single model that jointly handles text and visual input and audio input all in one kind of shared space. So the O is for like Omni, and it seems to do all this. And again, the exact architecture, of course, they've not published, but they've Greg Brockman put out an interesting tweet where it seemed that he had used the model to generate an image
00:14:22
Speaker
of a chalkboard like in a lecture hall where the chalk said, you know, essentially we're going to model all these different modalities jointly in a single, you know, a single loss function, single, single embedding space, you know, whatever the different ways to think about that. The upshot of that is it can talk to you now. So you know people have seen that demo and that's now released pretty broadly. If you haven't used the chat GPT advanced voice mode, I would definitely encourage it.
00:14:49
Speaker
It is still slightly rough on the interruption mechanics. You know you and I have ah will have a better natural you know interjection and pause and listen to the other than you can have with the AI, but it is getting pretty good. and It can also do all sorts of things like speak all the languages. I recently asked it in the context of a Weimar test. I put it in a Nigerian restaurant in Detroit where I live. and Then I asked it to read the voiceover in a Nigerian American accent so it would be more authentic. It turned around and you know did exactly that, gave me the the same script over again in a Nigerian American accent.
00:15:25
Speaker
So incredible potential there, I think for real time tutoring. They haven't released the full vision capabilities because they've also shown that it can watch the desktop and kind of, you know, see what you're doing and potentially provide real time coaching for you as you go about your work. That's a pretty fascinating possibility, but isn't broadly deployed just yet.
00:15:47
Speaker
Just on the accent point, it's interesting when i when I speak to the newest version, so the advanced voice voice mode. i can And i choose I choose English as a language, and i'm I'm natively Danish. And so if I talk Danish to it, it'll talk to me in an um and Danish with an

AI in Surveillance and Privacy Concerns

00:16:01
Speaker
American accent. And I haven't asked for this, but this is just how it... This is just the accent that kind of arises in the model. I find one of the things that has impressed me the most about multimodal is, for example, taking pictures of groceries or just food items on a table.
00:16:17
Speaker
and asking chatgbt in this instance to guess the calorie count. Tell me how many grams of butter is on here. it It can do that if you give it context like a carton of milk, it knows the size of that carton of milk, and they then can estimate how many grams of butter is beside the carton of milk. and you know Humans are ah notoriously bad at estimating ah calories by just looking at food items, but it does pretty well. And this is this You know, thinking just maybe five years, maybe three years back, this would have seemed like an impossible task to solve, I think. And it's getting so cheap, too, that that sort of always on or almost always on surveillance or supervision or coaching, you know, and I think the framing of this is going to be like highly you know contested.
00:17:05
Speaker
But it is becoming affordable to imagine solutions where the thing is just always on, kind of always listening, always watching. It's not quite there yet, or at least maybe we don't have enough scale of compute available to support that for everyone. And I think that is probably a big part of why OpenAI hasn't released everything that they have shown, because they maybe just literally don't have enough capacity to serve it all at scale while also you know working on their next model and you know allocating compute to the training. But the prices have come down so far that you can kind of squint and see it.

Cost Reduction and Accessibility of AI

00:17:41
Speaker
Originally, the the first GPT-3 that was available via the API from OpenAI i was they used to price it in thousands of tokens and it was six cents
00:17:52
Speaker
for 1000 tokens. If you did a fine tuning, it would be 12 cents per 1000 tokens. So again, that's like two pages for, you know, in ah in a way more context, if we're going to create one video for a small business, I always used to think, okay, that's about 1000 tokens, I would think that's about 10 cents. And we were perfectly happy to pay that because you know It was like, there's no other way to get this kind of user experience. right So in terms of the value we could provide the users and and what that would do for our business, the 10 cents was not really a problem. But it has come down now by literally a factor of about a thousand. The GBT40 mini model, which is not the best model, so there are more expensive ones of course, but
00:18:35
Speaker
It is nevertheless still way better than the GPT-3 that I'm referring to. It basically costs the same to get a million tokens as that earlier one did to get a thousand tokens. So they've now shifted the pricing scheme as well, a full three decimals. you know It used to be, this is how much per thousand.
00:18:54
Speaker
Now, they presented it as this is how much per million and it's you know basically the same order of magnitude for GPT-40 mini as it used to be for GPT-3.

AI's Performance Across Different Tasks

00:19:07
Speaker
That is wild. Which applications, can yeah where can you use these models with this much lower pricing and where you couldn't use use AI before?
00:19:17
Speaker
One of the big challenges is everybody sort of has to figure that out for themselves. There is this like it often called the jagged capabilities frontier where you have you know even a model like GPT-4 O-mini which is not the frontier model today is like pretty clearly superhuman in some things. I often say translation as like my first example of something that you know No human can speak all the languages, the AIs can speak all the languages. They can kind of translate any language to any other language. That's amazing. At the same time, they're like terrible at some things. you know they're Things like common sense, spatial reasoning, they're getting a little bit okay at now. It's it's definitely improved and the multimodality has contributed to that. But you still have like a big problem if you wanted to
00:20:02
Speaker
you know figure out like exactly how should I rearrange these books you know in ah in an optimal way or even a way that you know would make sense to people. so Everybody does have, as they try to figure out how to apply this stuff, they do have a little bit of a challenge of figuring out Can it do my thing? And if it is initially failing, am I barking up the wrong tree because it's just not that good at this sort of thing? Or, you know, do I need to prompt better or do I need to maybe bring examples? So I have a ah whole you know kind of series of presentations. where I'm trying to help people wrap their heads around this. And, you know, I think the general rule of thumb is like,
00:20:43
Speaker
If there is, first of all, if it's a routine task, you have a pretty good chance. A routine does not mean low value. There was great work out of Google, for example, where they have an AI that they did fine tune. They put a lot of work into this, um but nevertheless, they were able to get an AI to outperform human doctors as evaluated by other human doctors on the task of medical diagnosis.
00:21:06
Speaker
So that's obviously a very high value task. People train a long time for that. They get paid a lot for that, but it is routine. you know There is a pretty well established process of differential diagnosis. What questions you're supposed to ask, given everything we know now, you know what would be the most informative you know next data point to try to collect?
00:21:24
Speaker
So if it's routine, you have a pretty good chance of doing it. If it is not routine, but you can sort of make it routine by collecting your own examples, then that is really powerful. The original GPT-3 paper was called Large Language Models are Few Shot Learners. Few Shot Learners basically means they can learn from the examples that you provide at

Domain Expertise in AI Usage

00:21:43
Speaker
runtime. It's also super helpful to get the chain of thought down in explicit form. Another ah one of the big findings, this even predates GPT-4 a little bit, but has certainly been developed a lot over the last couple of years is just
00:21:56
Speaker
sort of let's think stepby step by step, give the model time to think, and that and significantly improves its performance on lots of things. If you have a task that is not represented well on the internet, but you can collect not just the inputs and outputs, but also the reasoning process that you want to follow to go from those inputs to outputs,
00:22:19
Speaker
then the the models are getting quite good at learning to mimic the reasoning process as you demonstrate it to them. And so that is like a really powerful capability as well. But there are still some of these things where you would, because there's just not a lot of pre-training data out there,
00:22:37
Speaker
you know, there's there's not a lot on the internet of like, how do I decide like where to click on this UI? We all know that intuitively, but nobody until now, you know, until it's become clear that it's become needed, nobody has really extensively documented the thought process of like, how am I going to think about where to click next on this software? And so there are some things like that, that the models have continued to struggle with. And where even like a handful of examples hasn't been enough to get them over the hump. So no simple answer to you know what they can and and can't do. It is interesting too that expertise is rewarded. you know I mentioned coding is kind of a killer app. I've been doing a lot of that lately. And it is clear to me that you know they're getting easier to use all the time. right Prompt engineering in the original sense of like
00:23:27
Speaker
ah This is the world's biggest autocomplete. How do I structure something where you know what comes next would be what I want? That was kind of classic. you know it's all that All these timescales are compressed, but classical. you know Two years ago, prompt engineering was setting things up to to you know work with the autocomplete paradigm.
00:23:46
Speaker
Yeah now now you can do very very simple prompts and and get amazing answers or with images for example. In the beginning I couldn't i couldn't get models to create interesting images, but when you have this trick where where you input something simple, the model unfolds that into something more complex in language and then gives it to And then creates ah an image based on that you can you can suddenly get something something great from a very simple prompt that is really the that is where you see a improvement i think when you when do you when you put the least effort in and you still gotta get some i get a great output.
00:24:21
Speaker
Is that also happening in programming? Can you describe what you want in simpler and simpler terms and and still get something could still get high quality code? It's definitely improved a lot, but I would say actually both in artistic pursuits and in software development and probably in most domains, expertise is still valuable in a couple of ways, not like AI specific prompting expertise so much, but domain expertise is really valuable first because they tend to kind of match your language. So if you can use technical terms, you know, it'll get them into a more technical mindset mindset. Mode, mode is probably a better word than mindset. And it's also really helpful to, because they, one challenge that they do have, especially with the RLF, RLHF and the RLAIF is that they are kind of sycophantic. They really want to please the user. This has, of course, like connections to big picture safety worries, ah you know, related to dishonesty.
00:25:21
Speaker
the The general you know process of training is model doesn't, well, there's stages, but the the final kind of refinement stage, the behavioral training stage is The model does something, it gets some sort of reward score from either a human or possibly another AI model, and then it updates so as to you know try to maximize the score. Obviously, we are not you know super reliable raters in the sense that what we say we like is is not like doesn't have perfect you know grounding to the truth.
00:25:53
Speaker
So one of the things that it seems to have picked up is that like flattery kind of works, you know, going with the user's premise is like a pretty good way to go. And it's not entirely clear if in some sense it knows better but is still doing that to flatter you or if it you know doesn't know better and it's just kind of following you along. But If you give the models a bad premise more often than not, it will still follow that premise and that can take you off into quite a

Pitfalls of AI Coaching

00:26:24
Speaker
unproductive direction. and i've I've certainly experienced that from time to time in programming where I have a misconception about how something
00:26:31
Speaker
ought to be done and you know I have to be very careful of it. If you're in a domain that you don't know much about, you really want to avoid asking leading questions. You really want to be like super neutral and try to orient yourself first before picking a direction to really act or move in because Even like fairly subtle hints that sometimes you may not even be super conscious of can lead the thing to agree with you or to you know to go in whatever kind of path you had intuited, even if that's not necessarily the right one. Interestingly, the ah the new O1 model from OpenAI does show flashes.
00:27:11
Speaker
of overcoming that. I've had some interesting experiences where I've said like, what do you think about this? I leave the door open a little bit. And it will sometimes say like, I understand what you're trying to do. I understand why you think this would be a good way to do it. But I would recommend a different way to do it. And that is definitely a new sort of behavior of that. I wouldn't say you can reliably count on even with the latest models.
00:27:32
Speaker
But it's like very rare in the GPT-4 and Cloud 3.5 models. So user kind of has to either have the expertise to know generally what level to go, and then the AI is like a great assistant. Or if you're using it to coach you, you have to be like very careful to be neutral or you know balanced and coax the the best practices out of it first, because it is very easy to kind of get off track.
00:27:59
Speaker
Do you think we're seeing more AI usage in something like coding or programming than we are in some highly regulated industry like medicine or law? Just because what you're really doing as a doctor is not only diagnosis but also you have the authority to prescribe a medicine, you have the you you can talk to insurance and and It's unclear to me what the you know can can an AI prescribe medicine or or approve something for

AI in Medicine vs. Coding

00:28:27
Speaker
for insurance. I'm not sure. But can you can an AI produce something as ah as a draft and that's then reviewed by a senior programmer? Well, that seems but seems like it could be done today. So so do you think we're seeing kind of different uptake by by different industry because of ah different legal restrictions?
00:28:47
Speaker
Yeah, I certainly I think we are seeing very differential adoption and software has been a, you know, an area of relatively fast adoption, although not total. I mean, you still talk to people fairly often who are like, I don't use it. You know, I tried it a while ago and it wasn't that great. I definitely would coach people to say like your first impressions, you know, if they weren't formed like yesterday are probably obsolete. Software also has the benefit of being not perfectly objectively evaluatable, but more objectively evaluatable. You can run the unit tests and you know there there is a way to like quickly execute the code that was written and gets you know if certainly if it like won't compile or if it errors outright, you know you can get that kind of feedback really quickly. So any area where that sort of tight iteration loop is possible has an advantage for
00:29:39
Speaker
Capabilities advancements and and adoption in terms of the medicine thing You know one of the things if I was gonna make a list of things that like hadn't happened that people might have expected it it would include for sure the backlash from the professions I would have predicted 18 months ago that we would have seen like a much more contentious reaction from doctors and lawyers, for example, on you know new policies, excluding these things, whatever. That really hasn't happened all that much. Maybe that's because the adoption hasn't quite happened either, or it hasn't been quite pushed on them in the same way, and so they're you know kind of blissfully going about their business without realizing just how far things have come.
00:30:18
Speaker
Maybe it's because you know a more optimistic take would be like most doctors are really like very mission driven and also maybe deeply overworked and kind of overwhelmed by paperwork and sort of welcome the help. I'm not really sure, but it does seem like those results from Google are pretty hard to argue. Also, you can just go and do it yourself. You know, if you have medical questions as an individual user, you will often meet the sort of, you know, you need to go talk to a doctor, I'm not your doctor. But there are very easy ways around that. And what I usually do is just say, I, you know, considering going to the doctor, I don't like to lie to the AIs. ah Sometimes you have to to get what you want. So sometimes I'll go as far as saying I'm preparing for a conversation with my doctor, I want to be as informed as possible. And that will basically always get it into the mode of just directly answering your questions. If I want to not technically lie, then I'll say like, I'm considering going to the doctor about this.
00:31:16
Speaker
I mean, I guess the big question is then, do you trust it? If if you ask your your preferred model, you know, is this is this dangerous or, you know, should I should i get this so this checked out and in depth or in more depth? Do you then trust it to, you know, you get a response saying, no, no this is fine, this this isn't dangerous. Do you trust it or do you still go to the doctor? Because that's also, you know, like the the like lacking the authority to to actually prescribe medicine. this seems like ah ah kind of hurdle to adoption that we're not sure whether to actually feel safe about something that we don't know much about until we have kind of a trusted ah human professional giving us that that reassurance. Yeah, my standard advice is both. you know you if If you have something serious, I would use both for sure. I would not
00:32:07
Speaker
rely entirely on an AI doctor, you have those issues of sycophantism and it you know going along with whatever your assumption is. So again, you really want to be super neutral about medical questions because if you give it a hint, it may follow that hint to your detriment. But at the same time, I wouldn't, I think it is really hard to argue with these you know differential diagnosis claims and increasingly there's a lot of anecdotes out there that show that AI can sometimes have you know insights and correct diagnoses that the medical establishment has not produced you know for individual patients. So I would say basically the best care you can get today would definitely involve a mix of human and AI
00:32:52
Speaker
medical advice, and certainly I wouldn't go all in on either one. and It probably does make sense, at least for the short term, to have a human in the loop. i mean this is you know A human in the loop is kind of the paradigm for a great many AI implementations these days. It's certainly the case most of the time in coding.
00:33:11
Speaker
It's like extremely rare for a software company to deploy code that was AI written, you know validated by whatever automated tests and and no human eyes were ever laid on it. I think that is not really happening much today. What is much more happening is the software engineer says, you know write this feature, the AI does it.
00:33:30
Speaker
It maybe works, maybe it doesn't. Usually you can go a lot faster than you could without it. And Google has just said, for example, that 25% of the code that they're deploying now was written by AI.

AI's Role in Software Development Companies

00:33:42
Speaker
But I would still assume that but essentially 100% of that code was read you know and and at least like understood by a human before going all the way through. so to have a human you know I have mixed feelings on somewhat libertarian grounds. or you know just like i don't I think sometimes we are a little too reluctant to let people make their own choices.
00:34:06
Speaker
But you know given where we are today, I do think it probably makes sense to to kind of still keep a human in the loop, you have those actual prescription decisions you know gated by a doctor. But I think you can have a much more productive... In the US, you may live in a you know in a much more idealized society where conversations with doctors are more leisurely and exploratory and holistic. But here,
00:34:29
Speaker
you know The appointments are usually pretty short and you know the the common experience is that the doctor is not actually looking at you, but like looking at the computer and you know kind of listening to you, but also like typing your answers into their system at the same time. So there is a ton of opportunity I think to just really take your time, really talk it out with the AI ahead of time or to or afterward. you know but i My general advice would be if you're going to go to a doctor, have that conversation in a quiet moment with the AI first.
00:35:02
Speaker
be as neutral as possible. And then you can go in with you know, a much and have a much more like thorough conversation with the human doctor because you can make sure that things that were raised are indeed considered by the human doctor when honestly, they might not otherwise be. Yeah, I mean, I will say I can attest to the kind of AI being very good at diagnosing different medical conditions. I had a a disease as a child, that is extremely rare. We have around perhaps 50 cases per year and in and in my native country of of six million people. So it's extremely rare and it's basically unknown. It was unknown to the doctors who ah originally tried to diagnose me, they misdiagnosed me and so on, but chat dbt got it right on the first try, gave me the Latin name of this disease. That's very little has been written about it. That was just impressive. And you know,
00:35:54
Speaker
had i had this model fifteen years ago or that would have been extremely useful so. Yeah i think a using both makes a lot of sense so using humans to check the eyes using the eyes to check the humans you probably get the best the best of both walls there.
00:36:10
Speaker
Okay, i want I want to get back to your point on models using computers because that seems to unlock a bunch of new kind of areas for us. It's kind of like developing a robot in humanoid form where it kind of fits into everything we're doing in the physical world. If an AI can use a desktop, it can kind of fit into all of the, ah you could call it legacy systems that we're we're using every day and we don't have to develop anything new for for kind of AIs too.
00:36:40
Speaker
to use our existing systems. So how good

AI's Capabilities in Digital Interfaces

00:36:43
Speaker
is this? How close are we to the dream for some of of you know telling your AI, order me you know research what the best X product is and and get the best one, ship it to my address and you know all of these things. Order me flight tickets, find the best restaurant, a task like that. I think we've just maybe hit a tipping point in the last week or so with the latest Claude model, they have described it as not that great yet, you know not yet capable of doing like really long horizon or you know major projects with lots of sequential steps because it does still make mistakes. And it has some, there are some things that it really struggles with that
00:37:26
Speaker
you know, are obviously very easy for people. So like drop down menus, for example, is one thing that apparently it still doesn't handle super well in certain instances, like scrolling, it can also kind of struggle with. So, you know, these are like very alien things where, again, they can be like superhuman, you know, you can go to any website in any language and make sense of that. But, you know, it can't like click a drop down effectively. it's It's a strange It's a strange thing. Here's a very simple question. How does it how does it use a mouse? How does it navigate? Does it does you use a mouse? Does it control a cursor like we do? And and if so, how? The same thing is kind of going to play out in the digital and the physical world. you know People a lot of times debate, like what kind of robots should we make? Should we make humanoid robots? Or should we like make other form factors? And a common argument for the humanoid robot form factor is, well, the world is designed for us.
00:38:22
Speaker
And you know if you put a robot out there that has wheels, for example, then it can't go upstairs. It's going to have a lot hard time doing you know the sort of wide range of things that we do. And our environment is just so you know just assumes that you can like go up a stair or a couple of stairs. that you know the The inability to do that becomes a ah major limiting factor for the robot robot. Basically, the same thing is happening on the web or on you know on the computer where people have tried a lot of different form factors over the last 18 months where people are like, maybe I can
00:38:56
Speaker
You know, instead of having if the thing can't see that well, maybe I can like take the HTML of a website and parse that and strip it down and give it, you know, some sort of ID and then it can say like click on this button. That's how a lot of automated software testing already works. So there is some reason to think that that could have been the path and it sort of, you know, sort of worked, but not again, not super well. The latest cloud one basically just sees. They said that they taught it to count pixels.
00:39:24
Speaker
Not exactly clear what counting pixels means. i I don't think that there are like actual, you know, integer counts going on in the background, but it does. My first test I ran with this was just to but could put a link in show notes, whatever, but there was a.
00:39:42
Speaker
a REPL from the company, Replit, which basically makes little virtual machine coding environments in the cloud that you can just spin up and throw away and it's all through the browser. So it's super convenient. They put out a template where you can just go try cloud computer use on whatever you want to try it. So you just go fork their thing, now you have your own thing, customize it however you want. The basic use case is just tell it what you want it to do. So I told it, okay, go use my product Waymark and see if you can make a video for a small business.
00:40:12
Speaker
And sure enough, it was able to do that. And it does basically use, you know, a mouse in ish, you know, not not exactly the same way. Obviously, it doesn't have a hand. You know, it's the the church it's not like, you know, creating a trajectory around the screen in the way that we tend to do. But it will literally just be like cursor, you know, pixel point and like click on a pixel point. And it just gives you two numbers like the X and Y coordinates.
00:40:41
Speaker
click on that spot. And it seems to be quite good at that in my testing. It was very you know typical of anthropic. Of course, they take the. safety and fallout you know concerns here seriously. Their analysis of this was better that we kind of open AI like actually, better that we release this early while it's still not super strong than like wait and you know release it when it is potentially transformative. So here's a weak version. you know It's basically their position. They've taken a lot of pains to
00:41:13
Speaker
Keep it from like creating new accounts to keep it from logging in as you i told it like you're in my account don't worry i already logged in for you and it was like. Sorry i don't use you know your accounts that's not my thing and so that i eventually had to say i again i don't like to lie to the eyes but i.
00:41:30
Speaker
In this case, I did say, okay, fine, I've logged out. Now you can just use the the public version. In fact, I had not logged out, but it believed me that I had logged out and went ahead and did it. you know if If it had gone and clicked on the account tab, it would have seen the, wait a second, you lied to me, but it didn't. It just took my word for it and went ahead and advanced through the process. But it was able to you know click on all the buttons that it needed to click on. It was able to, our product is an AI product, so interestingly,
00:41:55
Speaker
It prompted the ai product you know it's it it had to and it seemed to have no problem with that generating a prompt to put into another ai product fine. So there it's like you know it's just issues these pretty simple commands like.
00:42:09
Speaker
click here. Now it's, you know, as you can imagine, clicking into a text box in a, in a browser. Now that thing is highlighted. And then the next command is like text and then the value of text. And it just inserts the text right into the text box. And you know, now next one is click on the submit and it doesn't have that many different actions, it's really just kind of clicking and entering text. And I think it's like there might be a scroll in there, but is but they did say that doesn't work super well. But it's like a very small number of primitive you know or sort of building block actions that it can take that navigate it around the web. It's not looking through some weird abstracted HTML thing. It's not, you know,
00:42:51
Speaker
doing any sort of special protocol. It'll be interesting to see how much the web but starts to bend to the agents. You can imagine you know what like the new SEO or people have, of course, been like trying to gain Google forever to get to the top of the search results. You can imagine a version of that that's like, okay, Claude can't use so dropdowns. like we need to make We need to remake our site with no dropdowns.
00:43:11
Speaker
I think that'll be a very, as most of these SEO things are, I think that'll be a very fleeting advantage, possibly worth pursuing for some people in some niches you know to get a distribution edge for a time. I wouldn't found any startups on the premise that Claude will never you know be able to handle a dropdown. So it seems like it'll be pretty good pretty soon.
00:43:31
Speaker
its It's good to hear that the the models can't use kind of cursors in a human way, just because that's information that's often used to to when you when you have to sign in into websites and and prove that you're that you're human, kind of the way your cursor moves on the screen or moves to the the box that you have to check to to to say that you're human. so did do Have you tried to break the restrictions on signing up for new accounts or have you have you played around with kind of breaking the model and in that way?
00:44:00
Speaker
Not really too much. I've just lied to it and got, you know, kind of got past that. I think that has been done. You know, the the the kind of most prolific jail breaker, if you want to follow a jail breaker today, would be this guy, Pliny, the elder, who is constantly but posting his latest jail breaks on on Twitter.
00:44:20
Speaker
And very often within whatever, 40 minutes of the model, his latest one, he apparently got access to the so open AI has released 01. There's the 01 series of models. They have released an 01 mini and an 01 preview, but not actually the 01 proper yet.
00:44:41
Speaker
and It's not entirely clear what the difference is, although they've shown some you know benchmark results where it's better, so no no shock there. It may also have the multimodality in full. The O1 preview doesn't have multimodality. It doesn't have a lot of the features that people have become accustomed to for application building purposes. and This guy, Pliny, went and just like changed a URL parameter somewhere. Apparently, somebody had left something unsecure within the open AI environment. And he was able to get to the O1 model unreleased. And even jail broke that before they had even ah put it out and meant for anyone outside of the you know the limited testing to be using it. So. Data broken in negative time. Multiple yeah multiple levels of jail breaking in that case.
00:45:27
Speaker
Definitely ah an account we're following. Okay, but so how much are people using AI as as kind of private citizens, as as businesses? What do we know about AI usage? Definitely one of the fastest adoption curves of all time. you know You'll see these graphs of how long it took you know for a certain percentage of people to have a refrigerator you know and a microwave and whatever.

Rapid Adoption of AI Technology

00:45:54
Speaker
And you know these curves in general are getting more compressed. like The time that it took for cell phones you know to go super mainstream was definitely faster than like refrigerators and microwaves.
00:46:04
Speaker
But i seems to be even faster than cell phones in terms of the percentage of the population that has adopted you know since a certain kind of start date so.
00:46:17
Speaker
it's It's going super fast. Revenue is spiking across all the you know the leading platforms. OpenAI is reportedly going to do $3.7 billion in revenue this year, and they're projecting more than $11 billion in revenue next year. This is up from like you know tens of millions just a couple of years ago, for for and for like an anecdotal way to think about that in Early 2022, as a founder of a 20-person AI video, not AI, it wasn't AI video at the time, it was just DIY video software. Now it's done for you by AI software. But we recognized the potential of GPT-3 in 2021. In early 2022, we signed up for what OpenAI called the innovator's license.
00:47:04
Speaker
And this was basically just pay them a few thousand dollars a month. You get a call with somebody who would coach you on how to use the technology. And the biggest reason we signed up was early access to new stuff. And that was just a couple thousand dollars a month. I think they had reportedly like 20 million ish a year in revenue at that time. And, you know, we were no we were not paying them much, but that was all it took to get that kind of access. So now you're you know talking about literally on the order of 1000x revenue growth over a period of basically three years. That is 10x every year, three years in a row. like That's really something the world has not often seen. and You got to keep in mind too, that's also at the same time as they've had these massive price drops, which we talked about earlier. so
00:47:53
Speaker
he's got 1000x revenue growth at the same time that the quality adjusted price has come down 1000x, which roughly means that usage has to be up something like a million x. And do you measure that in tokens? Do you measure that in number of conversations? Do you measure that in like you know tasks successfully completed? I mean, there's a lot of different ways that you could think about that. But if you just took open AI revenue at the $3 billion number and said, okay, how many tokens would that buy? Let's just take GPT-40 as their main line. Now they have a one which is a little bit you know more capable, but call that their main line frontier model. That translates to one quadrillion tokens per year that would be used across all of humanity.
00:48:44
Speaker
And that is 100,000 tokens per human over the course of a year. So, you know, that's, let's say two, you know, divide by 500, you're looking at 2000 pages of text per user or per per human on earth, you know, man, woman, child, global population. So it's like like a couple substantive back and forths with an AI per week, per every single person on earth.
00:49:12
Speaker
Obviously that is not evenly distributed, but if it were evenly distributed, that's like roughly what it would look like. So there must be some there must be some crazy power users out there. Yeah, for sure. And I'm definitely you know one of those. One challenge that I've kind of contemplated for myself and and put out to others a little bit is, how can you spend $1 a day on the Gemini Flash model? That's from Google. It's there it's kind of their equivalent of a GPT-40 mini or a clog like Haiku. It's the smaller, faster, cheaper, but still, you know these days, like quite good. like Unbelievably affordable intelligence.
00:49:49
Speaker
People talk about intelligence too cheap to meter. Like this is the point is like it's almost there with these small models because what would it take to spend a dollar a day with the flash model? It takes between 12 and 13 million input tokens just to spend one dollar. It's like seven cents, seven and a half cents per million input tokens. So i when I literally exported my entire email history. Every thread I've responded to for the last five years and then put all that in to have it processed and try to summarize There's a bit of that is more than it can handle in its context window. So I had to you know break that up various ways, try to summarizing different ways, try to you know just basically experimenting with all sorts of things. But that's like what it takes to spend one dollar on a Gemini flash kind of model. So yeah know there's a lot is when I say like 100000 tokens per user, like that's
00:50:40
Speaker
At the higher price point, in reality, you have you know huge distribution where some people are trying to jam you know millions a day and see if they can spend a dollar a day. And of course, you know many people are you know still oblivious. But you know there's a lot of different data points, I think, that kind of all tell the same story. Anthropik has got spiking revenue. ah Google has said that their data center business is now starting to grow in a material way due to AI. The Gemini API, they just said in their recent earnings call, has grown 14 X in terms of just the number of API calls that are being made by developers over the last six months and and a lot of surveys, you know, basically tell the same thing. You when you talk to individual contributors, you know, there's a it seems like we're in kind of a steep part of an S curve or like, you know, it's all these surveys can kind of contradict. But it seems like we've gone from one third of people using a semi regularly, at least to like half or maybe two thirds on some of the latest survey results.
00:51:40
Speaker
That's not to say that people are in there every day, all day, but like weekly usage has kind of gone from the third to two thirds. Again, at least in some surveys of individual contributors and then leadership, you know, executive surveys are are telling a similar story where people are like, we definitely recognize this as a priority.
00:51:59
Speaker
a lot of companies at a high level are still in kind of a strategic phase where they're trying to figure out exactly what they should be doing. But meanwhile, the leaders also report that they are like using it quite a bit. And in some surveys, you even see that leadership has a higher individual adoption rate than individual contributors at those companies. But there's a lot of speculation about like what exactly is going on there.
00:52:22
Speaker
are the individual contributors telling the truth. A lot of people are are kind of thinking you know maybe maybe much like computers in general. you know we we sort of Of course, the old joke or observation is that you see computers everywhere but the productivity statistics.
00:52:36
Speaker
And one possible explanation for that is that people are now like spending half their day on social media while at work. And so maybe what is measured is like not changing, but what people are actually doing in front of their computer is just like far more shifted toward entertainment, leisure, breaks, what have you.
00:52:57
Speaker
so I think that's like hard to get a really good handle on. Are people hiding their use? Are they you know are they are they are they afraid that if they say that they are and using AI and they can do stuff more efficiently, that like that will just be sort of, you know they'll just get more work or like, they you know they well, who knows you know how people are feeling with respect to their employers and how much trust they have in what will happen if they're candid about their AI use. I think NLW, creator of the AI Daily Breakdown,
00:53:28
Speaker
has been a really good voice on on this topic. you know He he has covers the service a lot and has, I think, very good analysis on like where we are in the in the usage curve.
00:53:39
Speaker
Where do you see open

Meta's Llama 3 Model vs. GPT-4

00:53:41
Speaker
source? We have quite recently seen the the latest version version of Metas, Llama 3, and that seems quite advanced. Would you would you say it's on par with TPT-4 in its original version? Is it on par with TPT-4 as it exists now? How is the quality ah of Llama 3?
00:54:00
Speaker
Getting very good. It's definitely come a long, long way. I would say it is the latest llama 3.2, which is, you know, kind of followed a similar trajectory to like GPT four to four. Oh, you know, in the middle, there was also a GPT four turbo. Basically, you have these kind of giant pre-training runs where.
00:54:21
Speaker
you know, all the data is amassed, all the computers amassed, you get this model out, it's like pretty capable, it can do a lot of stuff. And then the next phases are these behavioral refinements, you know, augmentation of capabilities in various ways. And the Llama 3 series of models has gone through that later series of of enhancements pretty quickly. It's only been, I don't know, we'll recall the exact date, but it hasn't been that many months since Llama 3 0.0 was released, and now we're on 3.2. And it's definitely better than the original GPT-4. It has the longer context. It has the multi-modal. If you could only choose a llama 3.2 or the original GPT-4, I would say almost everybody would choose the the new llama and the leaderboards, I think.
00:55:11
Speaker
you know validate all that as well. But it's still a shade behind the best proprietary models. GPT-40 is still a bit better. COD 3.5 is definitely better. Gemini is generally considered to be a bit better.
00:55:28
Speaker
But it's not far off. And of course, you have, you know, as a user of it or as a developer that's trying to build something with it, you, of course, have access to the weights, which means that you can fine tune it in ways that increasingly the platform providers also do support. OpenAI now does have fine tuning of GPT 4.0, including with images. Gemini only supports fine tuning of their flash model and Anthropic, as far as I know, only supports fine tuning of their Haiku model, which is their smallest at a retail level. Now, they may have you know for a strategic partnership, for the right opportunity, you know I think you can go get a custom model, definitely from Anthropic. I'm not actually so sure how much Google is doing like custom models for large enterprises because at the scale they're operating at, you know is that even worth it? it's They're probably a little bit more selective.
00:56:15
Speaker
but with With llama, you can you know put it on your own servers and and do your own thing. It's not easy to serve. The the the biggest and best llama is a 405 billion parameter model. That is more than twice the size of the original GPT-3. That does not fit on you know even the largest single GPU. So you'd have to have at least kind of a small cluster or to just be painfully slow to execute.
00:56:44
Speaker
So it's not super accessible, even though it's open, but it is definitely, you know, the the gap has has closed. Though I do think we're i just saw a really good analysis. And I know you've had Tamay from Epoch on the show. They do, of course, some of the best high level analysis of like,
00:57:03
Speaker
big picture macro trends. They just came out with one that said that open source is 15 months ish behind the private frontier. And that would be pretty consistent with GPT-4 released in March of last year, Llama 3 released kind of late spring, early summer, sometime this year. And you know soon after that, we see the 01 model. So my my general mental model of the relationship between closed source you know proprietary API-based capabilities and open source capabilities
00:57:36
Speaker
is that we will probably see sort of a sawtooth kind of thing where the frontier as the public knows it from the open AIs and entropics advances in these step changes where it's like, okay, we're now releasing this new thing. Here's a blog post about it. Here's the new benchmarks. you know Here's the new API capabilities, whatever. And everybody kind of is like, oh my God, this is amazing. And then figures out how to understand that and what they can do with it. And meanwhile, the open source advances much more incrementally. It can have some punctuated moments too, like the llama 3 release is definitely a big one. But then once that's out there, you've got a million people doing a million different things with it. And so it it tends to be a more gradual, like, oh, open source got a little bit better in this way, in this way, in this way, in this way. And it's happening all the time.
00:58:20
Speaker
So I sort of expect like a closing of the gap and then a widening of the gap, possibly for kind of strategic reasons, because I definitely think the frontier closed source developers are to some extent making their deployment decisions in light of what is out there.
00:58:39
Speaker
in an open source way. Say more about that. That's interesting. I don't have this on like, this is analysis, not, you know, not reporting, but you see things, for example, like OpenAI recently released the real time voice API and somebody went and jail broke that and showed that you can do like all sorts of phone scams with the real time voice API. And so you think, geez, that's you know kind of crazy. Why didn't they take time to get that under control? And I think one answer is the open source stuff had gotten to the point where you could do that with open source models as well. And so they sort of both for business reasons, they kind of want to stay ahead. you know If the
00:59:19
Speaker
If the open source alternative is on par, a lot of developers have a bias toward picking it. So if they start to feel like, yeah, we're not becoming the obvious choice, then maybe it's you know that's a good spur to release the next level of capability up that again makes them the obvious choice. And then from a sort of safety or so you know societal impact standpoint, if they can tell themselves a story that Well,

Challenges in Open Source AI Deployment

00:59:43
Speaker
they can do it. Anybody who wants to do this could do it. Whether they can do it with us or you or not, then it becomes less of a concern. you know if there's if there's ten and and In the case of these voice scam things, I personally went out and tried a bunch of AI calling agent products where basically you can say, here's a phone number.
01:00:01
Speaker
call this person and you know do this stuff. We're talking on election day, so I can now say without fear of popularizing a negative use case that for most of 2024, you could go to several of these AI calling companies.
01:00:16
Speaker
clone a voice with no oversight or control. and I personally cloned Trump, Biden, and Taylor Swift on multiple different platforms. and that This usually just takes minutes, by the way. this is like and In some cases, it was even free. so In some cases, I didn't even have to put a credit card on file with the company that I was using. If I had scaled it, I would have had to, but just to do a test.
01:00:38
Speaker
get in there, you know dropped I just went and grabbed audio of Biden, Trump, Taylor Swift off the internet, dropped it in. um It would clone the voice. Most of the time without really any moderation, you'd check a box saying like, you know yes, I duly you know license this or it's my voice or whatever, but most of them don't check.
01:00:55
Speaker
And then you have your clone voice and then you just prompt the thing. And again, I've personally done this and recorded it. You know, call this number. You are Donald Trump. Explain, you know, why you now support open borders. And it would just do that and have that conversation with people stilted. You know, I wouldn't say timing wise, you know, again, on the list of things that didn't quite happen, I would put like major deep fake election disruption on the list of things that definitely we were watching for and didn't quite happen.
01:01:24
Speaker
Possibly that's just because they just weren't quite there to the point where it was really believable, but you could do it. you know and and The companies basically had no controls in place. I think at some point, OpenAI is to the degree that they are deciding what to launch based on what impact it might have. you know By the time they see a handful of these things out there in the public with Zero controls they're kind of like well, maybe we should maybe maybe we can launch without fear of like opening up a total you know new can of worms and maybe we should because maybe we take some of the air out of the sales of some of these less responsible companies I do think they have a
01:02:00
Speaker
a sense that like better people come use our stuff where there's some controls. yeah Maybe these other companies like still have unconstrained stuff, but if like all the positive use is coming toward us and it takes some of the business momentum out of some of these other less responsible actors, then maybe that's good. You should be able to do some voice recognition on the audio input for for a product like you just mentioned. it It should be possible to do at least the top 100 or top 1,000 most famous people and and you know get that out of the way.
01:02:30
Speaker
Yeah, there is tech for that. Google has developed some and Microsoft has developed some as well. I think it's primarily originally designed for like validating your voice when you call your bank and that kind of thing.

Responsibilities of AI Developers

01:02:42
Speaker
But yeah, you could use some of these APIs, but the people, the developers at the application layer are just often not thinking that way. We can have and we you know we will continue to have many debates around are the frontier developers acting responsibly?
01:02:59
Speaker
And I think the jury's like definitely out on that question. But they're not acting totally negligently. We can say that. you know they are doing They are trying to do a bunch of good things, even if that's ultimately not going to be enough. The application developers, in contrast, very often are like three people you know that maybe like started this as a weekend project and went to a hackathon. And then they were like, hey, this thing actually kind of works. you know What if we turn this into an app? but we We could do a startup.
01:03:27
Speaker
and they're Everything is moving fast. you know They're trying to make their thing work. They're trying to get attention. There's definitely a trend at the application layer of like, and also to some extent the open source research layer of just like yellowing things because people sort of feel like I'm not going to be able to build a sustainable business necessarily here, but I can like be a part of history. you know i can I just want to be a part of this global phenomenon and I want to make my contribution to it before somebody else does and I've you know and I'm irrelevant.
01:03:59
Speaker
or earn and earn money for eight months before the next you know generation of models. Before mass unemployment kicks in. So yeah, I think there's a lot of that going on. And I'm pretty sympathetic to the developers in the sense that I don't think it's entirely reasonable to expect that they will all have you know the even the awareness to like think ahead and and be on top of all these concerns, I would like it if they were you know a little more thoughtful and especially if you're doing like agent type companies that can act on unsuspecting people. i mean this These calling agents, they do not disclose that they're AI. They are basically like exactly what you would worry that they might be. so you know It's not great, but I am sympathetic, in most cases at least, where I feel like people are just trying to build something that works, they're trying to do something cool,
01:04:50
Speaker
and they're like getting carried away. So I think that you know going back to the big developers and like how that feeds into their deployment strategies, it seems like at a certain point, they feel that the downside risk is not removed, but sort of removed from there from their perspective. It's it's like much diminished on their list of pros and cons because it's already out there.
01:05:15
Speaker
Do you expect meta to follow along with ah kind of the frontier ADI corporations in in doing the next generation training runs? Because if not, then I i mean i mean and i don't see how open source kind of can keep up with the next layer level models. it Open source seems very dependent on meta investing in these training runs. Do you think it still makes sense in financial terms for the next generation and the next generation after that
01:05:46
Speaker
well i think your premise is a good one i do think open source and it's viability when it comes to or it's it's like it's ability to keep up with the frontier ish even fit's somewhat behind but to keep up in any you know meaningful sense like I do think that comes down to a very small number of players and meta is definitely the number one by far. You could maybe also point to like Mistral, but you know, meta has a lot more resources than Mistral. And the only way that gap would be close is if Mistral became like a European champion and and had, you know, massive kind of government level subsidies getting plowed into it.

Meta's AI Strategy and Open Source Commitment

01:06:22
Speaker
So it's yeah, it's a small number of of companies that are candidates to continue to open source. I think it's really hard to to anticipate. i mean it's a you know There's not that many live players in the game.
01:06:35
Speaker
And what zucker it's basically going to be Zuckerberg's choice for at least probably one more generation. And then after that, it might also be like the government's choice. But it's very hard to say. you know i mean I think he's signaled that he is not super dogmatic about the future of open sourcing.
01:06:53
Speaker
He's kind of said, like, at this point, it seems to make sense. You know, we haven't seen any like catastrophic harms from GPT for. And so, you know, that I think that that's sort of the mirror image or the other side of the coin of the proprietary developers looking at open source and saying, well, if this stuff is already out there, then we can release our API from a Zuckerberg perspective. You can because they are a little bit behind on the development timeline. They can look at GPT for and say,
01:07:21
Speaker
well, this thing's been out there for this long, it's been jailbroken this many ways, and society is looking okay. Therefore, we can pretty safely infer that you if we release our comparable model, that it'll probably be fine. It will be interesting to see how long that continues, and I i don't expect them to pull a you know comeback where they would actually take the frontier. They say they want to do that, but I don't see that happening. It could happen. I mean, they certainly have the resources. you know They've just announced that they're training llama four on a hundred thousand H one hundreds, which is massive and gets you to GPT for scale in terms of just total flop count in like days to a couple of weeks, depending on your assumptions. So in the assumptions there are around like, first of all, how big exactly was GPT for? We don't know. But, you know,
01:08:13
Speaker
Common guesses are like low 10 to the 25 range. There's also questions around the precision with which the training is done, just like literally how many digits in the weights. You know you can have super long numbers, 64 digit numbers. You can have 32 digit numbers, 16 digit numbers, eight digit numbers.
01:08:33
Speaker
My best understanding is that you don't need the full 64. It's either like the 16 or the 32 digit numbers seem to be what's used in training. At inference time, you can shift that down to eight or even four or if people have even gone to like even a little less than four. ah But you don't seem to, it that seems to be too rough for the training process. When you're just executing, you seem to be able to get by most of the time with this lower precision. But when you're actually tweaking all the weights, you need like the longer precision. so Anyway, depending on where they end up falling on the precision and what through you know what kind of utilization they're able to achieve, because of course the servers have like a theoretical max of this is how you know what the spec says. This is like ideal conditions, you know how many operations can it do, but then you have all sorts of you know complications around that, including like what happens when you know one of your servers fails. you know Does that bring the whole cluster down? There's been a lot of work, of course, to
01:09:29
Speaker
make the clusters robust to the intermittent failures when you have a hundred thousand gps like they're gonna fail from time to time so making the things more robust keeping the uptime high and also just like making sure that the the data is being like fed in in an effective way.
01:09:46
Speaker
Generally speaking, people have only historically achieved like 30 to 40% of max theoretical throughput in these large training runs. And that is also probably going to climb, you know, and maybe has climbed and maybe we'll start to see 50, 60, whatever. But anyway, depending on your assumptions there, their latest cluster gets you to GPT four time in days to weeks. And, you know, that puts them in the running to maybe do some sort of frontier model where they might actually have a genuinely new level of capability that we haven't seen from anyone. Um, but that would, that hasn't happened yet. We have not seen meta come out with a true frontier language model, you know, that has a ah capability that nobody else has previously debuted. They could, if they do, then they would have a very different analysis where they would maybe be thinking, geez, you know.
01:10:34
Speaker
We can't point to open AI has been in production for a year and and society hasn't collapsed. Now we really own this in a different way. um So if that were to happen, I wouldn't be surprised if they start to take a more gradual approach, you know, maybe have a API only phase for a while or, you know, maybe even at some point never release the weights. Of course, there's also the question of like, could the technology regime change?
01:11:01
Speaker
In today's world, you fine tune a model to behave a certain way, and that includes refusal on a lot of different tasks. So you know how do I hotwire a car? How do I make methamphetamine? whatever You go ask these questions to chat GBT and to Claude and to Gemini, and even to Llama as it's released, and they'll all refuse. But you can remove that refusal behavior with relatively little incremental tuning.
01:11:31
Speaker
Like in some cases, extremely little. I think it was far AI that showed that with as few as like 10 examples in some cases. And even unintentionally, like let's just say you're a developer and you you you want to dial in the model on a particular task. And so you bring like a small number of tasks.
01:11:48
Speaker
or examples of you know the tasks that you're interested in and do incremental fine tuning on that. But there's no refusal demonstrated at that final fine tuning process step. The refusal sometimes just disappears entirely. And you didn't even mean to do that. You were just trying to get it to do one thing and you kind of ended up sanding off that refusal behavior and exposing kind of everything again.
01:12:11
Speaker
That is a huge challenge for the sort of control of open source models. Certainly the likes of meta AI are aware of it. And it's going to be an interesting question to see, is there any solution to that? We have not seen one yet.
01:12:29
Speaker
that is particularly close to like a production grade solution, you know nothing that like works really well without major compromises. The closest thing I've seen to that has been from ah Dan Hendricks and co-authors. They put out a ah paper on a technique that they call tamper resistant fine tuning. And it has significant cost both in terms of the compute that it takes to do the tamper resistant fine tuning. And also it's in the eye of the beholder, but I would say non-trivial cost in terms of performance. Like what you get out of it is still useful, but definitely takes a step down in terms of its performance. But with that, they're able to create a model where it is genuinely hard to
01:13:13
Speaker
remove the refusal behavior. They do like an adversarial training approach where they basically say, all right, we're going to simulate the fine tuning to get this thing to do a bad thing. And then we're going to do our but our actual training is going to be against that. so we're you know It's like nested optimization where the inner optimization is make the thing do the bad thing and then the outer optimization. is okay Given all the ways that we're seeing that happen, optimize the other direction so it becomes hard to do that fine tuning. That works at all is
01:13:45
Speaker
really interesting and like could be the basis for future open source models to be responsibly released. and it It at least would get you to the point where you wouldn't have these things being exposed accidentally. and you would you know You would take real resources, real engineering, real compute to ah expose them, which certainly means like you know random developers wouldn't have to worry about that as much. But there's still a lot of issues to be solved, including the performance penalty for a technique like that.
01:14:15
Speaker
I would also just have ah a bias or or a heuristic ah in in in the direction of anything that can be jailbroken will be jailbroken. And when you when you put out something under an open source license, well, there are no take backs. And so I ah really hope Meta and then Zuckerberg and so on consider what but they're releasing and and that I hope they're not kind of hung up on their previous commitments to open source as ah as a principle ah in and in a in a dogmatic way. I hope they take it kind of case by case. Dogma and ideology in general, I don't think will serve us well in the AI era. you know it's we've We've definitely got to be very
01:15:01
Speaker
vigilant for new information. And i to give Zuckerberg credit, I mean, he basically has signaled that more so than the head of A.I. at Meta Yann LeCun, who has not really signaled that nearly as much. um But I think Zuckerberg, you know, calls the shots there and he has has indicated us a reasonableness at at least, which is has been good to see.
01:15:22
Speaker
You have an analysis of the kind of relationships between the AGI corporations, Anthropic, OpenAI, DeepMind, and the big tech companies, Microsoft, ah Amazon, at Google, Apple, and so on. there are so So each of the AGI corporations, they have their own kind of backer that backs them financially and with kind of data centers.
01:15:45
Speaker
But these relationships are quite contentious, I think. maybe Maybe tell us about how are these relationships going, who's benefiting, and also in particular, do the people in charge at the big tech companies buy into the vision and the ambition of the leaders of the ADI corporations? Yeah, it's I mean, this is a really dynamic space.
01:16:10
Speaker
That's one way of saying it. Yeah, I think, you know of course, the general prevailing paradigm is the scaling law paradigm. And to date, it seems like the scaling laws have held.
01:16:23
Speaker
They could flatten, but it seems like the people at the frontier developers do not expect that. And certainly the check writers seem to be at whether they are full believers or not is maybe a little less relevant than how they see the competitive dynamic that they are enmeshed in.
01:16:44
Speaker
And they basically all seem to think that the chance of transformative AI in one way or another is real enough that they are willing to spend tens of billions of dollars to try to own enough infrastructure to

Tech Giants' Investment in AI Infrastructure

01:16:59
Speaker
be a real player in that game if and when that happens. And I think increasingly they they sort of think that it will happen, but they are kind of like, you know, the big players here are pretty obvious household names, right? You've got Microsoft, you've got Google. Microsoft originally partnered with OpenAI i in a sort of exclusive way. That exclusivity has started to break down in both directions. Google has DeepMind. Anthropik has partnered with both Google and AWS. Meta is kind of doing its own thing, although interestingly, their llama models also now have started to launch with partners, including AWS, that is doing the you know the
01:17:37
Speaker
the actual inference serving and making it available via APIs and whatever. so Basically, I'd say you have four with maybe a fifth that you could you could debate you know how far down you want this list to go, but the the clear four are OpenAI, DeepMind, Anthropic, Meta, and then XAI is like definitely lurking and could you know enter the chat in a meaningful way.
01:18:00
Speaker
potentially quite soon because they also have a giant cluster you know that they've built and they they do know how to put infrastructure together quickly as you would expect from an Elon company. But yeah, they're all like, okay, we can't be left behind. I think Zuckerberg has articulated this probably the most clearly where he's basically been like,
01:18:17
Speaker
you know We know that we never want to be left behind of the next frontier technology, and we think that you know AI is the next thing. and it's you know It looks like it's going to cost tens of billions of dollars to build up the infrastructure for it, and we have it, so we're going to spend it. and That's basically the same analysis from you know Google's probably even been there longer than before. They've even designed their own chips. met Of course, met all the big players are now also getting into the proprietary chip game because they're tired of paying such you know insane premium to Nvidia, which they do continue to do, but they're also like hedging. So everybody's both like partnering and hedging. It's a very sort of muddled situation. If you had
01:18:57
Speaker
you know So many of these things, if you'd gone back not that far in time and told people what was going to happen, they would think you're crazy. The idea that OpenAI could have a partnership with both Microsoft and Apple at the same time is you know definitely a ah really good example of that. and Then on the flip side of that, you have Microsoft trying to diversify too. One of their flagship AI products has been GitHub co-pilot which is a coding assistant and It was always powered by open AI models until just you know, I think the last week they announced that they're now making Anthropic models available there as well and Google models. So now you have literal Google models Powering Microsoft products. This is you know, quite quite strange to say the least so I think we definitely need to
01:19:47
Speaker
watch this space call closely, I think they there's just a lot of strategic uncertainty. There's a lot of confidence to that infrastructure is going to be really important. right yeah like you Nobody sees a path to, even though there have been lots of efficiency gains and prices come down 1000X, whatever. you know if If demand has gone a million X while prices come down 1000X, then you're going to need a lot of servers to do this stuff. That seems to be like fundamental ish physics. you know whether We can still have more efficiency gains to come. but There's no substitute for compute, right? there's no We're not going to engineer our way out of needing to actually compute things. So they all want to own compute and they're all investing super heavily in that in multiple ways, buying as much as they can, you know building data centers, seeing the big trend toward nuclear power from multiple of these you know companies that Microsoft just announced a deal. They're going to reopen a closed plan at Three Mile Island. There's a lot of you know talk about how are we going to clear the sort of
01:20:47
Speaker
just the red tape, you know the the red tape of of permitting for new energy production because they're going to need it. So they're they're definitely anticipating this you know continues to scale. They're trying to build out for it. But at the model layer, it seems like there's much less certainty. like do we Do we think that there's going to be one model to rule them all? Do we want to sort of play you know a more neutral role. And again, they seem to be all kind of doing it all. Amazon has kind of been the most like in their Amazon way, right? they' They're the everything store. So they want their AWS platform to have all the models because they want you to choose AWS and they don't really care, at least so far, they don't really care whether you use their Q model, which is not considered to be a frontier model or
01:21:32
Speaker
an anthropic model or you know whatever. And they'd probably partner with Google too if they could get that done. Microsoft you know has just done a similar thing. They're trying to develop their own models as well. um Apple is you know doing a similar thing where they they're bringing in as many different frontier providers as they can. Google as well. right So these companies don't like traditionally... You've got um Android and and iPhone have been you know at war forever, but now you have iPhones bringing in multiple different providers of Model. ah But again, they're also trying to do their own stuff in house. So everybody's kind of trying to partner license, you know, make sure they have everything that their customers need, but also make sure they're not falling behind in the in the frontier know how or if they are, full if they're behind, they're trying to catch up.
01:22:17
Speaker
But yeah, it's it's a very strange time.

AI Product Stickiness and Safety Concerns

01:22:19
Speaker
I think it's good in the sense that I have worried in the past. You know, many people, of course, I'm not alone in this, but many people have worried about the race to the bottom in terms of safety precautions because of potential winner take all dynamics in the market.
01:22:37
Speaker
And this definitely takes the edge off of that a bit. um It is really easy to switch from one AI product to another. There is not a lot of stickiness yet that could change. you know If we start to see things like effective long-term memory,
01:22:53
Speaker
that you can't easily port from one system to another or you know ongoing personalization, which is sort of the same thing, but it could be accomplished in multiple different ways. If you start to see a relationship, actually it was a woman named Eugenia, who's the CEO of a company called Replica, which is like your AI friend. She kind of turned me on to this originally where she said, you know the if you meet if you have a friend and you meet another person who's smarter than your friend, you don't abandon your friend to just spend all your time with this new, smarter person.
01:23:21
Speaker
And she said, you know, I think that the actual stickiness and and business defensibility for a long term is going to be the relationship that you have with it. And the you know, the fact that you can't go easily recreate that shared history with another new ai product, even if it is a bit smarter.
01:23:40
Speaker
I thought that was like super interesting and insightful, but that doesn't really exist yet. and it It only goes so far. right i mean if If I want to solve some problem, I'll probably go to a the most advanced model that's not my friend, even though I've been friends, let's say, with Claude for four years now. I just want to solve my problem if's if it's critical. and so I could see that happening and in and of entertainment use cases, but if i really want to if if something is important to me,
01:24:07
Speaker
I need to review an important contract. But then I i choose the most capable model, I think. Yeah, and maybe both too. I mean, I think, you know, like you have multiple... Zvi Mashowitz is also a great analyst of all this stuff. and And he also has shaped my thinking on this a bit, where I told him, you know, I'm kind of worried that like, it sort of feels like a winner-take-all market because if somebody advances, you know, capability, why doesn't everybody just switch to that? It's really easy to switch. And he said,
01:24:36
Speaker
Well, again, the kind of a friend analogy. He's like, you have multiple friends. you know you don't you're not going to You're not going to abandon them all. So maybe, yeah, maybe one is a little smarter, but maybe the other one knows you a little bit better or you know maybe it has a style that you prefer and maybe you ultimately would consult both. I actually do that today with contracts.
01:24:54
Speaker
if I because like you know do many commercial projects here and there. and i don't bother I don't do the contracts myself. right so with the The business, that if I'm going to do consulting, whatever they send me the contract. Now, I don't want to waste their time or mine.
01:25:10
Speaker
So my protocol is I take the contract to all three of chat GPT, Claude and Gemini and feed the thing in and say, I'm about to sign this. I'm the contractor in this relationship. Is there anything here that you think I should be concerned about? And if none of the three have any concerns, then I'll basically just go with it. But if, you know, if one does, then I'll you know maybe have an extra round of email about it. And in that, you know, in that angle, I do think, like,
01:25:39
Speaker
I could imagine a future where one of them you know has been my daily companion and has like read all my emails and knows who I am and has sort of that perspective on me. and The other one is like smarter and you know it's just better at contract type stuff. and I could imagine getting both and kind of getting different things from different sources. Yeah, like how will signing this kind of how signing this made me feel versus what is the exact ah legal requirements I have to fulfill if I signed this something like that. Yeah. And you can imagine the same thing in in code too, right? I mean, it's maybe a little bit more of a stretch there, but certainly applications evolve over time and the whole context of like, what are we trying to do? What have we tried in the past? Like what's been well received? If you had something that had all that
01:26:24
Speaker
it might be possible to get a different kind of value from it versus you know something that is like just the absolute best at designing the most efficient algorithm or you know solving the the hard logic problem.
01:26:37
Speaker
and And this is something we've we've seen in in kind of more traditional applications where developers make it difficult to export your data. so So it's difficult to switch platforms. You expect something like that to emerge in AI too, where no, you can't just export your whole history and and import it into another model because that kind of chat history serves us so as a mode for for us.
01:27:03
Speaker
Yeah, maybe. In the social media context, I would actually say I just think the network effects are the the primary thing. like I recently did, for example, export all of my Twitter history, and they gave me an unbelievable amount of data, like two gigabytes of data.
01:27:20
Speaker
It was not just what I had tweeted and who I followed, but like everything I've ever liked. it's It goes deep. I was just doing that to try to you know fine tune a model to write tweets in my style, ah which is still an ongoing project. So I didn't need nearly as much data as they were willing to give me.
01:27:36
Speaker
But I can get it. And yet it's still like not to. You know, we haven't seen Mastodon take over and, you know, we we still see like the network effects are pretty durable. The AI companies, of course, don't have network effects in the same way. So one thing they might try to do is create them in some way or another where, you know, and you can kind of squint and see this a little bit with like GPTs, although that maybe is abandoned. It's like unclear what's going on with GPTs. But if you are in an ecosystem where you can talk to an AI assistant and it can go out and consult other AI assistants that are you know either fine-tuned or have certain context or were like designed by experts you trust and you know that all exists within that ecosystem and you've kind of spent time curating you know the ones that you would actually want to use.
01:28:28
Speaker
Then I could see a way where it might be hard to switch, even if they do give you your data. But again, that doesn't really exist yet. I don't know if it's super easy. I actually haven't tried to export my data out of chat GPT or Claude. You can go just see your all history, but to two export it in a structured way, I'd have to double check to see exactly how easy that is today.
01:28:49
Speaker
But yeah, i mean we're very it's funny. like This is going very fast, but it's still like in many ways quite immature. it's It's a strange thing where it's like if you know the transformative AI may be here before like before everybody's even used a chatbot. And it also may happen before we really have a great handle on what the industry dynamics would be. ah But they could also change.

Transparency and Secrecy in AI Development

01:29:15
Speaker
you know it's like we but We might just fast forward through some of these phases and never really know. like if we had If we had just stopped at GBD4, what would market dynamics have been like? We'll probably never really have an answer to that question.
01:29:27
Speaker
I mean, on the point of the weather AI development as it's currently currently being done is a winner-take-all scenario. I do ah increasingly worry about kind of AGI corporations keeping their models internal. So say it turns out that TPT-5 or TPT-6 is an excellent code or perhaps better than than their researchers. and Well, why not why not keep that model internal? why Why do you want to provide access for others to use that if you can if you can get to superintelligence by ah using your ah your models to to do AI research. do you think do you think i mean we in In some sense, we're already seeing this. We we haven't seen the full O1 model released yet.
01:30:11
Speaker
and And that could be good if if they're if they're if they're testing it for for whether it's harmful or taking precautions. But there's also the the flip side of that coin, which is, are they not deploying because their ah the models are too powerful? What do you think about that? Yeah, I think that's going to be a really ah really interesting dynamic. I mean, we do have one company that is basically declared that that is what it plans to do. And that is the Safe Super Intelligence from Ilya, who of of course you know was an OpenAI co-founder and you know major driving force there, participated in the yeah you know temporary board move to Fire Sam Altman and then you know kind of never found his way back to the OpenAI team. And so now he's gone and started his own Safe Super Intelligence company
01:31:03
Speaker
And they have basically said they don't plan to productize anything until they achieve safe superintelligence. So that is pretty wild to think about. you know And it's it's unclear like what, if any, disclosure will even get from them barring you know legal requirements. And obviously, even legal requirements you know aren't always complied with, depending on you know how high stakes and you know whatever people's considerations may be.
01:31:33
Speaker
So I do think that is a very reasonable concern. um It definitely jives also with the reported, you know, leaked, allegedly leaked, anthropic fundraising deck from maybe a year ago now that included a statement to the effect of, we think that the leading companies in 2025, 2026 timeframe might get so far ahead of everybody else that there's no way for anyone else to catch up. A decent model of that would be that they just have the best models that kind of begin to outperform humans at advancing the fundamental science and then
01:32:05
Speaker
Indeed, like how is anyone going to catch up from that point on it? That's at least one stylized version of that story that you could tell. I've also seen really interesting comments recently from Miles Brundage, who was head of policy research at OpenAI i and and just recently left. And he had a number of interesting comments, but one of them was.
01:32:25
Speaker
that he thinks the decisions around what is deployed internally at OpenAI are going to start to become very important. Most of the most of his time spent there was focused on how do we deploy responsibly to the public. They didn't you know reading between the lines, you can infer that they didn't have too much process or you know concern around taking like a GPT-4 and just giving that to you know the technical staff or whatever, fine. But they've also talked about superhuman persuasion and especially in light of, you know of course, there was this recent
01:33:04
Speaker
sad news story of a kid that had been chatting with character A.I. that ultimately committed suicide. I went back and revisited a less wrong post from a person who described himself as a large language model researcher who also had become kind of enamored with and kind of you know carried away with a romantic relationship with a character, A.I. character. So you start to imagine like how would the you know if something was going to leak from open A.I. in the future,
01:33:32
Speaker
is it going to be China stealing the weights? like that's definitely we know Or Russia stealing the weights? That's one way it could happen. Another way it could happen would be an individual internal you know getting sort of mind hijacked by some like weird relationship with an AI that maybe is internally deployed in a not yet highly guard railed or not yet highly monitored sort of way and you know going down to a strange path. so yeah i do think it's i do think the I personally experienced this and I suspect that it is a real
01:34:07
Speaker
thing for a lot of people internal to the companies that when you have this kind of privileged early access to the future, it is kind of intoxicating. And it's like, it is the sort of thing that is like, really, you're really reluctant to give up. You know, I never got into a romantic relationship with GPT-4. But in that two month time when I was using it super intensively before that, I mean, I wasn't even allowed to tell people you know about it at all. This was all just pre-deployment testing. It was kind of a shock or you know it was it was an adjustment when all of a sudden that thing ended and it was like, okay, now I have to go back to a regular AI now? like This sucks. you know so it It did feel like a loss, it just a loss in personal capability, kind of a flowers for Algernon sort of thing where I was like you know i feel my my cap my personal productivity and capability is like diminished for for a losing
01:34:59
Speaker
access to this thing and I like felt kind of special insider and now that's going away too. like Oh man, it really sucks. They are reportedly becoming a lot more secretive even internally. It's very unclear what the real dynamics of that are, but the reporting recently has been like OpenAI held an all hands meeting to talk about the fact that they're going to deploy strawberry soon. and That was like from earlier this summer.
01:35:28
Speaker
And that's quite interesting in the sense that, well, what what you know how many people actually even know what's going on even within an open AI? You can imagine that if you are on the business team, maybe already you don't have a lot more information than the rest of us do about like what the actual current research frontiers are. So yeah, all this stuff is super strange. I do tend to think that we definitely need more visibility into what's going on.

Building Frontier AI Labs

01:35:54
Speaker
I have an episode of the podcast coming soon with Daniel Cocatello and Dean W. Ball, who Daniel was famously left open AI and declined to sign the non-disparishment clause, which eventually kind of blew up because he was denied his vested equity for refusing to sign that clause. they In light of the kerfuffle that came out of that, they changed their policies. He now has his equity back and can disparage the company. And he and and Dean have put together a transparency
01:36:26
Speaker
proposal that basically would require a certain amount of disclosure from these companies to, for example, the number one point in their disclosure plan is when a new capability is observed, you have to disclose that it was observed. and There's definitely some ambiguity there around like well what counts as a new capability and how do you make sure people are actually looking you know for capabilities in an effective way and you know to whom is it reported and with what level of detail and who gets access to those reports and are they anonymous or are they You know, would it be like open AI has observed this or anthropic has observed this or would it be a frontier lab has observed this? What degree is this like good or not good? I mean, I think it definitely there's definitely something to be said for it's very hard for the public or governments to plan for or respond to.
01:37:15
Speaker
AI developments if they don't even know what capabilities are being observed, if they're all just being you know kept secret in the Frontier Labs, and maybe even most of the people inside the Frontier Labs. The flip side of that is you know we used to worry as a AI safety community that like even demonstration of new capabilities would just further accelerate things.
01:37:37
Speaker
I think that's that may have been true at the G.P.T. for time. You know, I think you could you could have said at that time that there was still and its certainly your gorns and your demise, you know, had enough foresight that I don't think they were shocked by G.P.T. for. Personally, I was kind of shocked, even though I had the summer of twenty twenty two fine tuning the generation of models prior to that. And I was convinced before ever touching G.P.T. for that the A.I. was going to become transformative.
01:38:08
Speaker
But I was still like, holy shit, this is a real step change upgrade. The likes of which I did not expect to see as soon as it ties back a little bit to our very you know first ah points of discussion around just like.
01:38:22
Speaker
I think that really was a kind of maxing out of of what was possible at that time. And at that time, I think people had a sense that if all of a sudden everybody knew that GPT-4 was possible, that this would you know dramatically accelerate the investment into the space and all that sort of stuff. And that may maybe it was true then. It seems like now that maybe is all baked in. So if you were to say like,
01:38:48
Speaker
Oh shit, you know a new capability was observed and now you know a model can do a thousand consecutive steps on you know some computer task without any errors. you know would you Would you expect that that would like qualitatively change people's investment decisions? It seems like not really. It seems like all the people that can write the 10 plus billion dollar checks are already doing so.
01:39:10
Speaker
I mean, there there are still many, many billionaires who could perhaps do something with their personal fortunes, maybe. I mean, not not that many, but but some people it could perhaps make a personal investment. there are still There are still sovereign wealth funds that could step in the game. The likes of the European Union and the you know Indian government and you know, maybe like the Norwegian, you know, fund or the Singapore fund, but it does seem like it's getting to be too big for almost any.
01:39:42
Speaker
individual, and it's not easy to like break into the game even so. right I mean, you need a sort of narrative or you you need a sort of angle at this point. like I would not be surprised if XAI enters the real you know frontier circle. And beyond that, i I wouldn't expect too many more to get to that elite tier, which isn't to say that people won't build out their data centers and like train their models and whatever, but you know the talent is scarce.
01:40:12
Speaker
The know how is definitely like concentrated in these places. It's gradually diffusing, but there's a lot of know how inside the the frontier companies that is not broadly shared.
01:40:25
Speaker
And you it's just you know it's not 100% buyable. you can like To some degree, buy GPUs if you're not cut off from the global network of GPU distribution. But can you you know is there any amount of money that you can turn into a frontier lab? If you have just money and nothing else, I think it's still pretty hard. We've seen like The Falcon, which I think is the UAE based company, the Falcon series of models like they haven't really you know demonstrated that they can stay at the frontier. So and it's certainly not for lack of cash.
01:41:00
Speaker
and And why is it that money is not enough? what Why do you think that is? who is you know I mean, at at a certain point, people are interested in in you know in enormous sums of money. And is is this ideological, you think, or is this about connections or about you know getting by into your vision of what you want to do with this model?
01:41:19
Speaker
What's the kind of holdup here? It's probably all of the above. I mean, there was a really good blog post by a guy named Yi Tay. Hopefully, I'm saying that correctly, who was a former Google ML researcher, who is now at RECCA. I hope I'm saying that right as well. And they're kind of one of these like tier two foundation model developers.
01:41:42
Speaker
that I honestly think are kind of an endangered species. The blog post was about the difference between the infrastructure that they had at Google that he'd kind of become accustomed to and what it was like to go try to acquire compute in the wild. And it was just like, you know, you if you needed a an argument that Google has moats, like this would be about as good of one as you could come up with. He said that one time when training a model at Google, he like left the training run on for, I forget, but you know weeks, let's say, and came back weeks later and was like, oh, I left this thing running. like Holy shit. He'd just run the whole time without failing. And he was like, in contrast, you know when we are buying stuff, I think this is maybe normalized a little bit because, of course, you know people are trying to make this stuff better. And there has been a there was like a real scarcity of compute. And people were just kind of desperate to buy anything a year ago. And now that's eased off. and like
01:42:41
Speaker
The prices for H100s have come down a bit. It's not such a tight market, not such a not such a compute seller's market as badly as it was. But at the time, he was like you know providers vary the quality really varies the the quality of the cluster that you're buying into really varies. The you know frequency of node failure that you're going to see really varies. Bottom line though, we're constantly putting out fires. And you know the idea that I could have left something on for weeks and it would have just run without error very few places have that quality of infrastructure. So even if you go to Jensen with a container ship full of cash and say, you know I want to buy as many H100s or whatever you know the next chip is, I want to buy as many as OpenAI is buying. I want to buy as many as Elon is buying. And even if they sell them to you, you know you're still going to have a lot of work at at multiple layers of the stack.
01:43:38
Speaker
to recreate from the physical cluster to you know the recovery, to how does the data get passed around, to all sorts of different little optimizations and know-how, which are probably not even generally broadly known. It's not like one person at OpenAI eye knows all of those things. Even if they have a qualitative sketch of it, you know they're not going to know all the little nitty gritty details.
01:44:02
Speaker
And it's important to remember too, they are getting paid millions of dollars. So it's you know you would have to to poach someone away from an open AI who is you know maybe already making millions of dollars in a ML research role. With money, you're like, okay, am I going to pay them even more millions of dollars? And you know how many times am I going to do that? and you know, does that person feel like they may also just not believe it? You know, I mean, in terms of like, is it ideological? It maybe might be, it might be ideological. It might also just be like, listen, they're like, I don't think you can do this. So I really want to come join a losing effort. You know, I kind of like my spot here at OpenAI. I like being on the frontier. And, you know, I just don't really see that with any amount of money, you're going to create a credible competitor in the UAE, for example.
01:44:49
Speaker
So yeah, I could, maybe I could double my money and come live there tax free and, you know, drive a Lamborghini in the desert or whatever. But is that actually appealing when I'm already making a huge salary and I'm on the cutting edge and like, you know, my lunch conversations are with like the most brilliant people in the world. It's a hard, it's a hard sell. I think somebody like Elon can, can do that. But how many people actually could make a compelling enough case?
01:45:17
Speaker
that they would be able to peel real talent off of any of these top providers. I think what we've seen is that the the people that have gone out and started these companies, a good chunk of them have kind of come back to the big tech fold one way or another. You have these sort of aqua hire non acquisition deals that are you know bringing Mustafa and the ah pie team to Microsoft and Adept had a similar deal and There's others that I'm forgetting, but this is sort of a character, you know, also has had a similar deal. Noam Shazir has gone back to Google. Over and over again, you see even companies that have raised hundreds of millions or even a couple billion dollars. And in the case of character, in the case of Pi had like a pretty legit frontier ish model. And in the case of character had arguably
01:46:09
Speaker
the most traction of any product outside of maybe chat GPT still kind of saying, yeah, I don't know that we're really in a position to win long term. So yeah, if I really want to be on the frontier, I kind of got to go back to Google or I kind of got to go to Microsoft or I got to go, you know, one of these places that he's going to build the hundred billion dollar, if not the trillion dollar data center. So yeah, who else can pull pull people? It's it seems like it's increasingly very tough to do.
01:46:35
Speaker
Yes, it's an interesting new world. i mean in In any previous decade, if you said something like, you know the companies that are the biggest right now will be the biggest in 10 years or something, that's been a bad bet. right No one can replace Kodak or something. That's that's a bad

Vanity AI Projects and Their Impact

01:46:50
Speaker
bet. Because the the timelines too to AGI or to super consultants that are being projected are so short,
01:46:57
Speaker
It really does seem like the the big players now will be the big players in the end also and especially because of these dynamics that i wish you you mentioned here where. You have to have existing relationships you have to be a kind of a trustworthy navigator and.
01:47:14
Speaker
people have to to buy into your vision and and your story of of why it's it's worth kind of duplicating this effort. I could see a lot of... you know I would expect a lot of vanity projects to pop up also, where it's just about trying to get the model from country X or something, or the model that's associated with this group of people. That's not really worth doing, I think.
01:47:38
Speaker
but yeah i We should kind of run through developments in video generation and brain reading and science and so on. You have ah you have this list that you that you've sent me and i I would love to hear your thoughts on all of those topics also.
01:47:54
Speaker
I think the the very high level observation is that this is one of the biggest things that kind of sucked me into the space three years ago with Waymark, because again, video has this like sight sound, you know, it's inherently multimodal. And as I looked at these different things, I was like, wait a second, it's the same architecture that's writing the text as is understanding the images, as is generating the images. A little bit of a fu fuzz factor there. It's not exactly the same architecture, but the attention mechanism, like the the core stuff is like really proving to be super, super general. so That was an insight for me that I sort of stumbled my way into a few years ago that
01:48:36
Speaker
If the same technology can do these different tasks in isolation, it stands to reason they can probably do them all together. And we can probably sort of infer that if we see a certain capabilities trajectory in one domain, likely there will be a similar trajectory in other domains. So that's the high level story. And then the details as you go look at different domains, I think basically support that pretty well.
01:49:01
Speaker
Image and video generation is one that is obviously pretty visible. We've seen OpenAI with Sora. We've seen Google with Vayo and Runway in terms of actual deployed stuff with their Gen 3 model is probably leading the pack. This is definitely becoming a more and more competitive space. It's getting better. you know It's getting better with scale. It's getting better with better data quality. It's getting better as it sort of, you know they work on all sorts of refinement techniques. And for me, maybe the most interesting thing about these video models is how they are starting to have emergent properties. Emergent properties in general are you know a very interesting and definitely a very safety relevant phenomenon. like We know that we can predict with pretty high confidence like what the loss number might look like as we project out a scaling law, but what does that actually mean in terms of what it can do and what it can't do?
01:49:51
Speaker
in the language models that's hotly debated and many examples in both directions in the video models, the understanding that we hear from both OpenAI and Runway at least is that they are world models.
01:50:08
Speaker
they They seem to be learning intuitive physics, that they seem to be learning things like object permanence, that they were not specifically taught, but are just kind of getting grokked, if you will, or cohearing, if you will, at some scale and not by anybody's you know hand-coded feature design. But just because this stuff seems to happen as you get big enough,
01:50:36
Speaker
So that's really interesting.

Brain Reading Technology and AI

01:50:39
Speaker
but You might wonder, like well, why would OpenAI put so much into a video generator? like they don't Do they really care about Hollywood? Probably not. They also haven't released it broadly at all. So they're not you know making any revenue from this. But if it is the kind of thing that can actually simulate the physical world in a reasonably high fidelity way as you start to see with these like pirate ship sloshing around in a coffee cup sort of demonstrations where it's like, damn, that looks pretty good. you know
01:51:09
Speaker
As a human brain, I can't really even tell you a problem with the way that that water is sloshing around. I would need like real, you know, intense simulation software to to come up with anything better than that. That is definitely a space to watch. It's starting to pop up in the form of generated video games, too. This has been just like the last couple of weeks where you're now seeing interactive, playable video games that are totally generated by a neural network on the fly. And this only really works if there's like enough coherence and stability in the world model that you're not just, you know, if you turn your characters you know one direction and then you turn back and like it's a whole different scene, that's like a very weird video game to play. But these you know pretty important traits like object permanence that are
01:52:07
Speaker
components of an effective world model seem to be coming online through just pure scaling. So that's quite interesting and definitely a space to watch. As you go through all these things, it's it's kind of similar story, right? Like my next one is brain reading is starting to get good. I've done a couple episodes of the cognitive revolution with the authors of the mind eye and mind eye to paper.
01:52:29
Speaker
What they did was put, actually this is an open source data set. So they didn't have to do this, which is pretty cool. They just took an open source data set of people in an FMRI being shown an image and then having their brain activity recorded by the FMRI at the time that they're seeing the image. So now they have input is image, output is this pattern of brain activity.
01:52:51
Speaker
and then reverse it, learn to predict the image from the brain activity. And you can go you know Google MindEye and MindEye 2 to see how high fidelity the recreations are. It's like, I would say, you know sort of artists rendering. If you were to you know look at like a courtroom sketch or like the police sketch type of a thing, it's it's definitely not pixel level accuracy, but it's also arguably limited by the resolution of those scans, what the the the little,
01:53:19
Speaker
I believe they're called voxels, the little 3D spatial elements of the brain, the resolution of which they can measure is on the order of like a grain of rice or maybe like a little smaller than that, like a half a grain of rice or a quarter grain of rice. But within those, you know, you have a very large number of cells. So they're not looking at this at the neuron level far from it. We're talking several orders of magnitude up from the neuron level and looking at blood flow to the area. So they're not even looking at like the firing of of the neurons. They're looking at just how much blood is being directed to this area, which correlates to the activity. And that alone is enough to recreate images where you're like, well, I definitely know what you're looking at. I didn't see exactly what you saw, but I i can be pretty confident that like the narrative that you would have about this and and then you know my understanding of it are are pretty similar. That's insane to me and is another one of these
01:54:12
Speaker
things where I'm like, I swear to God, if I was the kid and this had happened, you know it would have been like all over the news. And now people even who are pretty plugged in to technology and even paying attention to AI has often not even heard of that kind of thing.
01:54:27
Speaker
Neuralink is changing that somewhat, of course, as well, because they are actually applying this with real patients. You know, they've got, I think now two people who have the implant and they're of course, you know, trying to scale 10 X per year, probably for the foreseeable future to they measure their success in the basically the speed with which you can use a computer. So they've got your the bandwidth. It is or the they've got the implant. It is reading your.
01:54:54
Speaker
brain activity in a different way from the fMRI thing. They actually have electrodes you know in the brain, so they're they're measuring with a higher resolution and then translating that to your desire. and How accurate you know can you move the mouse? Can you actually functionally do stuff? Their first patient plays video games with the implant.
01:55:14
Speaker
So I think that's really interesting. You know, Elon's high level rationale for why he wanted to create neural link was not actually to treat people with, you know, spinal cord injuries, but rather to increase the bandwidth of human computer communication so we can go along for the ride with AI. And, you know, the, the data there is very limited. The amount of data in that, in that mind eye paper was like,
01:55:43
Speaker
10 hours, maybe 10 to 20 hours per patient. and One of the big things from mind eye to mind eye two was that in the original one, they trained one model per patient. but That would mean with that technique, if you wanted to do that for a new patient, you would have to do that 10 hours in the machine yourself because everybody's brain is shaped differently and whatever.
01:56:02
Speaker
in the MindEye 2, they figured out how to merge all those data sets across patients so that an incremental patient, you could do it in just one hour. So now you could go in just one hour and mix your data in with all the other people's data and get similar results. And that's still a pretty small data set. I mean, you can imagine If you had, you know, a thousand X, won't forget even a thousand, whatever, just imagine scaling up the underlying dataset. Roughly speaking, my intuition would be if you 10X the underlying dataset, you could probably 10% your incremental dataset. This is kind of loose logic, but like this is what we see in fine tuning of the frontier models.
01:56:39
Speaker
They know everything, quote unquote, and therefore you can give them two examples of your task at runtime and it can kind of pick up on that and do your task. So there's there's definitely ah an inverse relationship between how much pre-training foundational data there is and then how much incremental data and and compute is needed to learn you know a test case. So it seems like we're headed for a world in the not too distant future where a very quick calibration on your individual anatomy can get to the point where a model can be fine tuned to you and you know read a lot of activity out of your brain. So it seems like a you know voluntary brain surgery voluntary implants could easily become the norm in like 10 years time. I mean that's you know that gets into like really sci-fi situations but I do think we're you know we're headed for sci-fi situations one way or another and that's only one way it could happen.
01:57:35
Speaker
What about transmitting data back into the brain? Because for it to be really an enhancement, you would have to somehow communicate communicate both ways so that I, you know say, I think something. I use my, my say, I send it ah to ah to a model. There's some processing. Now I want to then say something really smart based on all the processing I've i've had from my for my external model. How does the data get back in? Can I download the you know Black Belt skill like in the matrix?
01:58:05
Speaker
I don't have as good of a sense from that for that. I think that that is, I would say, read is going to be easier than write for you know operations involving the brain. But there is a little bit of that. They have the blind sight project where for people who have some sort of injury between the back of the eye and the back of the brain, you know, essentially the the signal is not being transmitted through the optic nerve, whatever. They can take a visual receptor, you know, a camera or whatever, and then map that onto the back of the brain and give you at least some ability to see.
01:58:42
Speaker
The resolution on that is pretty low. It's, you know, to say, immature technology would be an understatement, but that they are doing it at all you know is is kind of a sign of things, presumably to come. In some cases, you think like, well, what is the trade? You know, yeah how do you get training data on this? Is there a is there an inherent bottleneck on the training data that you could get and.

AI's Potential in Biology

01:59:04
Speaker
I have to say, I don't know on that one, you know, how many people will be eager to volunteer to contribute to a process of generating all the training data that and when the process involves writing to the brain, that could be tricky, certainly be a lot trickier than like read, right? I mean, I would much sooner sign up for something that is just gonna read my activity and try to make sense of it and contribute that way versus something that's gonna actually try to send signals into my brain. So that one could be tough just because the if everything is kind of on the same trajectory of exponential inputs, if some inputs just can't be scaled in the same way, then that that could present a big barrier to some of those things actually happening. And and that's one where I would
01:59:44
Speaker
say, I don't know, it's possible. you know it might It might just be really hard to get that kind of training data, but I also wouldn't count Elon and the Neuralink team out. yeah i mean What are some some general learnings about AI in in biology? you think this is Do you think biology is too complex for AI to make a big difference? Do you think drug discovery is happening in in the way that we ah imagined that it might happen, in the positive vision that we could test out 10,000 different drugs and quickly get to something something that works?
02:00:19
Speaker
Yeah, this is kind of a frontier for me right now in my own world model. And just, you know, I i don't know a ton about biology, so i've I've definitely got some catching up to do. But I think the same basic trend seems to be shaping up as it has in language models and in, you know, image understanding and generation and text to speech and everything else. Right. We've seen over the last year, the beginning of these kind of foundation models, you know, original alpha fold was like,
02:00:49
Speaker
a breakthrough, but it was a highly engineered system you know with a very specific set of inputs and outputs and a like ah strategy for why this is going to work that was a lot more detailed and grounded in a lot more science than just, hey, what if we just throw all the data we have into a transformer and see what comes out the other end? We are now starting to see, and I don't i don't mean to diminish the you know hard work that has gone into some of these things, but we are starting to see things in biology that look more like that.
02:01:18
Speaker
more like a GPT-2 or 3, where it's just like, this architecture seems to work. What if we throw a lot of stuff into it? Does it still work? For me, one of the big models that I think we'll be kind of looked back at as maybe a GPT-2 is a model called EVO, which came out of the Arc Institute. And they basically just took you know pretty standard architecture and trained it on a bunch of DNA sequences.
02:01:48
Speaker
and then found that it seemed to be learning what you might call a cell model in the same way that the language models ah appear to be learning a world model. And of course the exact details of that are contested and not just not well understood. But you know it is pretty clear in the language model realm at this point that there are meaningful concepts that arise that are like human recognizable in the middle layers of a transformer. And we have various techniques now to try to tease those apart and understand like conceptually what is represented internally. And it's not just like a bunch of correlations between you know this token versus that token. It's much more robust than that, where you know you can change tokens. But if the semantic meaning is the same, like the representations continue to look pretty much the same. like there's a much more so It's not just token level correlations. There is a semantic understanding.
02:02:44
Speaker
That also seems to now be happening. And the Evo model was the first one that really came to my attention of that happening in biology. So they just trained this thing on pure DNA. And the kind of headline or you know provocative question was, is DNA all you need? Probably no, is the answer just like, you know, text is not all you need. Eventually you want multimodal models. But this one was just trained on DNA sequences.
02:03:09
Speaker
And it did learn features of the world that are higher order that it was able to tell, for example, which genes are really important and which genes are not so important and and do what is called gene essentiality scoring just based on having learned all of these things. And they they basically do that by the uncertainty measure in the ah perplexity in the prediction given a specific sequence. So if you feed a sequence into this model and it's like highly confident in that sequence as and you know low perplexity, then you're like, okay, that must mean that this is something that is important.
02:03:50
Speaker
because it's like it's got to be that way. It must be conserved in nature. If you feed something in and it's like not confident, it seems like it's it's highly uncertain, then this is maybe an area that could vary a lot because it's not actually so critical to the organism's survival. So that is early stage stuff. They only did that on bacteria and viruses that affect bacteria, bacteriophages.
02:04:19
Speaker
And so there's a long way to go. They didn't even have any eukaryotic DNA in that initial version of that. But one assumes that there's an EVO2 coming. And we've also seen some of this stuff out of the company evolutionary scale. Their ESM3 model is in fact multimodal. It combines sequence structure and function.
02:04:40
Speaker
And i'm I'm hoping to do an episode with one of the founders of of that company soon, so I'll be going deeper into this. But again, just in the last week, there has been an open source project where people are applying some of the interpretability techniques that have been developed for language models at Anthropic and other places.
02:04:58
Speaker
and applying those to these biology models and starting to see that, like yes, there are features that are represented in ways that we can start to pull apart. What's going to be really fascinating there is probably a lot of those features we don't know. And I think that's going to answer a lot of questions kind of philosophically around what these things are learning and and what what it means or like what the ceiling might be. Because you hear a lot of stuff in the language model debate around, OK, sure.
02:05:28
Speaker
Even if, you know, first it was like, well, they're just stochastic pairs. They're just, you know, doing dumb autocomplete. I think that is basically now we could put that behind us and say, we're doing more than that. But then there's still the question of, okay, but they're just learning from text and we wrote all the text. So they're still just learning from stuff that we figured out. So maybe they can learn what we know, but can they really ever learn anything new?
02:05:49
Speaker
And in text, I think that's kind of a hall of mirrors that's like pretty hard to really resolve. But one way we could resolve it, I think pretty conclusively is if we do a similar process on biological data, you know, various kinds of sequences, instructions and what have you. And then we start to do this interpretability and we look at these features and we're like,
02:06:10
Speaker
making actual discoveries where it's like the model learned that first, then we interpreted what the model was doing, and then we tried to figure out what does that actually mean in reality, and then we learned new stuff that way. That I think is going to be a probably a big driver of biological progress and hopefully medicinal progress.
02:06:31
Speaker
and would, in my mind, kind of resolve. and i think And I think I know where it's going. But when we see that, if we see that, and I think we're starting to see glimmers of it, then I think we would have to say, geez, these things can learn things that we did not know, right? Because we certainly didn't write this code and we didn't even necessarily have some of these concepts at all, and yet it has figured out what they are, and it was only by extracting these concepts from the model and then you know trying to figure out what it was that we even discovered these phenomena in the first place. That would be a prediction for like a 2025 timeframe that I think 2025, maybe 2026, but it's happening fast. If you start to see that, it could both really move the needle and biology in biology and medicine, but also I think it should be like a
02:07:16
Speaker
from a philosophical understanding standpoint should be like a big update for a lot of people, I would think. yeah if we If we have AIs making so ah genuine scientific discoveries, then that seems to be the pinnacle of human activity. that That would be a big thing. You can imagine ah an everything model that does everything you know does all parts of the the loop. i think what i you know To refine my or clarify my prediction a little bit for just the next 12 months, I don't necessarily think that we will see a model that can learn those things and then have a conversation about it with you, but something that
02:07:49
Speaker
When we reverse engineer it, we can say, oh, this thing picked up on this concept and understood how these things connect to each other in ways that we didn't. And it was from that reverse reverse engineering that we learned that, not necessarily that it will like give us you know the download in a in a digestible way. Although that could be, you know that's maybe more of a 2027 possibility than 25.
02:08:10
Speaker
Yeah, I mean, learning these things from interpreting the model, seeing how how it's solving a problem and then kind of reverse engineering, what model of, in this case, a cell, for example, it's it's also an interesting way of kind of verifying that some that some new discovery is being made here. When you interpret the model, something new pops up and and you you can kind of determine that and a discovery has

Advancements in Self-Driving Technology

02:08:34
Speaker
been made. that's an That's an interesting kind of workflow, I think.
02:08:37
Speaker
Finally, i let's talk about AI in the physical world, so robotics and self-driving. Where are we on self-driving? 10 years ago, I was promised that that perhaps I wouldn't have to take a driver's license, and self-driving was just around the corner and so on. Since that, we we've kind of been very close, but not quite there. Now, we have self-driving taxis in certain American cities, but it's still not mainstream. Where do you see self-driving cars going from here?
02:09:08
Speaker
Mainstream in a word, I think is is probably the answer. I think Waymo is the is the real existence proof at this point that it can work. It is pretty mainstream in San Francisco at this point. You know, if if you go if you go to San Francisco, first of all, you download the Waymo app. It's exactly like the Uber app. It does not require a waitlist anymore.
02:09:29
Speaker
And you just hey you know call the car. It shows up. It really is the sort of future that you've been promised. You get into it. There's no driver. There is still a steering wheel, but there is no driver. There's just a sign on the steering wheel that says, don't touch, basically. um I assume you can't touch it. I mean, if you probably can't use the steering wheel.
02:09:50
Speaker
I didn't try. I respected the signage. Yeah. Unclear what what sort of override or I don't know. I didn't mess with that. I was assuming that that was just there to you know allow a human operator to come use it when needed, but but I don't know what would happen if you tried to yank the wheel ah while you're going, but I wouldn't recommend it. yeah it's i mean it was It's really good. The experience is super smooth. What was striking for me was just how quickly it faded into the background.
02:10:16
Speaker
I was like super geeked to do it. you know I had done the FSD before. I'm a little bit out of date on my FSD. they've There've been upgrades and people are excited about them, but I haven't used the very latest version myself. and It has even been a few months since I did the Waymo because i that was the last time I was out in San Francisco. but it was so comfortable and so normal that within like five minutes, and despite you know being very engaged in this space and excited about doing this and you know what a new experience, I found myself checking my phone yeah know within the space of like a 20-minute ride. and I had to repeatedly remind myself that like you're in but you know you're you're experiencing the future now. like Pay attention for a minute.
02:10:54
Speaker
But it just faded to the background so quickly because it really just felt so natural. And you know they've now got statistics out there too that basically seem to say that It's safer than human drivers. Like how much safer exactly is not entirely clear, but they've got big reinsurance ah companies involved in these studies. I think it was Swiss Re. Maybe I need to fact check myself on that, but you know, a big globally known reinsurance company participating in a study that assesses the safety factor of cars is like the kind of thing I think you really would have a very hard time faking.
02:11:36
Speaker
So I don't think we've seen quite the same thing from Tesla. They do release a bunch of data. They say they're safer than human drivers as well. I think that is it's like definitely fair to be more skeptical of exactly where Tesla is at. But when it comes to Waymo, I think, you know, anybody can go get in one and there's really no faking either the experience or the numbers.
02:12:01
Speaker
it's It's not in any way kind of specialized to San Francisco. so so waymo is Why isn't Waymo operating and in and a bunch of cities? They're starting to move. I think they're in four cities. I don't know exactly which four, and they've got a bunch more coming soon. They're applying for approval in DC, among others.
02:12:20
Speaker
in my you know once upon a time, the auto capital of the world, Detroit, arguably still depending on who you ask, but certainly the self-driving isn't coming from here. and I suspect we may be a late adopter for cultural reasons, but they are beginning to expand. I mean i think Timothy B. Lee is binary bits on Twitter is a ah great source to follow for this you know kind of coverage like he's done deep dives into the numbers and really, you know, tried to understand it at a very granular level. So I definitely take some of my information from him with with a high level of trust. And basically his take is that they are now making the moves that you would make if you felt like your product really worked and could do a national rollout. So it probably still takes he kind of compares it to Uber. He's he's like, you know,
02:13:11
Speaker
There is still a lot of detail like where do the cars park at night? You know, where do they where do they get fixed? Who's going to fix them? The Lidar, you know, it's an expensive kit that the Waymo has who maintains those, you know, even just at the level of like cleaning you know the surfaces and making sure they're good to go on a daily basis. So there is a lot of logistics. And he was kind of like, you know, Uber in theory had the most like elastic model you could imagine for transportation. They didn't even have to make the cars. They just had to recruit drivers and get them on an app. and It still took them years to go you know from popular in San Francisco to like deployed everywhere. Waymo has a more intensive process. They actually have to, with partners, but you know they have to actually produce the physical cars and have them all in good working order.
02:14:01
Speaker
and you know kind of create the network for all the sort of support stuff, but it seems like they are beginning that process in earnest now. and I think they they even just announced a partnership with Uber too. right so there're you know the That may be one of the ways that they can kind of take a somewhat of a shortcut to scaling is say, well hey, maybe Uber can handle all that crap that we you know we don't want to we don't want to handle. What about automating our kind of household work by by humanoid robots? Do you think we're making progress in the same way that we're making progress with the like language models in that you train a lot large model to to control a robot and the model is general and flexible in the way we see language models being general and flexible?
02:14:46
Speaker
Yeah, it sure seems like it. Again, I would say GPT two, you know, maybe somewhere between GPT two and three right now is and obviously these analogies are somewhat ridiculous. But training data has been the big limiting factor there. You know, there's just not a large repository. We've made the Internet, you know, and and we have like all the text on there. And so, you know, in theory, one could go learn anything you want to know on the Internet. And that's kind of what the eyes have done. We don't have a similar repository of intuitive, you know, fluid motion through the world. What about video games or YouTube videos or some something where you can kind of infer motion through the world? Yeah, that is well, and that that definitely ties back to the the video generators as world modelers, too. So there is something to be said for that. There's definitely some challenges with it and, you know, techniques that need to be developed. We are starting to see positive transfer. I think I think it's what actually
02:15:43
Speaker
been probably a year or more now that you could confidently say we've started to see positive transfer in robotics. And that's a huge tipping point. Negative transfer to positive transfer is like, when I try to train on one skill versus multiple skills, in virtue of doing multiple skills, do I get worse at all the skills relative to if I had just trained on each one, that's negative transfer. Positive transfer is when I get better on all the skills relative to just having trained on each one. And so that is kind of a signal that like,
02:16:11
Speaker
this is gonna happen too. you know When you see this positive transfer, that's like... Now I'm just more gets you know more is better. And and we have certainly plenty of good reasons to believe that on priors from other domains. But now also enough positive transfer in robotics to to believe that it will happen there as well. And so now all these things become possible. Right. You've got lots of different techniques from teleoperation, which is, I think, what a big part of what Tesla is doing at their recent robotics and robo taxi event. As far as I know, they had
02:16:45
Speaker
physical robots walking around and talking to people and even like pouring drinks controlled remotely by humans that were perhaps in like a vision pro apple vision pro or i'm not sure what the human is doing the cognitive work to control the robot but You know, that's how you start to really bootstrap your way into the training data of, okay, when I see this input environment, what do I actually do? How do I pick up the glass and not drop it? You know, how do I pour without spilling, et cetera, et cetera. So all these things have, they've already come pretty far. The most impressive demo I've seen online recently, it was a robot folding laundry.
02:17:22
Speaker
And, you know, this was the kind of thing that, again, pretty recently you'd have people saying it'll be, you know, never or decades, you know, until a robot can fold laundry. Gary Marcus has said, you know, never in my lifetime is a robot going to be able to walk into my house and make me coffee. And I have to say, I think, you know, if he's willing to open his door to that robot, I think that that is almost certain to be achievable in the 2020s and probably you know, in the first half of the remainder of the 2020s, if I had to guess, potentially still at a high price point, you know, again, these things have to be manufactured. Tesla is definitely putting themselves in position to make many millions of these. Elon has said he thinks there will be more humanoid robots than humans in the not too distant future. So that's a lot. Yeah, I don't see any reason to think that it it

Unintended Behaviors in AI Models

02:18:13
Speaker
won't happen. You see Nvidia, too, has some really interesting projects these days around simulating data. And this is this has demonstrated that it can work in
02:18:22
Speaker
other areas of physical science like I did one episode on chemistry basically a guy named Tim Dygman had trained a model on the data created by molecular simulation you have these like super compute intensive simulation processes that say like okay here's our configuration of atoms or whatever now let's solve the wave equation and figure out what force you know is is acting on each atom or each even electron maybe given the
02:18:54
Speaker
configuration that we input and then we'll like increment a tiny time step which is like down to the like 10 to the minus 15 seconds or something ridiculously small and then solve the wave equation again from that new thing and in that way you know they can make pretty high quality simulations of like very small physical systems and then they train a neural network on that and then it works you know orders of magnitude faster because it doesn't have to do all the intermediate calculations it can just sort of jump to the answer And, you know, that's what is also starting to happen in robotics. and Lots of simulation, Nvidia, you know, graphics cards, you know, they're very good at these sort of 3D simulations. And so they're creating tons of this simulation data, which will also be used to train the models. So, yeah, I would just expect it can't be all that long before we start to see robots entering the real world.
02:19:49
Speaker
I think the jury's kind of out as to how many of them will be humanoid versus like, you know, small humanoid. I kind of want them to be like three feet tall and not like the same size as me. I would like to think that I could like kick one over and it can't kick me over if it really comes to that. But yeah, obviously the you know, if it wants to reach up to my top shelf and actually get the coffee, it's going to have to have some affordance for that. So, you know, the the argument that the humanoid form factor is the one, you know, there's definitely some some rationale to it.
02:20:17
Speaker
especially as the robots come into the real world, these sort of strange, anomalous, aberrant personalities that they seem to have without anyone's intent start to become like less of a curiosity and more of a real issue.
02:20:36
Speaker
so yeah like Whether anyone should spend time trying to understand what's going on in infinite back rooms and truth terminal, I think is debatable, at least in when when it comes to the specifics. But the, you know, the super short abbreviated story there is a performance artist has put multiple instances of Claude together and just let them talk about whatever they want to talk about.
02:20:59
Speaker
And it actually seems to be the case that Claude has kind of its own vibe that it pretty consistently settles into, which has been described as cosmic trickster spirit archetype. And it gets weirder from there. I don't think, you know, Claude is not the end of history.
02:21:19
Speaker
but I don't think this is going to be like the thing that really shapes the future, but it is really important to keep in mind that there's a lot about these things that we still don't understand. and When left to their own devices, whatever that means, or when put out of distribution, you know talking to each other in a sort of unconstrained environment is kind of putting them out of distribution, then you get weird behavior that is not expected, not designed by anyone, and just kind of like has to be discovered. you know it's It's a good reminder that these things are being trained, you know almost conjured out of resources, but not designed, you know not engineered in the way that almost everything else in the built world has been engineered. A robot with an emergent personality is like a strange thing to to ponder.
02:22:06
Speaker
I was about to say, yeah it's an it's an interesting curiosity when you see a model talking to ah another instance of itself saying interesting, wild, weird things. But it's it's it's another thing if you have a humanoid robot in your house, ah you want a stable and and harmless personality in that robot. And you want to be very, very kind of secure in the knowledge that that it's not going to change its personality.
02:22:34
Speaker
And as of now, we do not have a way to create that. I think most people would agree that anthropic has in some sense created the most aesthetically pleasing models. The top performing models are typically considered to be like you know the most ethical, the best behaved. You can have kind of these long philosophical discussions with them and yet they have other modes you know that are just very bizarre to behold.
02:22:59
Speaker
so Yeah, I mean, I think this kind of connects to a bunch of these other topics where, you know, what is the outlook from here? Well, for one thing, we have the new o one models create a new scaling law. So now we have kind of two key scaling laws. The first one was just.
02:23:16
Speaker
how much pre-training goes in that, you know, predictably leads you to better performance. Now there's a runtime scaling law as well. How much the model thinks at runtime can give you better performance. So that's what a one is doing. I use that for coding. I'll take like my full code base of a small app and say, you know, give me options for how I might think about implementing this next feature.
02:23:37
Speaker
And it will come up with multiple different plans and reason through them and finally give you you know the the one that it recommends. it isn't They are not sharing the full chain of thought there, presumably because they don't want people to grab that data and then go train their own models on it for probably a mix of business and safety reasons.
02:23:54
Speaker
So we see that happening there. That's another thing that's going to be like hard for people to catch up with. right This model is already reasoning. It's beating human experts at routine tasks. It's like starting to reason its way through more and more advanced problems. We haven't even seen the full O1 yet, but they have said, think of this as GPT-2. We know how to get to GPT-4.
02:24:14
Speaker
So all of the big AGI corporations and the big tech companies are investing a lot of money in kind of next level or next generation training runs so that are going to be extremely expensive. So they all seem to believe that that inference time compute won't be a full substitute for for training compute. it's It's not the case that you can just pour more compute at inference time into GPT-4 and then get kind of infinitely a better results as you pour in more compute.
02:24:44
Speaker
what Why do you think, do you agree and why do you think we have that limit? Why can't I ask chat GPT right now to think for two hours and come with come up with something that's truly brilliant? Yeah, that's a great question. I suspect it has something to do with just what it can represent internally versus how many different things it does represent internally at runtime.
02:25:13
Speaker
In other words, like if you took a GPT-2, the old you know an infinite number of monkeys on an infinite number of typewriters you know can produce the works of Shakespeare, okay, but in a more you know in a somewhat compute bound environment, you can't do literally everything, and you also can't evaluate everything anyway. So you have to have some quality of concepts at your disposal to manipulate, and then the runtime seems to be more about like actually doing more of the manipulation.
02:25:42
Speaker
So you need to have like decent concepts of software engineering to work with to then go down one path and then go down another path and then another path and then evaluate. And it seems like the difference between the GPT-40 and the 01 is not so much the quality of any individual path that it can choose, but or that it can go down in the first place. But the fact that it now has been trained to not just do one and spit it out to you, but to do one and another and another And there's probably some tricks there in terms of how does it decide when to stop and what's good enough. That's some of the know how that we don't have in the public. But yeah, I guess my general mental model is the pre training kind of gives you the raw level of capability and the concepts that you can work with. And then at runtime, how many of those you're going to consider and how many different ways you're going to beat them up and how many angles you're going to consider them from.

Predictions about AGI Emergence

02:26:37
Speaker
is more of a behavioral training question. So you know at some point, and if you I don't like to analogize humans to AIs too much, but if I just do apply this to myself, I do kind of think like, I mean, it we're more continuous, so it's tough, right? Because as I do this thinking I'm also kind of in inevitably updating, I can't turn that part of myself off, but that the model itself at runtime is not updating the weights.
02:27:00
Speaker
So at least as of now, at least as far as we know, nothing like that is is happening. So it seems like there is this you kind of max out, you know, given what concepts you have, how well they can be applied. And so at some point, you know, you you just need better concepts in order to make more use of the runtime. And they they in that way, they seem to be compliments.
02:27:20
Speaker
Yeah. All right, back to the big outlook on the future. So you have you have your predictions from the leaders and CEOs of the various ADI corporations. And when you hear these predictions, they are kind of jarring.
02:27:36
Speaker
We're talking about two years, three years, within five years, within this decade for AGI or transformative AI or some concept like that. What should we make of this? Is is this actually happening? Because if it is, then then all of our lives are going to change quite dramatically.
02:27:54
Speaker
Yeah, I mean, I think it is. I interpret it as basically earnest guidance as to what to expect. And it is strikingly consistent across all the frontier companies right now. You know, the incredible moment in a not.
02:28:11
Speaker
ah too far in the past Dworkesh podcast with John Schulman, who's now actually moved from open AI to anthropic, but was in charge of kind of the post training of the of the GPTs. You know, Dworkesh asked him like when AGI when, you know, and he said, well, you know, probably not next year, like could happen if one of our training runs goes, you know, better than we expect, but probably more like a two to three time your timeline. And Dworkesh is like, but that's still really soon. So from that, you kind of get a glimpse into how much they have sort of come to take that as their default assumption. you know It's not a foregone conclusion, but in his mind, it seemed like you know it was less if and more on exactly what timescale, and it didn't seem to be thinking super long timescales. Aria from Anthropik has said very similar things in their allegedly leaked fundraising deck and otherwise. Shane Legg also you know basically said the same thing. Also, on a Turkish podcast, he basically said,
02:29:05
Speaker
we see a pretty clear path. like We think there are some things that we still need to figure out, but we think we know what they are, and we think we can figure them out. and They all kind of seem to be you know landing on like a 2027-ish number. it's obviously not entirely you know The definition of this is not clear. OpenAI has their weird clause with Microsoft where when AGI is achieved, which is the OpenAI's open-air boards, prerogative to determine, then they like don't have to license their technology to Microsoft anymore. So I think in reality, a sort of declaration of AGI may be something that happens more for strategic like negotiating leverage reasons than for reasons of some specific definition having been met.
02:29:52
Speaker
But it seems like, you know, what dominoes are sort of left to fall, ah you know, and and are we seeing them starting to wobble? like As far as I can tell, they're all wobbling, you know, computer use and vision and all all these things are kind of happening at the same time. So the big question then probably becomes like, are we in a short timelines, slow takeoff world, or are we in a super intelligence comes relatively quickly after ah general intelligence and that I really don't have a great idea on. It seems to me like we can you know we have an existence proof of what really smart humans can do and I don't see any reason to doubt that AIs can get to a comparable level. When they do though, they will also have superhuman advantages and that is almost for sure enough to be transformative.
02:30:42
Speaker
And then there's another question, which I have a much less confident take on. Does that create an intelligent explosion that just like gets out of control or, you know, maybe not entirely out of control, but just, you know, totally takes off in the, you know, do we stay on an exponential curve beyond that point or do we kind of level off at like, you know, top of the human, you know, intellect range with also, you know, other superhuman abilities in terms of speed and, you know, energy usage and, you know,
02:31:11
Speaker
ability to recall information and all that stuff. I don't know. I think the you know the leveling off at the top of the human range is enough to create a radically different future.

Governance and Regulatory Challenges in AI

02:31:20
Speaker
and Beyond that, you know my there's there's a reason they call it the singularity, I guess. I mean, if you if you have an AI with the same kind of raw intelligence as the smartest humans, and then that AI can run at 10x or 100x or 1000x the speed,
02:31:36
Speaker
That's already some some very strange creature, and it's debatable whether that would qualify as superintelligence, but you can make that argument, I think. That speed alone is extremely important for for your ability to have an an effect on the world.
02:31:52
Speaker
So you've laid out for us all of these amazing advances. I think one area in which we haven't seen as much innovation is the area of governance, and both kind of self-governance from the corporations and governments responding to risks from AI and and potential harms from AI and so on. What what is your outlook in terms of governance?
02:32:20
Speaker
Yeah, time is short. I mean, I think anthropic put it pretty well in a recent ah piece where they basically said the the window of opportunity may be rapidly closing and governments probably need to get their act together in the next 18 months. I think it was the time frame that they gave to create some new rules or else the situation may kind of get away from them. and and irreversible way so i don't uh in general if i don't know what to think i'll you know adopt the point of view of entropic leadership and i think that's a pretty good jumping off point you know we've we've seen like a lot of experiments in sort of exotic self-government from companies opening obviously most notably
02:33:00
Speaker
that doesn't really seem to have worked. It seems like we basically have Sam Altman in charge there now. and you know It's not clear what they are really thinking. or like you know I do believe that they're sincere in wanting to create a AGI for a benefit of all humanity. I don't know what risk they are willing to take to create that. you know and and it's If their inner ah Thinking is like, hey, if it's an 80 20, you know, it's worth rolling the dice. I don't know. You know, it right now it does not seem like they have a a plan that they believe will work. Right. They had the super intelligence team. That's been dissolved and gone away. It hasn't really been replaced by anything other than just more general assurances that like will continue to make a safe. And of course, we really care. I'm going to work hard on this. But we haven't really seen a.
02:33:46
Speaker
clear sense of how exactly, how will you know, you know, what risk are you willing to take to get there? Like those questions are very much unanswered. And my view is that the government should do more. And I say that as a basically lifelong libertarian who, you know, mostly would like the government to get out of, you know, a lot of things that it's it's currently got itself inserted into.
02:34:12
Speaker
Suffice it to say, I'm not somebody who's wanting the government to come in and solve all the problems in society, but this one does seem like there's a role for government and yet the governments are kind of not sure what to do, right? they Don't understand the technology very well. They're behind and you know trying to catch up. But it's a challenge. I do this basically full time. And that is another thing that is kind of tipped is like I can no longer keep up. You know, I I know I'm missing things. Two years ago, I felt pretty confident that if something really mattered, I would see it today. I don't have that confidence anymore. Two years ago, I felt like if something really mattered, I would even read it. If it really, really mattered, I would get around to reading it. Now I'm like,
02:34:53
Speaker
I can't, you know, I have to satisfy myself with the tweets on a lot of things because there's just just not enough time. You know, the the number of papers is going exponential. The number of researchers is going exponential. Number of startups, number of products, you know, all these different dimensions are are roughly following a a similar trajectory.

AI's Potential in Routine and Innovative Tasks

02:35:11
Speaker
So nobody can really keep up with it all. It makes it it's very hard to have a holistic view and or an integrated worldview of what's happening.
02:35:20
Speaker
And you know governments obviously are great at that in the first place, right? So I think they are they have an impulse to do something, but what that something is is not entirely clear. The influence of the US versus China rivalry is like definitely pernicious. And and you know one of the things that has kind of made me most concerned over the last year has to see has been to see Sam Altman switch as far as I can tell from a previous position of We don't know what China's gonna do and too many people think they do and you know try to justify their analysis on an analysis of China. And I don't think that's a good way to go to now putting out op-eds that say it's Western AI or Chinese AI, there's no third way. And I don't like the sound of that at all. So we're in a tough spot. I think as much as we might like to see governments come in and do something good, I don't have a ton of confidence that they're going to
02:36:17
Speaker
for all the usual reasons, plus the just extreme difficulty, plus you know the sort of arms race, you know competitive dynamics, all of that is really hard. Maybe we want to end this by talking about giving kind of like a big picture overview of where we are with AI, both in terms of progress and in terms of the safety of the technology as a whole.
02:36:44
Speaker
Good question. Let me see how I can do. So progress-wise, all this stuff is kind of a Rorschach test. People are looking at the exact same evidence and coming to very different conclusions. And one wonders how that can really be.
02:36:59
Speaker
I think it is pretty objectively clear at this point that the frontier models are closing in on expert performance on routine tasks. That's always my standard way of framing it. And each word is doing real work there. Of course, you know frontier models, like you don't get this from you know llama 8B routine, meaning like there is data for the models to learn from, examples that they can follow.
02:37:26
Speaker
and task meaning like it's relatively finite in scope. It's not yet something that is job sized, right? So I think that's a pretty good account of capabilities with the latest O1 models. You might even say that they're starting to exceed expert performance in some of those routine tasks.
02:37:43
Speaker
And we're seeing flashes of these eureka moments where models occasionally figure something out that people don't necessarily know coming in. I think that is a huge space to watch, right? Can the AIs you know come up with scientific hypotheses, for example, that are sufficiently likely to you know to bear fruit that they are worth investing the time and resources to do the offline wet lab or real world experimentation. We do see flashes of that. There was just a ah report that came out in just in the last week, I think, that showed that in a material science lab,
02:38:21
Speaker
when the scientists were equipped with a model that could take in requested or required material properties, it could then spit out candidate chemicals, you know compounds, substances that it predicted would have those properties and then they would go and you know actually synthesize that and and measure the properties to see if it was right. and That did meaningfully accelerate the rate at which new successful materials were discovered and patents were filed and so on by like tens of percentage points, not just a couple of points and also not a multiple. It was you know kind of in that middle ground of depending on exactly which measure, 20% to 50% faster science. You're talking about the new paper from MIT here, Aidan Toner Rogers. Yes, I think that's the one. I'll link that in in the show notes for for people to dig into then.
02:39:10
Speaker
Somebody said, it I couldn't substantiate this myself. It might have just been somebody guessing online, but somebody said, this is bullish for 3M, which is a big American, you know they make like post-it notes and all the sort of ah specialized substances, lots of medical devices, you know all these sorts of things that have these like, extremely well crafted adhesives. They make you know a huge, huge, huge number of products. so there are good if I don't know if that is actually the company that was being studied, but if it was, it's a it's a pretty big base in terms of the number of new things that they're developing all the time and the number of new patents. So would it would be a good place to study that sort of thing. So that's like really you know the the kind of next frontier is, okay, we have a pretty
02:39:54
Speaker
Consistent ability if we're willing to put in some work to get the models to do. Expert level work on stuff that is pretty well tried ground. Can they go beyond that and do things that are genuinely new? Can they come up with insights that we don't currently have flashes of that right now? um You might even call it sparks of AGI, but I wouldn't. definitely I wouldn't say it is the norm. And when I advise people on like practical AI implementation, if they're not you know doing deep study and you know they're not experts in AI, I typically say, don't even try for that. you know just Just go try to automate something that you want to not have to do anymore in your daily life. You can probably be successful with that. But don't think that it's going to like take over for you when it comes to strategy or big picture, etc et cetera, et cetera.
02:40:46
Speaker
Because that's like, again, flashes and hard. you know People put a lot of work into figuring out how to make that stuff happen. It does seem like other modalities are one big way that that can happen. Notably, it's not when they use this model at this company, whether it was 3M or whatever, it was not a language model. It was a model that was specifically trained on all of the data relevant to the task of guessing you know what ah chemical would have what properties.
02:41:13
Speaker
So special purpose systems you know in any number of ways are typically required to get to that sort of thing. Another big trend there is, of course, automatic verification. If there is some way to do automatic verification, we see this in like math proofs. there are you know Increasingly, every so often you get a report of, oh, this language model was able to solve a math problem or improve on the state of the art solution to a math problem. And how did it do it? Typically, it makes a ton of guesses.
02:41:40
Speaker
And it's an objective score in a closed loop where you know some theorem proving software or whatever can actually come in and and evaluate quickly. Simulation can also do that. We see that a little bit in robotics where it's like one of the, in fact, I used to say no Eureka moments and now I have to say precious few Eureka moments because of a paper called Eureka that came out of it where they used GPT-4 to write reward functions for robotics training purposes.
02:42:08
Speaker
And I always stop to emphasize there that like that's not something that you know the average person on the street can do. That's like already, a you know kind of by definition, an expert task. Writing a reward function to indicate to the system, like are you getting closer to operating this robot hand effectively or not, is a non-trivial thing. But the key that allowed that to work is that they were then able to take the reward function, write it in simulation, and see if it actually worked.
02:42:35
Speaker
So having some way to get quick feedback and then power like a lot of guesses with language models is another way that we're starting to see some of these breakthroughs. And people, again, see this very differently. that Some people are like, oh, well, you know that's not really that impressive. It took a million guesses for the AI to come up with a better solution to the math problem. And you know that might have cost thousands of dollars or whatever.
02:42:56
Speaker
And I'm always kind of like, well, zero humans had come up with it. You know, in some of these cases in in all of human history and you know, what were their salaries, you know, sort of people that were potentially trying and and not, you know, coming up with that answer.
02:43:08
Speaker
So it's very strange, but I think that's a decent summary of the overall capabilities and definitely the you know the frontier to watch are those additional eureka moments and other non-language modalities as they get developed into specialist systems and also kind of folded back in and and connected with language models. There's definitely a lot of like the language model sort of directing you know the ah specialist model in how to, you know what space to explore and and kind of tying all that together.

Critiques of AI's Problem-Solving Methods

02:43:36
Speaker
do Do you think we should generally care about how AIs solve problems? So there are these types of complaints that you mentioned. Oh, it's just trying a bunch of of different utility functions, say, and and testing and simulation, whether they achieve the goal, the specified goal. and And these, I'm not sure we should take such complaints that seriously. But on the other hand, maybe we can learn something about future progress by considering whether a given model is actually functioning in a general way or is is is kind of just using brute force to solve a problem? Is there something interesting going on on that frontier? Yeah, that's a really good question. Again, I think you could see it a lot of different ways. I think it definitely still counts and you know should not be dismissed when it comes to just an understanding of
02:44:24
Speaker
how powerful AIs are overall are getting and how transformative they're likely to be because they are still typically cost competitive with humans even when you're running you know millions potentially of generations to get something you know that's a that's a little nugget of value. When it's on the frontier, you know that frontier progress is not not cheap to come by through any known means. right so it it doesn't It doesn't mean that this isn't valuable because it's more expensive than running chat GPT or whatever.
02:44:54
Speaker
I do think it really matters when it comes to you know the big picture control problem. and There again, you can kind of break this down so many different ways. Narrow systems that are purpose built for a particular domain, you know you don't really expect an alpha fold to in addition to predicting structure or whatever, like also give you paragraph reasoning, it's just not meant to do that. And so, you know, you can kind of take the good with the bad that the good, if you're thinking like control and big picture safety questions, you would maybe like to have that paragraph reasoning. But you can also take some comfort in the fact that like,
02:45:31
Speaker
these narrow domain specific systems, like they only do what they do, right? There's not much chance of alpha fold all of a sudden, you know, taking over computer networks or, you know, breaking out of its environment and, you know, self replicating in the wild. but Those sorts of things seem to be limited to much more general systems.
02:45:50
Speaker
And those like can give you a certain account of what they do and how they're solving their problem. The new O1 series of models has this like internal chain of thought, which OpenAI is not sharing with the rest of the world. but which And they've given multiple reasons for that, one of which, and certainly you know a big one, would be competitive reasons. They don't want people to take those reasoning traces and then go train their own models on them. Another one is that they feel like you know they want to study it deeply and you know watch out for weird stuff and so on and so forth. So interesting you know mix of reasons that all leads to we don't get to see those reasoning traces. But I do think for those general purpose things, you know you really do want
02:46:30
Speaker
them to be faithful in terms of their reasoning. you know What they're actually doing in the end, you want to be a direct result of the reasoning that they're demonstrating as opposed to, and sometimes you know there has been research that has shown that it's not always super faithful, especially if you kind of set it up in weird ways where you sort of tempt it to answer one way, but then kind of force it to give a reasoning and then still, you know like for example, if you just give multiple choice and all the answers are A and then you have the reasoning and it like gives an A. it's like It seems like it's giving an A because all the previous ones were A. It's recognizing that pattern, but it doesn't necessarily say that in its reasoning and it sort of gives some other explanation and you're like kind of left to wonder, geez, did it you know was it telling the full truth? or you know And is that even the right way to phrase the question is is a little weird, but
02:47:16
Speaker
you would definitely want to see as much as possible. You would want to be confident that the rationale given is the actual rationale that led to the answer as opposed to some you know post hoc generalization or even worse you know if you're worried about deceptive AIs, then you would worry that they might be outright lying to you. And we don't really have a good handle on that. I think the you know there's a lot of different ways that people are approaching the safety problem, of course. Right now, we're in this kind of, I hesitate to use the term arms race, but I think that's not a bad
02:47:51
Speaker
way to think about it where the models are getting more and more capable and they're also you know just kind of as they go through this post training process. They're also getting more and more post training on what to do and what not to do and that's like gradually working better the rate at which they do bad things you know when prompted is falling.
02:48:10
Speaker
the rate at which they refuse to do benign things is also falling. And so OpenAI i can say, look, our new O1 model is not only our most capable, but it's also our most aligned as measured by how often it does something we don't want it to do or you know refuses to do something it should do. And that's great. And that seems like alignment and capabilities are the same to a certain degree.
02:48:34
Speaker
But then, of course, one also wonders, you know, does that flip at some point? Are there threshold effects that could be hit where, you know, especially if you have some sort of deception? Of course, I think your listeners will be familiar with the worries about RLHF where because we are not fully reliable and consistent evaluators of language models or of ah of anything, the language models have some reason to start to model reality distinctly from modeling the human evaluator. And then that divergence opens up at least some potential for deception. And you know to what degree this will pop up naturally is unclear, but it has been
02:49:15
Speaker
engineered. Anthropic has done some interesting studies on things like sleeper agents, where they specifically train a model to, you could call it deceptive or not deceptive. In their one paper, they basically created a model that was harmless in 2023. This was done in 2023. And then harmful in 2024. And they just trained it to behave differently depending on what date it was told it is.
02:49:40
Speaker
and then you could say, okay, well, that's fine, but they engineered that. How worried about it should we really be? The next layer was they then applied just standard safety post-training processes to that model and said, will our normal processes remove this bad behavior if it already exists? And the answer was no, not not automatically. So, okay, you know again, it was engineered, but if this were to pop up organically,
02:50:08
Speaker
would the standard practices eliminate it? we don't There's good reason to think that it would not at this point in time. And all of this kind of leads us to, I think, just a general state of still pretty radical uncertainty. like that you know Things that did not happen over the last 18 months, like we didn't get the deep fakes in the election nearly as much as we thought we would, and we also did not really get any clarity on really almost any of the big picture questions.
02:50:36
Speaker
all the lab leaders are still saying we're going to have AGI in the next couple of years. There's also like an and yeah a certain discourse that's like everything is you know flattening off and scaling is kind of petering out and you should expect that because We're getting to human level performance, and like how much further could you really go when you're trained on human data? I don't think that's really going to hold. I do think we're going to find other ways to continue to teach these things other than just imitating us. Say more about that, Nathan. That's interesting. What other ways might we teach AI?
02:51:06
Speaker
anything where you can get actual real feedback, you know, what coding is ah is a great example of this, like does the code run? Does it achieve the objectives? I think we're going to probably see a lot more of that kind of automated thing. And that'll be better in some ways than others, right? Like we may see poetry kind of flatten off at roughly human level. I'm not sure that we even have a concept of like superhuman poetry or what that would look like. But I would say,
02:51:34
Speaker
That could be a real challenge and it's it's unclear that like getting superhuman at coding will generalize to superhuman poetry, but that's like a pretty good you know contrasting you know pair of of use cases where code, you run the code, you can quickly execute the code, you can quickly get the error messages, you can quickly run all unit tests.
02:51:54
Speaker
and it seems like there's a, anything where you can do a sort of self-play type setup is likely to continue to advance. Math is another one, you know, can you solve this geometry problem or not? We have ways to to determine whether a solution is valid. Poetry, creative writing, you know, being a good cognitive behavioral therapist, like those things are probably much harder to objectively evaluate and the feedback time is definitely a lot longer. you know Did you come up with a good go-to-market strategy for an e-commerce business? Well, we may not know until a year from now, you know and and that may not ever find its way back into
02:52:36
Speaker
you know the the feedback process in the training process anyway so people probably see some divergence but there are a lot of things you know that of of course are fairly objectively evaluated and and a lot of science is headed that direction it's still right now takes a lot of time and effort to run the actual physical experiments.
02:52:55
Speaker
But there is a ground truth there and the labs themselves are becoming more automated and you also have simulation that can sort of sit in between. So it seems quite likely that science will be one of those areas, probably a little slower. The the more amenable something is to immediate programmatic evaluation, the faster you should probably expect consumer superhuman stuff to emerge.
02:53:18
Speaker
the slower or the you know the more resource intensive it is to get that ground truth for feedback, it'll be slower, but it should still be possible. And then in some things where it's like inherently a human aesthetic, it may and may be a different story. you know There just might not be such a thing as superhuman poetry. So yeah, I think that's right that's pretty good summary of that.

Interpretability of AI Systems

02:53:41
Speaker
the you know Keeping these things on the rails or not on the rails, is like it's kind of it's just very unclear whether we can continue to pilot enough examples of what to do and what not to do that will actually get to something robust, and especially as they you know become more powerful and you have these deception worries.
02:54:01
Speaker
It's kind of anybody's guess. That's and a pretty natural bridge to maybe the last thing that I wanted to talk about, which is interpretability, which has had an unbelievable last year and a half in terms of progress.
02:54:13
Speaker
because at least for some people, and I've definitely count myself among them, it seems like unlikely that we're going to get the level of confidence. It might work, but I don't think we'll be sure that it's working even if it is working to just continue to train the systems on do this, don't do this example by example at infinitum. It would certainly be a lot more appealing if you could look inside the systems and say, why are they doing what they're doing? And if we could understand that,
02:54:41
Speaker
then you could hopefully have a lot more confidence that it is not just not messing up or not you know going rogue right now, but that it genuinely never will because you really understand how it works. And ideally, you might even be able to detect it if it were going that direction. So you know i'm I'm not by any means an expert in interpretability. I don't practice it myself. I just try to kind of keep up with what is going on and a lot is going on, you know increasingly tons of different techniques.
02:55:13
Speaker
anthropic is Everybody sort of knows that they're one of the big leaders in pushing this stuff forward. But they have, if not directly inspired, you know certainly the other frontier lads are doing stuff like like this too, OpenAI and DeepMind as well, of course. And I would say, you know lot of it seems like if you asked people a year and a half ago, or even just a year ago, how much progress we would have made by now, and then you showed them the actual progress, my general sense is that most people would be pleasantly surprised.
02:55:43
Speaker
There was a big sense of like, man, this black box is crazy. We really don't have much traction. We really have no idea what's going on. Maybe we'll never be able to figure it out. And then a lot of things kind of you know as a certain number of dominoes have fallen. I think Chris Olat does a really nice, he's the interpretability lead at Anthropic. He does a nice job of describing what he thinks the phases of understanding will be. First phase he describes is just understanding essentially what the models are thinking about.
02:56:15
Speaker
obviously you know anthropomorphizing there and you know running risk of misleading analogy. But you have the weights right in a model. You have the billions of parameters, which are the numbers that are used to process the inputs. And then typically in a transformer, you know obviously the most common architecture today, you have these layers. And then between the layers, you have these sort of intermediate results.
02:56:39
Speaker
and the intermediate results they call activations or it's sometimes called neurons, what is their value and what does that represent is like the first phase and they have made tremendous, tremendous progress.
02:56:53
Speaker
over just the last year in understanding what concepts are active at any given time. And it's actually turned out, and I think this is philosophical implications, too, and probably also reflects that there is a qualitative difference between earlier, smaller, less capable models and today's frontier systems. Everybody's, of course, heard the term stochastic parrot, or at least I'm going to assume people have. Basically, you know, that means like these things aren't really thinking in any any meaningful way. that they don't they're not They're not representing concepts in the way that we're representing concepts. They're just making sort of statistical correlations between tokens and that allows them to sort of create a pastiche of something that sort of looks like it's you know real language, but it's always just kind of an empty imitation in the stochastic parrot interpretation.
02:57:43
Speaker
My sense is that is probably not too far off for early language models, but these days, certainly with the the larger language models with the interpretability techniques that have been developed, I think we can now say pretty definitively that that's not true, and that there is a higher order conceptual representation that is you know gradually comes online through this whole process of training, such that with these big frontier models, we can now look inside and say,
02:58:13
Speaker
what concepts are active as the network is doing its processing. The most famous example of this was Golden Gate Claude, where they put out a version of the model where they had isolated but they say that like millions and millions of different features.
02:58:32
Speaker
And then they took one, the Golden Gate Bridge feature, and just artificially turned that up. And that created the effect at runtime where you went to talk to the model. All I wanted to talk about is golden um and you would have this or Golden Gate Bridge. And so they called it Golden Gate Cloud. So it would, basically no matter what you asked it, give you some answer that had some you know way of working in the Golden Gate Bridge.
02:58:56
Speaker
The way they do that is pretty complicated. It's called sparse auto encoders. That's definitely been like the new hotness of the last year. And basically that is sort of saying, let's say we have a guess you know that, and it is predicated on some, I think, very ah good hunches from folks like Chris Ola and others who, you know I think,
02:59:17
Speaker
Maybe they had more, I mean, they've been at it for years, so I think they certainly have you know a lot of different reasons and experiences to develop this intuition, but it was not consensus you know view that something like this would work, and yet it seems to have worked really remarkably well. Basically, what they they find is that As big as these networks are and for all the billions of parameters, the number of numbers in those intermediate states, you know, you put in an input, you go through a calculation, you have an intermediate output, then you do more calculation, you have another intermediate output. Those are not that big. They may be thousands of numbers. They are thousands of numbers, but they're not millions of numbers. So to fit all of the semantic content
03:00:03
Speaker
into that space of just thousands of numbers, some sort of powerful compression has to be happening. And they call the compression superposition, which is basically to say that the same neuron, the same position in that array can mean different things depending on what other neurons it co-fires with. So if you have neuron one firing by itself and everything else is quiet, that might mean one thing, but if neuron one and two are firing,
03:00:31
Speaker
that probably means something different and if one and three are firing and if one, two and three and you know, obviously you have a sort of exploding exponential space of the combinatorics of these thousands of numbers. The challenge then is, can you untangle that and figure out what pattern corresponds to what concept? And they basically do this by taking, essentially inserting a super wide layer in the middle of a,
03:01:00
Speaker
normal transformer and say, okay, we're going to try to recover normal behavior, but we're going to do it in such a way where you have to project through this super wide layer, like literally millions of neurons. This is like way out of it because there's a compute cost to doing this. So this is not how they're going to run their thing normally on an ongoing basis. But in order to try to separate everything out,
03:01:28
Speaker
Can we take this densely packed, highly compressed thousands of numbers where the same neuron is involved in bunches of different concepts depending on what it's co-firing with? Can we create a sort of clean representation of that by projecting it onto this millions of neurons wide space and then projecting it back in a way that recovers the normal behavior? And they find that basically, yes, they can do this. And it's kind of a two-step process. First is,
03:01:57
Speaker
training the thing so that it mostly works. There is some loss. They refer to this as like the dark matter. you know They've found millions of features, but there are way more than millions of things in the world. The anthropic folks have just noted it or you know just described described this very simply as like,
03:02:13
Speaker
just how many small businesses are there and how many small businesses does Claude know about? I recently went to Brazil and found a Amazon river cruise operator that had one boat and Claude told me about this business by name. Without without searching? Yeah, no connection to the internet. I was just like, I'm going to Brazil. you know I want to stop somewhere else besides I was going to go to Sao Paulo. Where else should I go? It gave me all these ideas. I kind of dug in on a couple of them and it gave me all the way down to specific by name, one boat, river cruise operator. So when you think about how many of those sorts of things there are, you know how many different concepts it has to have an accurate representation of to be able to give that level of detailed response, it's like way more than even 10 million.
03:03:03
Speaker
So they've kind of identified like the most common 10 million and the rest are sort of lost in that process and they they refer to that right now as the dark matter of this interpretability. So you get down to the level of like, you know, in the interpretability, you might get down to the level of Amazon River Cruises you know might be a concept, but that specific boat operator, you know they've not gotten granular enough to separate that sort of thing out. but the two The two phases are basically create this super wide layer that fires in a sparse way so that now the hope is that each neuron has an independent meaning and you can look at, okay, if if neuron one is firing, that means a certain thing.
03:03:45
Speaker
If seven is also firing, that means a different thing. So one and seven kind of mean both of those concepts. And this has to be validated. It's not obvious in advance that it will work, but the second step is actually going through and labeling all those neurons and looking at, okay, now we have this thing. It has, you know, we can run the network through this. It does have some loss when we force it to go through this sparse representation, but it like still mostly works. So now can we look at as we run stuff through the network,
03:04:15
Speaker
what causes each individual neuron to light up and develop some conceptual understanding of that. And they're of course then you know applying the language models to that. This is known as auto interpretability and it's like taking, it's you know it's saying, okay, Claude, here is a ton of different examples that caused this one neuron to light up.
03:04:38
Speaker
how would you describe those examples? How would you sort of summarize what conceptually is common across all these examples that is causing that thing to light up? And it's like decent at that. you know I think we should be, and you know the anthropic folks are are careful to say like,
03:04:57
Speaker
We don't want to delegate all this understanding to AI. It is important that we... It went back to your question. It is important that we understand it, but we hopefully... When you have millions of concepts, you're going to need help. And so it is interesting to see that the language models so are already quite useful in terms of labeling the what they call the feature that activates a given neuron in these like super wide sparse auto encoders. Ton of jargon in this space.
03:05:22
Speaker
Anyway, that's all phase one. It's gone really well. I think most people would have said they wouldn't be as far along as they are now, even just a year ago. And then the next step would be understanding the circuits or the algorithms that actually process the information right now we can kind of go in and say okay this concept is active that is useful on its own right because that's how you can steer behavior they call this activation steering and tropic is piloting this i think with some users or it's in like limited beta testing where essentially you can make your own golden gate
03:05:56
Speaker
Claude, but you can make kind of whatever you want Claude within some bounds, perhaps. ka Can you get to something something very interesting for safety with only an understanding of of the concepts involved? So could you get to, for example, an honest model by only having interpreted the concepts? Or would that require interpretation of the underlying algorithms?
03:06:19
Speaker
Well, it seems like it's a big step in the right direction. I mean, the you could both steer behavior, as I demonstrated with Golden Gate Claude, and you can also run detection algorithms, which you can do even without this whole mess. You can just run kind of general classifiers and say, here's the pattern of activations, and this is good or bad, and then train sort of an auxiliary network to classify the activity within the main network as harmful or not harmful. and This is sort of a variation on that where you could say, okay, if this feature lights up, you know that's a concern. right If this feature is bioterrorism or whatever, right to take the the number one ah concern, if you could isolate a bioterrorism feature and you know presumably that does come out in like
03:07:10
Speaker
if not the first million or 10 million at 100 million features, like at some point you're gonna be able to find that. And then you could say, if that ever lights up a board, you know, just like stop immediately what you're doing. And that could be quite powerful. It certainly doesn't answer everything. It doesn't certainly doesn't necessarily address open source, but it at least gives you some ability to have, I would say, kind of like order of magnitude. All these things seem to be like order of magnitude questions, right? It's like it's not perfect by any means, but
03:07:45
Speaker
If you add up you know multiple interventions that each kind of give you an order of magnitude improvement, then eventually you do start to get somewhere. And this feels like the kind of thing that can give you an order of magnitude or like a couple orders of magnitude of confidence that if we see, and of course, you know in a production system, it wouldn't just be one feature you'd be looking for, but you know a whole laundry list. If we see any of these things lighting up, then some you know some different process is going to happen. Maybe we just abort and say, we can't talk about that. Maybe we you know flag it for later classification. Obviously, you could have tiers of you know sensitivity around these features. That's basically possible today with
03:08:27
Speaker
layer one, or or level one of interpretability, which is just understanding what are the concepts that are active at any given time as a network is processing stuff. and Then circuits, they're kind of starting to work on now, it seems. Anthropics just put out an update on what they call cross coders.
03:08:47
Speaker
which is looking at how concepts relate to each other. And I um might be over interpreting their work a little bit to say evolve as they go through the different layers of the network. But this seems to be like clearly a first step toward understanding not just what concepts are active, but how are those concepts being processed? How is that evolution happening as you go through the entire process?
03:09:13
Speaker
and this you know For context, they just put out, I think a little more than a year ago, but not much more than a year ago, their first toward monosomanticity paper, which monosomantic, polysemanticity, monosomanticity, polysemanticity is the compressed version where concepts are represented by some subset of you know the thousands of activations. Monosomanticity is, now we've got millions of neurons and hopefully each one is is lighting up for a specific single concept.
03:09:44
Speaker
That also has like a really interesting geometry and sort of almost like fractal nature to it where they have found that the more you can sort of do like feature splitting. like The more neurons you give the thing, the more granular the features get. and that That seems to happen in like a pretty intuitive way where you might see, there's so many different examples of this, but you might see like a Spanish neuron, but then you might start to be like, if you gave more room, you might have different flavors of Spanish. You you might have like Argentine Spanish or you know Spanish Spanish or Mexican Spanish. And it just depends on how many concepts you are willing to do the compute to try to separate out. You see like the the wider that space gets, the more granular the individual features get.
03:10:28
Speaker
That was just a year ago that they put out the very first thing with like a very small model, kind of a toy problem, a deep study of it that had some real insights. But when they put that out, it was like, okay, this is cool, but you know this thing has how many parameters? And it was not many. And you're going to ramp that up you know to something that has orders of magnitude, more parameters, like your actual cloud, but you, you know, that you run as a product. I don't think it was obvious that they were going to be able to do that at all, ah let alone do it as fast as they did. And, you know, here we are a year later, and they basically sort of have with this crosscoder thing, seemingly a first
03:11:05
Speaker
result that is getting beginning to get at circuits. How are the concepts across different layers related to each other and how do they evolve through the the computation process? So if we're optimistic, a year from now, you might we might come back and say, hey, they did it again. you know And now we actually have a really good account of how concepts are our being processed, how they're relating to each other, you know what the not just what concepts are active, but how those concepts are coming to be active and and how they're ultimately leading to outputs. And then the third level would be understanding the thing at like a high level as a whole. And it's ah not even exactly clear what that means, but it's sort of just saying like we kind of need to see the forest for

Biologically Inspired AI Safety Approaches

03:11:46
Speaker
the trees. you know
03:11:47
Speaker
have similar problems in biology. We can read the DNA sequences. you know We can look at what's being expressed, what proteins are being ah transcribed and and produced within a cell at any given time. But it's not yet clear how does that translate to like organism level health and and wellness. you know And so there's a question similarly, like even if we can sort of describe a lot of circuits, does the leap to a high level understanding of what the system is going to do at any given point in time, does that like just fall out of that or does that take like another you know set of novel insights? and I don't think that's entirely clear, but it's you certainly can't guess that you certainly can't bet that it will come for free.
03:12:26
Speaker
yeah and And I guess ultimately what we want is to ask can these kind of broad open-ended questions like, as a user, should I trust this model? As a developer, should I deploy this model? Something like that. And and for those questions to be answered in a fully satisfying way, we would need to understand the model as a whole and understand its behavior. So we would we would have to have the third level of interpretability. Is that right?
03:12:53
Speaker
Yeah, I think that that would be the the dream, you know, we could all sleep well at night and there could be other solutions that could that are as yet not invented, you know, that that could also work. I guess maybe think about it in two different ways. There's like coming up with a scheme that could work and then there's satisfying yourself that this scheme actually will work and those are not the same thing, right? You might find that if we just you know come up with the right loss function or you know penalize the wrong behavior enough or whatever, that maybe that gets us to some robust safety, but we don't know. We don't know if we're on track for that or not. you know I've recently did an episode with a couple of guys from AE Studio where they are creating biologically inspired approaches to safety, starting with observations along the lines of human empathy
03:13:46
Speaker
seems to be in a mechanistic way grounded in reuse of certain cognitive machinery to think about others versus to think about oneself. right The reason we have like you know sympathy pains or empathy pains, the reason we you know kind of have a good theory of mind is because we're thinking about ourselves with literally the same processes that we're using to think about other people with obviously some differences, but a lot of overlap.
03:14:12
Speaker
And so they're trying to bring that into AI systems and say, you know, maybe we can make these things inherently pro social by trying to push the way that they think about or process information about themselves versus other entities, push those as close together as possible. Probably still, again, have to be somewhat different because that's necessary to be functional, but push those as close together as we can.
03:14:38
Speaker
Maybe pro-sociality falls out of that. There has been some evidence that that could be true. They've done some interesting, like, you know, against small scale toy problem agent, you know, deceptive agent that becomes non deceptive when they do this sort of training. So it's like very promising, but.
03:14:54
Speaker
Let's say you do that, you know, and then you're like, OK, cool. You know, we've run X test and Y test and it seems good and we've got this theory that says it could be good. But if it's going to be like a super intelligence, you know, you'd want to be a little more confident than that. And that's where hopefully this sort of looking inside these things could really make a difference. I've also you know one one aside. We didn't talk too much about other architectures or state space models in particular, which is a different kind of architecture. But one worry I had when those originally dropped was, and I think they're probably at this point, best understood as efficiency mechanisms more than new. They have some advantages and capabilities, but not like super decisive, but what they can do is save you a lot of compute and hybrids are the way that it's going. you know People are not moving past attention entirely, but they're creating hybrid structures that have state space and attention mechanisms.
03:15:46
Speaker
in some sort of you know sequence. I was really concerned that we were going to have to kind of start over on interpretability because I thought, geez, if you have a fundamentally new architecture that's processing and processing information in different ways, should we expect that all these techniques that we've developed to understand the transformer will work for these other architectures, I kind of thought didn't seem super likely. And to my very pleasant surprise, those problems have not been nearly as severe as as thought, I still think you you know should expect that there's going to be some differences and some surprises in other architectures compared to the one that is, you know, most well studied today. But the techniques do seem to work. um And they have been applied on some interesting sample problems like the
03:16:35
Speaker
Othello GPT project was one where there was like an original Transformer version of it where the Transformer learns to play the game Othello and it's just looking at like sequences of moves and it's kind of like a chessboard. You know don't have to know the game too much. I don't really know the game much but What was remarkable was that just from linear sequences of moves, the transformer learned that this is a 2D game and sort of started to represent 2D structure just based on these like notational sequences of moves. And they were able to, you know through interpretability techniques, go in and see that and kind of understand, OK, this is how this thing is working. We can see this representation of the board
03:17:16
Speaker
even though it was never even told that there was a board. It turns out same basic techniques applied to a state space model that was also trained to play a fellow basically also worked you know and came up with a very similar it sort of is you know representing things in a similar way and they were able to locate and and sort of understand that representation in a similar way. So I am very bullish overall on interpretability and it feels like the one thing that, and again, it's kind of a race, right? Cause like, and it's, it's a weird race too, because some of these things you might, somebody might say, well, maybe we should have stopped at GPT two, done all the interpretability work and then continue to scale. I don't think that unfortunately is really viable because I just don't think GPT two is like qualitatively the same thing as a GPT four plus, you know, it would almost be like,
03:18:06
Speaker
you know mouse i mean the old joke about health results. right like You should always append in mice to the headline and there's probably something similar where you know you could do a bunch of things in GPD too, but does that really generalize to the higher level things? It doesn't seem like it does in and ah probably a lot of important ways.

Conclusion and Future of AI Interpretability

03:18:27
Speaker
So these things do have to be kind of invented and, you know, are created in order to be studied. And how quickly can the interpretability catch up to new architectures, new capabilities as they come online does seem like a big question. But the my update from the last year has been that it's is just toward much more optimism that all the many different techniques that have been developed actually will work, even if somebody or at least have a pretty good chance of working, even if somebody comes up with a new architectural innovation that you know they weren't designed originally to handle. So that's good. But yeah, we're all we're all headed toward this. like you know The crystal ball gets real foggy, not too far out. And I think this is kind of true for most people. It's like, what trend wins you know is the current question, or can there be some breakthrough that can really resolve things? Right now, the trends are obviously more capabilities, more alignment,
03:19:24
Speaker
as measured and more understanding. But those measurements and those understandings are definitely not complete and don't really make any guarantees about what sort of behavior we're going to see. So can they get there? Can they get over the hump in time to have confidence when really powerful things are starting to happen? That's not clear. And it definitely remains a space to watch that we can maybe check back in on again in a year or so.
03:19:54
Speaker
Let's hope so. It's going to be interesting to see perhaps also a bit a bit scary. Nathan, thanks for chatting with me. It's been great. Thank you, guys. A lot of fun.