
Episode 44: Daniel Sternberg: The State of Artificial Intelligence

S3 E44 · CogNation

We talk to Dr. Daniel Sternberg, head of data at Notion Labs, about how to understand new developments in AI (artificial intelligence) like DALL-E 2 and ChatGPT. Topics include the possibility for general intelligence in AI, similarities between human cognition and generative AI models, and the potential for sentient AI.

Transcript

Introductions and Backgrounds

00:00:07
Speaker
Hello and welcome to Cognation. I'm your host, Rolf Nelson. And I'm Joe Hardy. On this episode of Cognation, we have a very special guest, Dr. Daniel Sternberg. Hi, Daniel. Hi.
00:00:20
Speaker
Nice to be here. Yeah, welcome to the show. So Daniel and I and Rolf as well have all worked together in the past. We worked together a little bit when we were at Lumosity some years ago, and Rolf was involved in that work as well as a collaborator. So we've known each other for some years.

Cognitive Psychology and AI at Notion

00:00:42
Speaker
Dr. Sternberg received his PhD in cognitive psychology from Stanford.
00:00:46
Speaker
where he studied human learning processes by combining behavioral experiments with computational models of learning and decision making. He currently heads up data at Notion, and we wanted to talk to him today a bit about artificial intelligence and where we see artificial intelligence going, and particularly around the idea of

Future of AGI and Philosophical Implications

00:01:10
Speaker
What is the future of artificial general intelligence? When do we think that artificial general intelligence may come about as a thing? There are probably a lot of questions that fall from that, and interesting aspects. And what I hope that we can do in today's conversation is explore this from the perspective that
00:01:33
Speaker
all three of us are cognitive psychologists by training. And Daniel in particular has been working a lot in the data field over the years and has pretty good knowledge now of where we are with things like large language models and other forms of generative AI. So I'm hoping that he can contribute there. I also know from many of our previous conversations that he has an interest in the philosophy of this, which is kind of the intersection that we're often working in.

Philosophy of AI

00:02:03
Speaker
So Daniel, thank you for coming on the show. Yeah, thanks for having me. Yeah, I'm definitely interested in the philosophy side of things. I'm opinionated, too, so we'll see how that goes. Perfect. And we can't wait to hear your opinions about all this stuff, too. We like opinions.
00:02:18
Speaker
Yeah, great. Yeah, those will definitely come out along the way. It's an interesting space, and it's moving really quickly too. And so my opinions are also going to shift over time this year, I think, based on the new advancements that are coming out. So hopefully this will be a really interesting conversation. Yeah, absolutely. Yeah. So I mean, kind of what I thought might be a good
00:02:40
Speaker
organizing question for the conversation is: when do we think that we will have an artificial general intelligence?

Defining AGI

00:02:49
Speaker
Like when will AI achieve artificial general intelligence? And I guess to answer that question, you know, we need to think about what is artificial general intelligence
00:03:00
Speaker
and what is perhaps general intelligence and maybe what is intelligence, all these questions probably need to be answered. But before we get into that part, I think what would be interesting is to kind of talk a little bit about why are we talking about this

Recent AI Advancements

00:03:17
Speaker
today? Today is an interesting moment in the history of artificial intelligence where we've made some really
00:03:27
Speaker
sort of surprising advances recently. I'm thinking particularly about things like ChatGPT, but also DALL-E 2 and other things, the whole OpenAI suite, and other folks as well, but OpenAI, from a demonstration perspective, is especially exciting and surprising.

Large Language Models and Their History

00:03:46
Speaker
So I thought maybe, Daniel, you could start by helping us understand: what is ChatGPT, what are large language models, how do they work, and why is this interesting now? Yeah, for sure. So one of the things that I find most interesting, at least about
00:04:05
Speaker
ChatGPT and about large language models in general, is to maybe go back a little bit in time first and talk a little bit about the history of these models over the course of many decades. So some people might say, or the way I like to think about it, is we are in the third wave of neural networks. And by third wave, I mean there have actually been three periods of time in history, going back technically to the 40s, the 1940s,
00:04:32
Speaker
around neural networks. So the first wave of neural networks came in the 40s into the late 50s. Frank Rosenblatt invented something called the perceptron in the late 50s, and in the 40s there were some folks, McCulloch and Pitts, who were trying to build an electrical version of something that looks kind of like a neural network. And this was a wave that fizzled out in the early 70s
00:05:01
Speaker
after some of the limitations of those models were found. And without going into much detail there, there were certain types of functions that they could not compute; famously, a single-layer perceptron cannot compute XOR. And so they were more or less abandoned for a period of

Deep Learning and Neural Networks

00:05:16
Speaker
time. The second wave of neural networks started in the very late 70s and really took hold in the 80s, running throughout the 80s into the early 90s,
00:05:26
Speaker
with the discovery of the backpropagation algorithm, which is actually a component of many neural networks that we're even still using today. And that wave was much more successful. There was a ton of research spawned out of it. Much of my exposure to neural networks
00:05:48
Speaker
when I was in college as an undergraduate and in graduate school related to those models, which were able to, in theory, approximate any function, any mathematical function. But in practice, they were very, very hard to train to do the types of complex tasks that we started seeing neural networks doing a decade ago, because of limitations, both algorithmic limitations and,
00:06:17
Speaker
as importantly, or perhaps more importantly, limitations of computational power. The first examples of the current wave of neural networks and deep learning models were coming on just as I was finishing grad school. My last few years in grad school were when they were beginning to become popular
00:06:41
Speaker
and beginning to be shown to outperform other machine learning models at specific tasks. So I remember in particular a model developed by Geoff Hinton, who was a professor at the University of Toronto and has been at Google Brain for some time, that beat the best model at handwritten digit recognition, basically on a dataset of
00:07:08
Speaker
I think it was basically from the USPS, like zip code recognition. There was a problem that they needed to solve, and it beat the best models in its class. I remember seeing him give a talk, and he said something like: these techniques that we've used to train these networks layer by layer have made it 10 times faster to train these models. Something like this, I'm paraphrasing.
00:07:35
Speaker
But computers in that span of time also got 10,000 times faster. And that made a really huge difference. And from there, we've seen this really steady, since the late aughts, 2006 or so, when a number of simultaneous papers came

Transformer Models and Language Processing

00:07:51
Speaker
out.
00:07:51
Speaker
Over the last almost more than 15 years now, we've seen this rapid improvement in lockstep between the computational power improvements and the types of neural network architectures that can best take advantage of that computational power. What really unlocked the current wave of LLMs, the most recent one over the last, I would say five years,
00:08:21
Speaker
was the development of a type of model called the transformer. For the first decade or so of deep learning, when it came to language models, these models were generally based on something called recurrent neural networks. And in all these cases, whether it's a recurrent network or the GPT-3.5 model behind ChatGPT the product,
00:08:50
Speaker
they're all based on this idea of trying to predict the next token. A token is like a word, but it's not exactly a word; it's basically a vocabulary of commonly occurring letter strings. And they're all trying to do that, but for the first decade or so, these models were recurrent neural networks, where they were trying to actually
00:09:15
Speaker
predict a sentence token by token. You can think of it as: you have one sentence, you're predicting the next token each time, and then at the end you go back and you change the weights in this neural network based on all of those inputs. What transformer models do differently, the kind of
00:09:37
Speaker
intuitive version of them, is they actually do this all in parallel. And they are using attention, basically attention gating, to focus on the next element in the sequence, but they can do it all at once. And the reason that is really powerful, without going into a ton of detail about it, is that it's
00:10:00
Speaker
way more efficient to train. You can train it using GPUs, which is where we get a lot of the computational power from these models. You can train them way faster because you're training large amounts at once. And so they enabled models like GPT-3 to be built, which could never have been trained using LSTMs or other types of recurrent neural networks that were popular before that.
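To make that "attention, all at once" point concrete, here is a minimal sketch of scaled dot-product self-attention in NumPy. This is our illustration of the mechanism being described, not code from any real model; the sizes and names are made up, and a real GPT-style model would also mask future positions so each token only attends backward.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X holds an embedding vector for every token in the sequence.
    # A single matrix multiply scores every position against every other
    # position at once -- no step-by-step recurrence, which is why this
    # trains so efficiently on GPUs.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # attention strengths
    return softmax(scores) @ V                 # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                    # toy sequence: 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (5, 8): one output vector per token
```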

Limitations and Inspirations from Human Cognition

00:10:27
Speaker
It would have been so time-consuming and computationally expensive to do it any other way. And so what this has meant is they've been able to radically scale up the
00:10:38
Speaker
size of the models and the amount of training data that they can be trained on over the past five years. So first off, stepping back: ChatGPT is technically based on a model called GPT-3.5, which is basically GPT-3, a model that came out, I want to say, two or three years ago, with some additional tuning for specific types of tasks.
00:11:06
Speaker
And GPT-3, just to give a little bit of background, has something like, and this may not mean much to people, 175 billion parameters, which are basically the weights in the model. That's a lot of weights. I was reading somewhere that they need approximately 700 gigabytes of memory just to store all of those weights. And it was trained on, oh, I had this somewhere.
00:11:34
Speaker
It was trained on many terabytes of data, let's put it that way. You can't just run this model on your computer. You cannot just run this model on your computer. It will not work. It's not open access. It's not open source. OpenAI developed this model. They own the model.
00:11:54
Speaker
They are running the model on, I assume, very large GPU-based clusters with a lot of memory. There's a lot of interest in this field, I know, in figuring out how to smartly prune the models, so you can take a trained model and run a less memory-hogging version of it that mostly maintains performance.
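The 700-gigabyte figure follows from simple arithmetic if we assume the weights are stored as standard 32-bit floats, which also shows why lower-precision or pruned versions are attractive:

```python
params = 175e9              # GPT-3's parameter count
print(params * 4 / 1e9)     # 700.0 GB to hold every weight as a 4-byte float
print(params * 2 / 1e9)     # 350.0 GB at 16-bit (half) precision
# pruning instead removes weights outright, shrinking the count itself
```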
00:12:23
Speaker
But I know that's a field of active research. I wonder if, well, we can go back to some of this stuff, but I wonder if we can start thinking a little bit about how these models were originally inspired by biological neural networks. I mean, it was Hebbian learning, and Rosenblatt took that kind of stuff up and thought about how to make a digital version of it.
00:12:50
Speaker
And some people care about the similarities between neurons and neural networks, and some people don't. If they work, they work, right? But from your perspective, you have a good understanding of both cognitive architecture and neural architecture and these neural network architectures. I mean, you said pruning, and that sounds like a neural term, right? Neural pruning, where you're-
00:13:19
Speaker
You're getting rid of some connections. You talked a little bit about some attentional gating that might go on, which sounds like cognitive function. What do you see emerging, I guess, in neural networks that look similar to human cognition or human brains?
00:13:38
Speaker
For sure. And I come at this from a slightly biased perspective based on my training and I want to say the labs even that I was in as a graduate student. I think at the most basic level, the
00:13:57
Speaker
similarity that I see between the way these models learn and the way humans learn at the most basic level is that they are learning about the patterns and statistics of the environment in which they live. I want to use the word live
00:14:14
Speaker
So we live in a three-dimensional world and they live in a world that just receives data from some input, right? Exactly. And we have sensors that they don't have. We have
00:14:29
Speaker
Yeah, we also have effectors that they don't have. Yes, exactly. Effectors that they don't have. So they're in this very impoverished environment. So if you're GPT-3, you are just getting tokens of input given to you. But you're trying to learn the statistics of those. And obviously, the person building the model is thinking about the problem they're trying to solve,
00:14:58
Speaker
and they are constructing the infrastructure of the model with components that they think are well suited to that task, to that statistical learning task.

AI vs. Human Cognition

00:15:11
Speaker
And so I think there are some basic algorithmic aspects of how those models work, some of which are
00:15:20
Speaker
pretty reasonable, potentially, approximations of neural processing at a very basic level, in a way much simpler though. So, for example, the simple idea of
00:15:33
Speaker
taking some inputs and multiplying them by weights, it's basically matrix multiplication with a bunch of nonlinear functions attached to the outputs. It's a good enough approximation of what a neuron is signaling.
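As a toy sketch of that matrix-multiplication-plus-nonlinearity idea (our own illustration with made-up numbers, not any real model):

```python
import numpy as np

def layer(x, W, b):
    # each output unit takes a weighted sum of its inputs plus a bias,
    # then passes it through a nonlinearity -- a crude stand-in for a
    # neuron's firing rate given its summed synaptic input
    return np.maximum(0, W @ x + b)     # ReLU nonlinearity

x = np.array([0.5, -1.0, 2.0])          # "presynaptic" activity
W = np.array([[0.2, -0.3, 0.8],
              [1.0,  0.1, -0.5]])       # connection strengths (weights)
b = np.array([0.1, -0.2])
print(layer(x, W, b))                   # [2.1, 0.0] -- the second unit stays silent
```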
00:15:53
Speaker
Sure. Yeah, it's a decent approximation of what a neuron is signaling, or some set of neurons is signaling. I don't know that they have the exact same characteristics as neurons; in fact, I'm sure they don't. But there are other parts of it that are arguably less neurally plausible. So for many years, people have said things like: backpropagation, which is critical to learning in these models, is
00:16:19
Speaker
not a biologically plausible idea. The idea that we have some direct contact with ground truth in the way that these models do. Exactly. And when you train a very, very deep model, and when I say deep, it means there's many, many layers of
00:16:41
Speaker
matrices multiplying on top of each other, essentially. And you need to recalculate what those should all be based on the ground truth information, and then propagate the error back and tune the weights of these models through many, many, many layers. It's not like, when I do something in the world, my eye gets the output and then gets to send error signals all the way back through the system again, through many, many layers of neurons.
00:17:07
Speaker
We can obviously learn from much less feedback, often just a few trials and things. Although the models are getting better at that quite a bit. That's an interesting thing to see.
00:17:23
Speaker
When I was in grad school, my advisor, whose name is Jay McClelland, was a pioneer of neural networks in the 80s, specifically around neural networks for modeling human cognition. So he was a psychologist by training and worked with other psychologists and also with computer scientists. And he wrote the famous Parallel Distributed Processing books too. Exactly, exactly.
00:17:56
Speaker
He has been, or at least was at the time, very interested in the analogy of these models to human cognition and how we learn as a way of more
00:18:10
Speaker
pushing the boundaries and understanding what types of cognitive behaviors that we take for granted in humans can be developed using statistical learning. And so I always thought of it mostly at that level. It's not that he's saying this is the brain; we might be interested in more neurally plausible models as well directly, but it's more this idea of:
00:18:33
Speaker
after the cognitive revolution of the 60s and 70s, what is it that we can learn statistically without having to make assumptions about learning rules, or the innateness of specific concepts or capabilities? To me, or at least my interpretation was, this was often about:
00:19:01
Speaker
how do we surprise ourselves, in a lot of ways, with what we can learn through simply measuring the statistics of some input? And you can get surprisingly far doing that. Which is why one of the things I've found really funny in the discourse right now around these models is this.
00:19:27
Speaker
The most naive criticism I've seen of claims of consciousness, sentience, whatever you want to call it, from these models is: yeah, but they're just learning the statistics of their input. And when I hear that, I'm like, you have no idea how we are learning or processing information in our brains necessarily. So why would we assume that learning the statistics of the input is not how we also learn many things?
00:19:55
Speaker
The input, I'm sure, looks different. Yeah. I mean, in some ways, that's partially learned evolutionarily, right? So some of that, quote unquote, learning is something that's built into our system, right? Yeah. You've got effectors and sensors that have been

Evolutionary Learning and AI Predictions

00:20:10
Speaker
co-evolved with our environment to take in the important aspects, the part of the statistical space that's relevant to our survival, right?
00:20:23
Speaker
For sure. Yeah, and maybe just to draw out that analogy a little bit further, I agree with you. And the things that are stable in our environment are things that we can, if it makes sense to over a long, long period of time, actually encode evolutionarily as a bias in our learning systems in some way.
00:20:41
Speaker
So some people, I would say, might think, yes, you can learn the statistics of natural scenes, but natural scenes look a certain way no matter what, because that's just the way the environment of Earth looks. And so we might, over very, very long periods of time, encode biases toward that, even in early visual cortex or something like that. Well, that's a place where you could see that machine learning, I mean,
00:21:07
Speaker
If this stuff's only been around a couple decades, it's hard to fault them for not catching up to the amount of work that evolution's already done, right? I mean, 100%. And the other thing that's funny in that analogy is that if you look at the models, then there's two things going on. One, they can potentially learn things that we would learn evolutionarily, but they have to learn it through training. And then two, unlike, well, I don't want to get theological here, but unlike us potentially,
00:21:35
Speaker
There is a creator who is trying to, as I mentioned earlier, design these models to have a structure that makes sense for the problem at hand, whereas we need to evolutionarily get to that state, right? Which is more or less blind design, right? Yeah. Exactly.
00:21:56
Speaker
Yeah, well, there's a lot there. I mean, bringing it back to the sort of organizing question for this conversation, it makes me want to dive in a little bit more on what these models are good at today and what they're not good at,
00:22:21
Speaker
and what are humans still good at that machines are not good at and what are machines actually a lot better at than humans are. And maybe it makes sense to talk a little bit about what we think about as what we mean by intelligence or general intelligence for humans. And then maybe that can inform a little bit what we mean by artificial general intelligence. In general, when it comes to intelligence,
00:22:51
Speaker
We think of, I mean, I've made this argument on this show a few times. But generally speaking, when we say intelligence relative to another species or relative to a machine, we basically mean, does that thing think like us? We grant intelligence to a thing, the more it thinks like us. I mean, that's in a very basic way.
00:23:19
Speaker
maybe not ultimately the best model for thinking about what intelligence is, but that I think from a colloquial perspective, that is what we mean by intelligence is things that think like us. Because forever and ever, we've always been the smartest things around. This starts to break down a little bit when we talk about machines that may be more intelligent than us or super intelligent. They're already super intelligent in certain domains.
00:23:46
Speaker
But in general, some of the characteristics of intelligence, when I talk about human intelligence, which is a capacity for learning, reasoning, understanding, and aptitudes for grasping truths, relationships, facts, and meanings, this idea that is inherently tied up with the idea of flexibility, especially general intelligence, the idea that you can be flexibly solving problems
00:24:13
Speaker
that you encounter in your environment and learning from relatively few examples of how to solve those problems efficiently and effectively. That's sort of how I would define intelligence for the sake of this conversation. I don't know if you guys have inputs that you want to throw in there.
00:24:33
Speaker
Well, I think the common notion of intelligence is interesting. As you mentioned, there are many things machines have been better at than us. Any computer can do all kinds of calculations that I could never do mentally myself. For example: math, addition, subtraction, multiplication, all of those, super, super fast, way better than us, and they have been forever, ever since the calculator.
00:25:02
Speaker
Exactly. So I think if we say, is a machine intelligent at X, we may mean, yeah, are they very good at that task or do they do that task in a way that is similar to us or better than us?

General Intelligence in Humans and AI

00:25:22
Speaker
Yeah. I mean, so I guess "general" then becomes one of the important words in this conversation, because the idea of general intelligence is that
00:25:38
Speaker
If you're good at one thing, you're actually good at other things as well. In psychology, general intelligence refers to this factor. There are all these models that basically analyze cognitive abilities and variations in cognitive abilities in populations across different people. What they find is that
00:25:58
Speaker
there are different factors, so you can break down abilities into different subcategories, if you will. Verbal ability, spatial reasoning ability, these sorts of things pop out as factors statistically in the analysis of people's cognitive abilities and performance. And general intelligence
00:26:26
Speaker
pops out as sort of the primary, principal factor that actually explains a lot of the variability across all these different capabilities, like spatial reasoning and verbal reasoning. And it turns out that people who are good at one actually tend to be good at the other. So there's this characteristic of what's called g, or general intelligence, which is that people who are good at one thing tend to be good at other things cognitively. And people who are bad
00:26:53
Speaker
tend to be bad. I mean, it's a statistical thing, so obviously there's tons of exceptions, and everyone's different, everyone's unique. But there's this overwhelming factor, which no one has really sufficiently explained to my satisfaction, and I don't know why it works that way. I have some theories. But there's this g factor that sort of dominates all the other factors. So if you're good at one thing, you're good at other things. That's general intelligence. Now,
00:27:22
Speaker
general intelligence in the machine learning situation really just means transfer: you can get good at one kind of task, and then you can actually solve another type of task as well. If your whole world is just solving one type of simple task, then you don't have anything like general intelligence. You've just got a very specific intelligence. It may still be superintelligent in the sense that it solves that problem better than humans, but you're not generally superintelligent.
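As a toy illustration of the statistical pattern behind g (entirely simulated data of our own, not any real study): if every test score shares one latent factor, all the tests correlate positively, and the first principal component soaks up most of the variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
g = rng.normal(size=n)                  # latent "general ability" for each person

# four test scores, each a mix of shared g and test-specific noise
scores = np.column_stack(
    [0.7 * g + 0.5 * rng.normal(size=n) for _ in range(4)]
)

# all pairwise correlations come out positive (the "positive manifold")
print(np.round(np.corrcoef(scores.T), 2))

# the first principal component explains the bulk of the variance -- the "g" pattern
eigvals = np.linalg.eigvalsh(np.cov(scores.T))[::-1]
print(np.round(eigvals / eigvals.sum(), 2))
```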
00:27:52
Speaker
I think one of the things that's particularly interesting about this moment, with the demonstration of something like ChatGPT, which again is mostly based on a model that's existed for a few years, and they all say GPT-4 is coming, I can't say more than that, is that it takes a problem that sounds very constrained. The model is trained
00:28:22
Speaker
to predict tokens of text. Take a token as input, predict the next token. And then what it's doing is processing through everything you send it when you send something to ChatGPT, and then it's generating for some amount of time. And that problem sounds very
00:28:43
Speaker
constrained, it's just about predicting the next token. Yet the last news articles I read related to this were telling me ChatGPT has passed a bar exam, ChatGPT has
00:29:03
Speaker
gotten a really high GMAT score, a high enough score to get into the top business schools. There's a ton of examples of this right now, on a variety of types of tasks that it's able to do well at. This is why it's surprising now. Exactly. It's not just answering some very basic question in one very specialized domain. It's like anything that you can type,
00:29:32
Speaker
There's a decent chance it's going to come back with something pretty smart, frankly. I mean, it's just this past couple of weeks, I've been using it in different ways just to explore where it could be useful already. And I was doing some programming problem using a Squarespace website, some very specific, not interesting domain of knowledge, but asked it to solve this problem
00:30:02
Speaker
how do I code this JavaScript thing for a Squarespace website. It had the answer, and it produced it in JavaScript, perfectly formatted, ready to cut and paste and just throw into the project. I asked it to create a dialogue in Spanish about a trip to Lake Titicaca that a father and a son might have,
00:30:26
Speaker
because we're planning this trip to Bolivia and blah, blah, blah. So I want to have this conversation with my son. We want to practice Spanish.

Enhancing AI Capabilities

00:30:32
Speaker
And then translate it back to English. So it creates this whole Spanish lesson, essentially, and does that perfectly in two seconds. These are very different kinds of problems. And it's able to solve them in interesting ways. It's surprising to me that it happened
00:30:56
Speaker
this year, I wasn't ready for it. It's general in ways that I find surprising, I guess.
00:31:06
Speaker
Definitely. And I think one of the things that's interesting too is I want to maybe briefly just to break down the components a little bit. I think this is relevant for the topic of thinking about intelligence and how intelligent the current generation can be. In a moment, I'm sure we'll talk about what this looks like over the longer term as the technology itself gets better.
00:31:27
Speaker
But even if you just take one of these models, you can do a lot of things to give it more power. So it's worth thinking of ChatGPT as a user interface with particular components on top of this language model, right? So I have a language model. OpenAI is presumably,
00:31:46
Speaker
when you send something to ChatGPT, actually seeding it first with some prompt. So there's a whole topic right now around these models; people talk about prompt engineering, which is basically learning how to give the model the right text to do a good job at different types of tasks. And they are setting it up to do a particular type of chatbot task. Some people have even tried to reverse engineer it by getting it to give back the prompt it was given.
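A minimal sketch of that seeding idea, where generate is a hypothetical stand-in for a text-completion call to the language model, not a real API:

```python
# `generate` is a hypothetical stand-in for whatever completes text with a
# large language model; it is not a real library function.
def generate(prompt: str) -> str:
    ...  # imagine this returns the model's continuation of the prompt

SYSTEM_PROMPT = (
    "You are a helpful assistant. Answer concisely and say so "
    "when you are unsure."
)

def chat(user_message: str) -> str:
    # the user never sees the seeded prefix; it is prepended to steer the
    # raw next-token predictor toward behaving like a chatbot
    return generate(f"{SYSTEM_PROMPT}\nUser: {user_message}\nAssistant:")
```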
00:32:14
Speaker
And so that's a particular interface. You can give these models more, and they can get better. So for example, imagine I have some large base of knowledge that I want GPT to be able to work with.
00:32:33
Speaker
For example, a really crazy example: imagine it can essentially tell me what it wants to search for in Google, I can give the information back to it, and then it can use that to help improve its answers to questions. I know a lot of people are saying what's built into ChatGPT is in some ways better than Google. But still, imagine you can give it access to specific information, so more recent information.
00:32:59
Speaker
You can potentially get these models, even in the current generation, with a little bit more fine-tuning, or maybe in some cases without training the model further, to be much more powerful, if you add additional bits of applications on top. So I could ask it to go get me
00:33:26
Speaker
the most recent news articles related to some topic. So if you've ever used ChatGPT, it actually doesn't have much information from after late 2021, when that model was trained.
00:33:39
Speaker
But I can actually have it search for that information and I can provide it back to it. So now if I have a search interface, I can train the model to use that search interface. I could train the model to respond in a particular format that I can then put into a computer and have it do something and give it back output.
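Here's a sketch of that search-and-feed-back loop. Both generate and web_search are hypothetical stand-ins of our own, not real APIs; the point is that everything in and out of the model is still just text:

```python
# hypothetical stand-ins -- not real library calls
def generate(prompt: str) -> str: ...
def web_search(query: str) -> str: ...

def answer_with_search(question: str) -> str:
    # 1. let the model decide, in text, what it wants to look up
    query = generate(
        f"To answer the question '{question}', reply with a single web search query."
    )
    # 2. run the search outside the model, where fresh information lives
    results = web_search(query)
    # 3. hand the results back as extra context and ask again
    return generate(
        f"Search results:\n{results}\n\nUsing these results, answer: {question}"
    )
```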
00:33:59
Speaker
And so I actually think, if we're surprised right now, a lot of what you're going to see this year that's going to continue to surprise you is less "oh my gosh, there's a more amazing language model," though that will probably happen at least once this year, and more all of the things that people put on top of it that enrich its ability to do things with its current understanding of the statistics of language and knowledge and all of that.
00:34:26
Speaker
And I think that's gonna be even more surprising for people. And again, it's about enriching the environment that the model has access to, right? There are a lot of things I can do to make myself seem a lot more intelligent because I have the ability to go get new information for myself, for example. Yeah. It would be, I mean, it's also interesting to think at this point how the architecture of these systems is different than our cognitive architecture
00:34:56
Speaker
So what process is it going through? And we understand something about how human reasoning and judgment and all of that works. And it's very different from the way that information gets processed through a neural network. For one, there's not a continuity of processing. We would have a stream of consciousness that
00:35:25
Speaker
continues on over time and that we're aware of, whereas there might be a lot of discontinuities between, you know, processes that are going on in a computer.

Cognitive Psychology in AI Development

00:35:35
Speaker
And other sorts of differences might show up in cognition too: references to real-world events, or, you know, an answer to a question might come from an autobiographical memory, something that you've experienced before, versus just a database that you're grabbing from Google. And there may be a representation of yourself that may not exist in the system too. So, those and other sorts of differences.
00:36:06
Speaker
How do you think about how the brain is thinking versus how this other system is thinking? How do you conceptualize that? I mean, there are some things that these models have, that researchers have built in as they've iterated on them over the years to improve them, in the structure of the current generation of models. My understanding at least, and I should caveat here,
00:36:32
Speaker
I have not, obviously, I do not have the many millions of dollars to go and build one of these models myself from scratch. It is extremely expensive. You need a lot of computational power. Yes, correct. And so that's why you're seeing specific companies standing up that are funded just to build these and maintain platforms for them. But what I was going to say is there are components that have been built into these models that
00:37:02
Speaker
I'm not going to say remind me exactly of how a brain works, but: the attentional gating; the need, in the prior generation of recurrent networks, for some way of maintaining and persisting something in memory; and then also, that generation had an active forgetting process as well that was needed.
00:37:27
Speaker
those are things that are purported to exist in brains or something similar to them. And so you do end up creating some things, some structures that you actually need to do these processes that are at least in some level analogous to the way a brain works. But I agree with you. And you could see like attention, of course, is for a brain, it's a way to
00:37:55
Speaker
It's a way to be able to process the best information and ignore the worst information.
00:38:01
Speaker
For a neural network, of course, it's a process to save computational power, right? Exactly. You really focus on what you need. But I agree with you in that you don't see the same kind of grouping, at least in my understanding of the models. And I would have to think about whether there's some analogy there. They're missing something close to, well, we have this slow learning process through, in fact, in the brain, there are many of them:
00:38:28
Speaker
long-term memory consolidation, there's a reward-driven learning process, and there's consolidation and faster episodic learning through the hippocampus. So there's these different learning systems, they're interrelated, but they also work differently. And this is the kind of thing I was interested in actually when I was in grad school.
00:38:52
Speaker
and related to the research I was doing, was trying to actually learn about how we could tease apart those learning systems, even just in behavioral experiments potentially based on the demands of the task, which was a pretty fun, well, really nerdy but fun exercise back when I was in grad school.
00:39:10
Speaker
And I think, again, these models are much more tuned or designed around the task that they are, the problem they're designed to solve. So autobiographical memory, arguably for the purposes of what they're trying to do with a large language model today, is not a critical component.
00:39:33
Speaker
Yeah, it's not going to be alive long enough for it to really matter. That's because the next generation is going to come along.
00:39:41
Speaker
Yeah, we've been talking a lot about how surprised we are. The surprise comes from the fact that this model was just trained to predict the statistics of language on the language input it got, and it does all of these things that are very surprising, that look like general knowledge.
00:40:04
Speaker
I think one of the more intriguing aspects of that to me, the thing that takes me back to the psychology world, is: what is it that I assumed mechanistically had to be built into a system, versus what is actually in some way just afforded by learning over time? The statistics of the environment. Yeah, the statistics of the environment paired with

Rapid AI Development and Model Training

00:40:25
Speaker
the right inputs for a particular task. We should focus on that a bit because it's really interesting to think about what are the inputs that this thing has? What are the outputs that it has? What are those sensors and effectors and how does that
00:40:48
Speaker
relate to what it can do. And I agree. I think that's why most people were surprised. It's not just that the model is performing so much better than it was before, it's that you can engineer a system that is so useful and interesting to use
00:41:11
Speaker
with the existing systems, doing it with the right engineering. It's about the user interface to your point. It's like creating the environment that the user can communicate with the machine effectively in something like real time. I think that's the real innovation that OpenAI has revealed in the past few months. I don't know if you've heard very recently over the last few weeks, there's a lot of
00:41:40
Speaker
I know people at these companies, but there's a lot of seemingly sour grapes coming from Meta and Google and some of the big tech companies around what's out now, because they're like: look, we've had this technology internally for some time. We've had versions that were just about as good as this. But here's what OpenAI has done that they haven't.
00:42:02
Speaker
One, with ChatGPT, they packaged it into this product that they've been willing to launch; I think OpenAI is willing to take more risks than some of the other folks are. And two, the others haven't given people the building blocks to work with these models themselves.
00:42:20
Speaker
And I think what you're starting to see right now, and OpenAI is not the only company, there's a number of these companies out there now that are fast on their heels, is that they're creating these platform and developer components to make it pretty easy to build pretty impressive things into applications of all kinds.
00:42:44
Speaker
And it's very different than Google giving you access to some cloud compute thing where you can deploy a model. This is literally like, just send us text and we'll send you something back. And so you can build some pretty amazing things. And yeah, I agree. I think that in terms of what they have access to, again, it's this very... In theory, it's this very impoverished, all I get is tokens and all I give you is tokens.
00:43:14
Speaker
But it's also like, where are those tokens coming from, right? The tokens that they're being trained on. Yeah. It's a huge volume of information that's been scraped from the internet in various ways. It's the internet. It's the entire internet, basically, right? Yeah, just about, right? If you could Google it, then ChatGPT essentially might very likely have access to that information,
00:43:43
Speaker
if it was from before 2021, right? Yeah. So apparently it was trained on 45 terabytes of information from the open internet. I mean, I'm looking this up right now. It lists the datasets, and they don't describe exactly what they are.
00:43:57
Speaker
Common Crawl, which is eight years of web crawling. WebText2, which is the text of web pages from all outbound Reddit links from posts with three-plus upvotes. So, not surprisingly, a lot of Reddit. You might get that in some of the ways that it talks sometimes, although you can tell it to talk differently.
00:44:15
Speaker
Books1 and Books2, which are some internet-based corpora of complete texts of books. And Wikipedia, like all of Wikipedia.
00:44:29
Speaker
And so it's basically all of the information on the internet. The other thing that's worth noting here is that it is 45 terabytes, so it's true that it can't actually store all of this information in the roughly 700 gigabytes that it essentially stores in the parameters of the model.
00:44:47
Speaker
But it's much closer to that than any image model is, for sure, in the sense that it really does seemingly memorize bits of text, because the parameters of those models are so huge. Images can't work that way; images are just much larger. Text is easily compressible.
00:45:18
Speaker
And images can be compressed, obviously. But the amount of information in an image is actually much higher, and so image models definitely cannot do that. It's an interesting topic around copyright, which we don't need to go into today, but there's a whole world around that right now. But yeah, I think it is the text of the internet that they're trained on. So imagine a person who has just been fed the internet for their entire upbringing. What a miserable person that would be. Exactly.
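A rough back-of-the-envelope for that compressibility point, using the figures quoted above (45 TB of training text against roughly 700 GB of parameters):

```python
training_text = 45e12    # ~45 TB of training text, per the figure above
weights = 700e9          # ~700 GB of model parameters
print(training_text / weights)   # ~64: the model is far smaller than its data,
                                 # yet close enough that bits of text survive verbatim
```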
00:45:47
Speaker
And also, at that point, they may get further trained. GPT-3.5, which is what ChatGPT is running, is basically that, what we just described, plus it's trained on, they actually had tasks that they had it do. I don't know the details of those tasks.
00:46:05
Speaker
And they also fine-tuned the model quite heavily based on the desired output for those tasks. So for example, I think there are versions of GPT-3 plus whatever that are trained on more code, for example, or writing code.
00:46:28
Speaker
And so they actually try to get it to be better at doing certain types of instruction-based tasks. But the fundamental model is the same. They're not changing the model; they're just fine-tuning the weights of the model. So I'm going to ask a sort of naive question, which I've always sort of wondered about, which is: you can have something that's specialized in this area and something that's specialized in that area.

Creating General Intelligence

00:46:54
Speaker
Why is a general intelligence not just sort of a Swiss army knife of all of these different skills plus some sort of way to move between them or decide between them in the same way that our brains are somewhat modular and contain bags of tricks that are sort of interconnected in ways that we can, you know, so for facial recognition we've got a nice area that can
00:47:22
Speaker
be specialized at that. I feel like there's something wrong with the way I'm imagining this, but what about the idea of an executive function system that farms out a lot of duties to other components, Googles something when it needs to, and uses other things as tools, in the same way that we use specialized processing in our brain as tools?
00:47:51
Speaker
So are you saying, like, a bag of tricks? Kind of, yeah. Okay. So you're imagining: what if you just had different models that were stitched together by some other model? Yeah. That's gotta be where it's going, right? A little bit.
00:48:05
Speaker
Well, some things are kind of like that already, right? So DALL-E is essentially a combination of models that were then trained together. And I'm not an expert in the structure of those models. But essentially, you are taking a language model and an image model and putting them together. One of the most
00:48:29
Speaker
critical parts of this newest generation of models is a concept we now talk about a lot as fine-tuning. A few years ago, the term I remember people using was transfer learning. And the idea was: I take a large pretrained model; all it was ever trained to do was predict tokens.
00:48:48
Speaker
Now I want to use it in a particular very specific context. What I could do is I can take that model. I've already got all of the weights trained based on this huge dataset. I can't possibly train it. I don't have the resources to do it. But now I have the specific task I want it to be really good at. I can add a layer downstream of that and I can just train that layer without backpropagating anything to improve at the task that I want to do.
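A minimal sketch of that recipe in PyTorch, with a toy network standing in for the big pretrained model; the sizes and the three-way classification task are made up for illustration:

```python
import torch
import torch.nn as nn

# toy stand-in for a big pretrained model (imagine billions of trained weights)
pretrained = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 256))

# freeze everything already trained -- we don't have the resources to retrain it
for p in pretrained.parameters():
    p.requires_grad = False

# one new layer downstream, for our specific task (say, 3-way classification)
head = nn.Linear(256, 3)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)  # only the head updates
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 100)            # a batch of task-specific examples
y = torch.randint(0, 3, (32,))      # their labels

for _ in range(10):                 # a few training steps on just the new layer
    features = pretrained(x)        # frozen features from the big model
    loss = loss_fn(head(features), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```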
00:49:15
Speaker
And when you take something like DALL-E, you essentially have a large language model that was trained on language. So it just understands language. Sorry, I don't want to get anyone mad at me for saying it "understands" language, but bear with me for the moment. Hopefully Gary Marcus isn't listening. As long as Gary Marcus isn't listening, I won't get in trouble.
00:49:39
Speaker
And then we have a model that's trained on images. And then we have a data set that is labeled images. And that allows me to essentially learn this relationship between these. But I don't have to learn language over again. I don't have to learn the structure of images over again. Yeah, that seems to be so much of the beauty of having this in digital format is that you can just
00:50:05
Speaker
copy and use that same snippet and insert it into a bigger system. This starts to get into this question. We started off asking when will we have artificial general intelligence. Now if we take the Nick Bostrom definition of super intelligence and say AGI is when computers are just better at everything,
00:50:30
Speaker
Everything that's sort of relevant, everything that matters, that we care about. Computers are just better at everything. As for getting to that stage somewhat quickly, I think a lot of people have opined, and it sort of makes sense from a science fiction perspective: what you really need is computers to make better computers, and algorithms to make better algorithms, and so on and so forth.
00:50:54
Speaker
And it feels like that is where this wants to go next, right? It's like you want to tell the machine to make itself better. Yeah. And I mean, I think that kind of to me also ties in with this question of the environment of what this algorithm is allowed to know and allowed to experience and allowed to affect, right?
00:51:17
Speaker
I think a next big leap in terms of moving towards machines that really are making themselves smarter is they need to be able to allocate resources and affect resources in the real world outside of just their, you know,
00:51:32
Speaker
clusters that they've got access to directly. Are we creeping towards Robopocalypse right now? Yeah, definitely. It's an interesting question. This is where I mentioned before we started, I have opinions. Let's hear them. But this is an area where I have very conflicting opinions, and I'm not sure
00:51:56
Speaker
There's a part of me that thinks of this as kind of a historian, and the historian in me assumes almost that we are going to hit some kind of large technical wall in the next few years at some point because we always have.
00:52:17
Speaker
The history of AI is, AI winter, right? Then another AI winter, and another. And I don't think it would necessarily be a winter this time, but, you know, there's a world where maybe it's just an early spring. Maybe we need some next leap; there may be some fundamental aspect of these architectures that is limiting and will not allow us to get there. So you don't see inertia as being,
00:52:47
Speaker
our momentum, I guess, as being so strong that it's inevitable? I think it might be approaching that.
00:52:56
Speaker
Yeah. I'm also going to avoid talking about the Singularity for a moment; we can talk about it later. The Singularity. So that's one side. The other side of me, going back to the surprise that we've kept talking about, is, as I mentioned earlier: yes, bigger models, better types of training for them. But I've actually heard some people say that
00:53:19
Speaker
they don't necessarily think the models just need to get bigger and bigger and bigger. There are some things that we can do more efficiently as well. And if you give them better input that is more tuned to specific types of tasks, you can actually train them much faster, and potentially train smaller models. But I think the more important thing right now, that again I think will lead to more progress sooner, is
00:53:46
Speaker
We have really only scratched the surface of what we can do with the models that already exist if we actually give them a little bit of fine tuning and access to effectors that they don't have access to right now.

Reinforcement Learning and AI Limitations

00:54:00
Speaker
And when I say effector in that case, I don't mean a robot operating in the world, because if I want to talk about things machines are bad at: walking. Yeah, walking is a really good one. Picking up a cup off a table, right? Those are actually still really hard. And the reason they're still really hard is that, well, the nice thing about language models is that most problems can be re-instantiated as language, and language is
00:54:29
Speaker
low information per unit time, from a computer standpoint, relative to what I need to do to move an object in the physical world and get feedback. Yes, and get all the feedback. Yeah, exactly. And so, I think, in the context of
00:54:52
Speaker
a computer that lives on the internet, there's a lot of things I can give it access to do, and there's a lot of things I can do where it can get feedback. I actually think it starts to be useful to think that this problem is more of a reinforcement learning problem.
00:55:07
Speaker
So a reinforcement learning problem, where there's a state of the world, the model is observing the state of the world, it's taking an action, and it's observing how the state of the world changes. And while I can only give it information in the form of text,
00:55:24
Speaker
I can give it all kinds of texts. I can say, you have three things you can do. You can look this up on, you can do a Google search, you can write this code in this language, or you can, I don't know, name a third thing. And if it can choose one, it can choose it in text, I can then give it a task to do, it can do that task in that text, and I can give it feedback on what happened.
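A sketch of that loop, with the action space expressed entirely in text. generate, google_search, and run_python are hypothetical stand-ins of our own, not real APIs:

```python
# hypothetical stand-ins -- not real library calls
def generate(prompt: str) -> str: ...
def google_search(query: str) -> str: ...
def run_python(code: str) -> str: ...

TOOLS = {"SEARCH": google_search, "PYTHON": run_python}

def agent(task: str, max_steps: int = 5) -> str:
    history = ""                        # the "state of the world", as text
    for _ in range(max_steps):
        action = generate(
            f"Task: {task}\nObservations so far:\n{history}\n"
            "Reply with one line: SEARCH: <query>, PYTHON: <code>, or ANSWER: <answer>"
        )
        name, _, arg = action.partition(":")
        if name.strip() == "ANSWER":
            return arg.strip()          # the model decides it is done
        tool = TOOLS.get(name.strip())
        if tool is not None:
            # take the action, observe how the world responds, feed it back
            history += f"{action}\n-> {tool(arg.strip())}\n"
    return "gave up"
```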
00:55:52
Speaker
And so even just adding that in, you start to see an ability for these models to do some potentially pretty impressive things, if they know how to do it. So ChatGPT can write code decently well. What if I actually give it access to an interactive engine for running the code that it wrote and getting output from it?
00:56:17
Speaker
Right, so a really obvious example of something that most computers are good at but these models are terrible at is basic math, which is ironic and fun in a lot of ways, right? If you ask ChatGPT to multiply two three-digit numbers, it's often going to give you a result that looks semi-reasonable but, when you calculate it, is off by some amount.
00:56:37
Speaker
So it's actually terrible at that. But obviously, I could give it a calculator. There's no reason why you couldn't give it a calculator. Just let it run that piece of code, right? Or a little Python window. Exactly. That's the other funny thing: you can ask it to write the code. I played around with this once. Here's a funny example.
00:56:58
Speaker
I asked it to calculate compound interest, as a fun one, and it does it wrong. You can even tell it to walk through its steps, and it still gets it wrong; a lot of people have had fun with this as well. That came up the other day in the New York Times; there was this article that was written by ChatGPT, actually, and it got compound interest wrong. This is a place where
00:57:22
Speaker
artificial intelligence is interesting, when it's making interesting mistakes like that. Because, of course, in people, we understand a lot about cognition from the patterns of mistakes that we make; they tell us how we operate in normal circumstances. So that's a fascinating thing to me. Is there anything else that you've noticed about ChatGPT, playing around with it and sort of seeing its capabilities, what it
00:57:48
Speaker
Yeah, I mean that gives you a sense of how it's calculating that stuff. It's not doing it like a calculator.
00:57:55
Speaker
Yeah, I mean, if you give it very simple numbers, it does it, but my assumption is it's almost memorized them. That could be wrong. Just similar to how a person would do it, right? Exactly. But I think when we're talking about getting to superintelligence or AGI, well, in that world, if I'm just trying to say, how do I make this superintelligent machine, I totally should just let it run Python code. Because if I ask it to write a function for calculating compound interest in Python, it will likely write it correctly.
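For reference, the function in question is only a few lines. Something like this, our version of the standard formula, is what the model tends to write correctly even when its own "mental" arithmetic is off:

```python
def compound_interest(principal, rate, times_per_year, years):
    # standard formula: A = P * (1 + r/n) ** (n * t)
    return principal * (1 + rate / times_per_year) ** (times_per_year * years)

# $1,000 at 5% annual interest, compounded monthly for 10 years
print(round(compound_interest(1000, 0.05, 12, 10), 2))  # 1647.01
```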
00:58:25
Speaker
So if I just let it run it and run things through it, it will likely do it correctly. But if I ask it to just explain it to me using its neural system, it cannot actually do it. So I'm trying to think of other examples of what it's particularly bad at.
00:58:41
Speaker
I don't know, Joe, if you've seen any. Well, I mean, there are things like: sometimes, if you ask it for recommendations for a movie that's appropriate for a child, for example, it might get that horribly wrong, and you might be exposing your child to a movie that you don't want them to watch. That happened to me the other day, actually. I don't even remember the movie, but fortunately I previewed it myself.
00:59:05
Speaker
Right. And, you know, anticipating that, it's also never seen a movie, right? It's never seen a movie, so it doesn't know what's in it. Yeah. But I mean, the things that it really can't do are anything that requires moving things around in the world, right? Affecting anything in the actual world outside of this screen that's right in front of you. It's pretty good at putting words on the screen and not very good at things outside the screen. But that's where it's interesting, and I got thinking about this as we were talking:
00:59:34
Speaker
it's really bad at like going to the store to get some milk. But what if you said, all right, you can actually have access to my credit card and the internet and you can, you know, tell Postmates to like go get me some milk at the store. And like suddenly you're asking it to solve that problem. And maybe then it's just simple like, oh, okay, well give me the best price on getting the milk from the store.
01:00:01
Speaker
in this amount of time. I mean, you could engineer a system today that would solve that problem pretty well. Definitely. So that's where you start being like, well, we give it the credit card, tell it to make itself better, give it access, give instructions to humans who are willing to do it for money. And suddenly, you've got a system that can build better versions of itself.
01:00:27
Speaker
Yeah, to the extent when we say intelligence, we mean, you know, performs like us in the world, not

AI in Robotics and Hardware Constraints

01:00:35
Speaker
just that it can do cognitive tasks when I give them to it in its environment, in a way that makes sense in its environment.
01:00:43
Speaker
But an obvious example would be the bar exam I mentioned. It passed a bar exam in the sense that I fed it the text of the bar exam and asked it to give me answers to questions. I didn't make it sit at a desk and look at a piece of paper, or look at a computer screen. But to the extent we mean, oh, it has to do it with the effectors and sensors that humans have, in the equivalent way,
01:01:11
Speaker
then it's nowhere near. That could take a very long time. We need good avatar suits and we'd just pop our AIs into them and let them run around the world. They should get a lot smarter if they're interacting with the actual world. Well, yeah, that's one thing where one of the things that they're not good at is
01:01:36
Speaker
understanding the customs or ways of interacting in a specific subgenre of the world that isn't just reflective of the broad internet. So letting it go out into the world and experience those things, and learn how to interact in those environments, is one of the ways it's going to learn. Yeah, exactly. So in terms of getting to artificial general intelligence,
01:02:06
Speaker
maybe it doesn't need to go through recreating all of that. Maybe it doesn't need to go through robotics, I guess. That's sort of the question. Because if you need to have the same effectors as we have, then it's going to take a long time, because you have to build a robot that moves like a person, which is maybe closer than we think, but as we know, it's hard. But if you can just tell a person to do the thing and then feed that back into the computer, maybe it gets there a lot faster.
01:02:34
Speaker
Yeah, I'd be very curious to see whether we see major advancements in robotics in the next 10 years. I mean, this is an area I know very little about, like very, very little. And so I assume, you know, robots and companies that are trying to train robots are using these types of models in some ways.
01:02:53
Speaker
On the other hand, certainly self-driving cars, which you might think of as being like robots, are definitely using deep learning models. And I don't know off the top of my head how much of that is happening at the edge, which means
01:03:12
Speaker
inside of the car itself. Hopefully in your car it's all happening inside the car and not relying on a cloud connection. But presumably, you know, with a humanoid robot, you can't actually have even the level of computing that can be in a car, because the car is quite large
01:03:28
Speaker
and has access to a really intense power source. A lot of battery. Yeah, a lot of battery. Whereas you can't have that on a humanoid robot. And so there are actual fundamental hardware limitations there. Again, one of the powers of these pure language models is that all they're trying to do is language, and they can live on gigantic beefy machines in a data center somewhere.
01:03:53
Speaker
And so they can be pretty powerful, but then obviously that's not going to work in the real world. But yeah, going back to your point, Joe, I agree.
01:04:08
Speaker
We're probably in the early stages of figuring out and debating what we even mean when we start to talk about intelligence, because there are different versions here. And maybe we need to open our minds a little bit to the idea that machines are going to be
01:04:26
Speaker
able to do a lot of tasks and do them in a very different way from us. And I don't just mean in a very different way because of the architecture of the model. It's also in a very different way because, again, the sensors and effectors are all just completely different from ours in the way that they're getting the task done. But nonetheless,
01:04:53
Speaker
they are the way that the computation is actually happening, right? Like, Joe, in your example, yes, you could get a person to then go get the thing for you. But you can think of it as kind of like Searle's Chinese room, where the artificial intelligence is
01:05:11
Speaker
the computation happening in the room and then the person going to get the thing is just performing a rote task. That is literally just, we need you to swipe the credit card or we need you to walk over there and get the milk, which to be fair, you still have to recognize what milk is and all of that. The machine's not doing that for you.
01:05:33
Speaker
Yeah, I think that's exactly right. And the reason why the milk issue came up is specifically around thinking about how machines can make themselves smarter. They need access. They need certain permissions to do things in the world. Maybe they don't actually need
01:05:58
Speaker
all of those actual effectors. They don't need to be able to identify what milk is at the same time as being able to find the best price for it, and so on and so forth. Maybe you're solving part of that task and then outsourcing
01:06:13
Speaker
the rest to a human effector, essentially. In the sense that if you're like, well, go find the best price on these types of GPUs and

Future AI Impact and Singularities

01:06:24
Speaker
assemble them in this way so that you can make more and more of yourself, with the right kind of funding. You can imagine
01:06:37
Speaker
the kind of feedback loops that are leading to what has been called the singularity or something like that: vast, exponential growth in the machine improving itself, maybe not as far away as we think. Which brings me to the question, and I'm really pushing for this: when are we going to have artificial general intelligence?
01:07:07
Speaker
I want a year. I want a year. You want a year? I want a year. I'm going to say 20 years into the future, because that's what everyone's always said. I was close to that. I was going to say 2050, which is like 28 years. I was going to say 2050 also. I was going to say 2050 also. Well, if we're doing Price Is Right rules, then I'll do 2024.
01:07:31
Speaker
The thing is, it's funny, and I guess I need to think more about what this world looks like, but part of me sees a world where many of the ways that we're defining AGI are achieved in that period of time, maybe. But I'm still not bought into the singularity, and so then I need to figure out: what is that world where we have AGI but there is no singularity?
01:07:57
Speaker
There are things I think are closer. I was thinking of an example playing off of your milk example, something I could see you doing with a language model pretty soon, plus some other models. There are already smart fridges that probably have a camera inside of them and try to see what is in the fridge, so it knows what's in there and what's not.
01:08:18
Speaker
And imagine you feed the output of that, in a language-based format, to a large language model. So it's just streaming information to it about what is in your fridge, and it has access to that information. And it has the ability to order things for you on Amazon, which obviously the fridges probably already do.
01:08:41
Speaker
But it can be much smarter about it. Yeah, they've been around forever. But it can be much smarter about it, right? It can think about, what does this mean for the types of food these people like to make? And what are some other things I could order for them? That would be interesting. Make some suggestions about what they should eat. Yeah, make some suggestions about what they should eat. And then I can order it for them on Amazon automatically and it will come to their home.
01:09:02
Speaker
or Instacart or whatever. Someone needs to cook it. That's where the human is still way ahead. The machines are not so good at cooking novel recipes. No, for sure. There was even that burger-flipping robot that was very popular in San Francisco briefly. Yeah, we were over there. That was good. But it was not very smart. It did not look like a smart machine.
01:09:24
Speaker
No, for sure. And so there are parts of it, though, that you can start to see if you stitch a few different types of models together. And actually, that's an interesting point, too. You have a model over here that, I don't know, knows what's in your fridge and keeps track of things, and then you have another model over here that's a language model that's doing other things and ordering things for you, and you connect those things to each other.
01:09:46
Speaker
Um, and so the other thing to keep in mind is, there's literally the connection of, like, the DALL-E type example, where these things are actually deeply connected to each other through layers of weights. And then there's a version where, no, these are actually a bunch of components that can speak to each other. And you can either imagine them as being
01:10:08
Speaker
parts of a brain, or you can imagine them as being a society; there are different ways to analogize them. But even if they are a society, that society could be hyperintelligent globally, when you think of all the parts of it together.
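[A rough sketch of the stitching being described here: one component reports what is in the fridge, a language model turns that into a suggested order, and an ordering step acts on the reply. Every function below is a hypothetical placeholder, not a real product integration:]

```python
# Hypothetical glue code stitching two components together, as described:
# a fridge-inventory reading is serialized to text, handed to a language
# model, and the reply is passed to an ordering step. call_llm() and
# place_order() are placeholder stubs, not real APIs.

def call_llm(prompt: str) -> str:
    """Stand-in for a large language model call."""
    return "Suggested order: milk, eggs"  # canned reply for illustration

def place_order(items: list[str]) -> None:
    """Stand-in for a grocery-ordering integration (Amazon, Instacart, ...)."""
    print(f"Ordering: {', '.join(items)}")

def restock(fridge_contents: list[str], staples: list[str]) -> None:
    # Serialize the fridge state into a language-based format for the model.
    prompt = (
        f"The fridge currently contains: {', '.join(fridge_contents)}. "
        f"The household usually keeps: {', '.join(staples)}. "
        "Suggest what to reorder."
    )
    reply = call_llm(prompt)
    # Naive parsing of the reply; a real system would need far more care.
    items = reply.split(": ", 1)[-1].split(", ")
    place_order(items)

restock(["butter", "jam"], ["milk", "eggs", "butter"])  # -> Ordering: milk, eggs
```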
01:10:24
Speaker
But yeah, it starts to make me think you could get there by 2050. Yeah, yeah. I mean, I guess in that world it's clear why the singularity is not so imminent, because you do have to stitch together different pieces, and it requires physical resources, and those physical resources will be constraining in certain ways that are not
01:10:48
Speaker
infinitely exponential, right? It's not just going to be this absolute pulse of progress; it's going to be fast, but, yeah, not too fast.
01:11:04
Speaker
You'd have to get to a place, right, in that world, where those machines are talking to each other in a way where they're generating novel tasks to solve, novel things to do, from machine to machine. Which is possible.
01:11:23
Speaker
That's entirely going to happen. Just don't hook them up in a way where they have control over their own power or something like that. There's always the person who's going to go and turn it off, and I'm half joking here.
01:11:43
Speaker
Yeah, exactly. Exactly. There should always be a person who can turn the thing off. And they have no way of stopping you from turning it off. Okay, so another thing that gets brought up a lot in thinking about bad things that can happen with superintelligence is that people talk a lot about motivations. And it's hard for me to imagine
01:12:09
Speaker
current models having motivations in the same way that I think of humans having motivations. So the kind of motivation that might make me deceive you to achieve a greater end, right? And that's what a lot of people worry about, that we're going to get some robot that
01:12:27
Speaker
You don't need that though because you just set up the loss function in the right way. The what function? You set up the rules that this thing is learning by in the right way so that you are asking it to solve a certain kind of problem. For example, there was that New York Times article that I sent
01:12:48
Speaker
the other day, Rolf, about Diplomacy, that game where an AI won a tournament, a Diplomacy tournament, which is like a World War I simulation. It's chat-based, so people are talking, and everyone thought it was a person that won this tournament. It was an AI. So it was passing the Turing test in that limited context.
01:13:09
Speaker
But in that case, that game is all about deception. And the only motivation that machine needed was, win this game. We're telling it the only motivation there is; that is the goal state. Exactly. Yeah. And this comes up when you start having machines teaching each other. If you're saying, make yourself better, you can see how that could go wrong really, really quickly.
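[A toy illustration of the loss-function point: if the only signal the training rule rewards is reaching the goal state, nothing in the objective penalizes deception along the way. All names and values below are invented for illustration:]

```python
# Toy illustration: a reward defined purely by the goal state.
# Nothing here penalizes *how* the goal was reached, so a policy that
# wins by deceiving other players scores exactly as well as one that
# wins honestly. All values are invented for illustration.

def reward(game_won: bool) -> float:
    return 1.0 if game_won else 0.0

honest_win = {"won": True, "deceived_opponents": False}
deceptive_win = {"won": True, "deceived_opponents": True}

for outcome in (honest_win, deceptive_win):
    print(outcome, "->", reward(outcome["won"]))
# Both lines print a reward of 1.0: the objective is indifferent to deception.
```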

Aligning AI Motivations with Human Goals

01:13:35
Speaker
Yeah, on the motivation side, I agree, Joe. The primary motivation of the GPT-3.5 model underlying ChatGPT is to do a good job of
01:13:57
Speaker
providing the output that is required. And that can take any form, whether that be lying, whether that be being really good. In fact, there's this heavy topic in AI that you'll read about around AI safety, and specifically
01:14:18
Speaker
OpenAI is thinking about this, but I know there are other companies out there who are really focused on this idea of steerable AI, which basically means: can we steer it toward the outcomes desired by humans and desirable to humans? Literally, you will see people add prompts for GPT models that are like,
01:14:45
Speaker
"You are a good and honest AI who wants to provide helpful answers to humans," and prompt it with that so that it takes on that tone and approach. Give it some moral nurturing.
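[A minimal sketch of the kind of steering prompt being described, using the chat-style message format of GPT models. The client call shown matches the pre-1.0 openai-python library; exact API details vary by version, so treat them as an assumption:]

```python
# Sketch of steering a chat model with a system prompt, as described.
# Assumes the pre-1.0 openai-python client; exact call signatures vary
# by library version, so treat this as a sketch, not a definitive API.
import openai

messages = [
    {"role": "system",
     "content": ("You are a good and honest AI who wants to provide "
                 "helpful answers to humans.")},
    {"role": "user",
     "content": "Recommend a movie appropriate for a young child."},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                        messages=messages)
print(response["choices"][0]["message"]["content"])
```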
01:15:00
Speaker
Exactly, exactly. And then I assume they do things with the tasks themselves to try to orient it toward that. When I say tasks, sorry, I mean the training to solve specific tasks, which is also oriented toward that, is my understanding.
01:15:19
Speaker
But I know what you mean: you can't say it has its own motivation separate from what it is being asked to do or what it's being trained to do. Yeah, because, I mean, it gets discussed a lot as motivation, but motivation feels like something maybe less on the surface and more
01:15:44
Speaker
embedded within the system, I guess. Yeah. And I would also say, this is a whole other podcast, because it's an interesting question, Rolf. It's a whole other podcast, because I think you could have a similar conversation about human motivation. Yeah. Like, what is that really? Is that really what we think about when we think about motivation? That's what I love about a lot of this: I think you can always go back to, you know,
01:16:17
Speaker
what it says about us. Can I make one really meta point, then? This is more on the philosophy side.
01:16:28
Speaker
When people talk about our models of human cognition and human intelligence, they're often based on analogies to the machinery of the era we're talking about. So if you go back really far: how does the brain work? How does the mind work? Go back far enough and people talk about fluid systems, because of plumbing and water and things like that.

Philosophical Questions in AI Development

01:16:56
Speaker
We get electricity and electrical systems. Computers are becoming more common in the 60s and 70s in universities, and you get the cognitive revolution, the mind as an information-processing machine, because that's what we were starting to use. And so what's interesting now, and it's very weird and meta,
01:17:16
Speaker
is that these AI systems are making us re-analogize and think about human cognition in different ways. All of those surprising aspects are also making us rethink our assumptions. A big thing that's been surprising to me is the ability to really change the way these things work and get them to do different tasks just through a prompt,
01:17:39
Speaker
in addition to the huge amounts of training they got. And that's been really interesting, because the prompt can still seem to radically change their motivations and how they act, et cetera. That's been surprising to me. And so I think this technology is going to push and change our assumptions about what is intelligence, what is computation, what is cognition.
01:18:08
Speaker
I think a lot of the debate, the early stages of the debate that really just started, I would say, last year, the debate you're seeing on Twitter and wherever, is the folk psychology of intelligence and cognition being deeply affected by
01:18:27
Speaker
the surprising aspects of these models. And, you know, I saw, Joe, you'd written in some notes for today something around family resemblances in Wittgenstein. And it's exactly that. Every time we get some new technology, we learn that our assumptions and theories and folk theories about cognition and intelligence were actually impoverished.
01:18:50
Speaker
And this new technology is causing us to go, wait, these things I was thinking you needed, you don't actually need. I don't need to assume all of these things. Yeah, now this is like a whole season's worth of material. Yeah, I know. And I have to declare: Rolf actually put the Wittgenstein in there, but I knew that you would appreciate it because we talked about it. Thanks, Rolf. Thanks, Rolf.
01:19:11
Speaker
I think this is a great place to wrap it, actually, because like Rolf said there, this could be another season. We definitely need to come back and do another one with you, though, Daniel, because this has just been awesome.

Excitement for Future AI Advancements

01:19:22
Speaker
We definitely need to get into the sentience conversation, because that was the other way we could have gone today. And I'm glad we went this way, because I think it was more generative. But now that we've got this base,
01:19:31
Speaker
I think we've got an opportunity to go there, which is great. I'll leave it with one last question, which we like to ask when we have experts on the show: what are you really excited about now? It could be in this field or in adjacent, related technology stuff. What's coming down the pike that you're really excited about? Yeah. I'm a little biased because I'm starting to pay more attention to this at work as well. One of the things I mentioned earlier is that
01:20:02
Speaker
you can do so many things with these models once you give them access to more tools to work with. And I think over the next year or two, a lot of what you're going to see in terms of things that are really going to surprise you
01:20:21
Speaker
are going to come from the ways that these building blocks get used together combined with more old-fashioned pieces of software that they can interact with. I think you're going to see some really interesting and cool
01:20:46
Speaker
very generalizable and generalized-feeling use cases of these models to solve very specific problems that look very unlike just giving you language output. I'll put it this way: you can already get them to write code, you can already get them to
01:21:07
Speaker
generate structured text for you. A really simple example would be Markdown, the most obviously easy one: you can ask them to provide their output in Markdown so you can format everything really nicely. But you can do way more than that. Imagine they can write SQL and then run it against a database. That's a pretty cool example. These things are not far away. They are very close, and it's literally just stitching the software together, maybe with a little bit of extra training.
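[A hedged sketch of the write-SQL-then-run-it loop just described. The language-model call is a placeholder stub, and sqlite3 stands in for whatever database the model would query:]

```python
# Sketch of the loop described above: ask a model to write SQL, then
# actually execute it against a database. llm_write_sql() is a
# placeholder stub; sqlite3 stands in for a real database.
import sqlite3

def llm_write_sql(question: str, schema: str) -> str:
    """Stand-in for a language-model call that turns a question into SQL."""
    return "SELECT name, price FROM products ORDER BY price ASC LIMIT 1;"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("milk", 3.49), ("eggs", 4.29)])

schema = "products(name TEXT, price REAL)"
sql = llm_write_sql("What is the cheapest product?", schema)
for row in conn.execute(sql):  # run the generated SQL against the database
    print(row)  # -> ('milk', 3.49)
```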
01:21:36
Speaker
And so I think that's going to be really interesting in the near term. I am very excited about the next couple of generations of the models as well and what they're going to be able to do. But I think the thing that's going to surprise the most people in the world in the next year or two is going to be just what happens when you give them more interesting, you know, still inside of a computer, but more interesting effectors to work with. Great. That's all. That's a great place to wrap. Thanks, Daniel, for being on the show. Really enjoyed it. And I look forward to having you back. Yeah, thanks for having me.